[issue46830] Add Find functionality to Squeezed Text viewer
New submission from Jeff Cagle : Squeezed text output currently opens in a viewer whose only functionality is scrolling. Adding the Find widget a la IDLE would make the viewer much more useful. -- assignee: terry.reedy components: IDLE messages: 413761 nosy: Jeff.Cagle, terry.reedy priority: normal severity: normal status: open title: Add Find functionality to Squeezed Text viewer type: enhancement versions: Python 3.11 ___ Python tracker <https://bugs.python.org/issue46830> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45435] delete misleading faq entry about atomic operations
Jeff Allen added the comment: Thomas wrote: > it's as part of this discussion in > https://mail.python.org/archives/list/python-...@python.org/thread/ABR2L6BENNA6UPSPKV474HCS4LWT26GY/#IAOCDDCJ653NBED3G2J2YBWD7HHPFHT6 > and others in #python-dev That's where I noticed it, but it seemed the wrong place to explore this way. Steven is right, I'm over-stating the case. And although valid that this is CPython specific, it's well sign-posted and I'm just being thin-skinned. Serhiy writes: > sort() is atomic, even if GIL is released during executing custom __lt__. It > is guaranteed that no operations on the list in other threads can affect the > result of sort(). The strategy noted here: https://github.com/python/cpython/blob/2d21612f0dd84bf6d0ce35bcfcc9f0e1a41c202d/Objects/listobject.c#L2261-L2265 does guarantee that, which I hadn't noticed. What if during the release of the GIL, another thread appends to L? In my simple experiment I get a ValueError and the modifications are lost. I think that is not thread-safe. Serhiy also writes: > I do not understand what non-atomic you see in x = L[i]. The value of x is > determined by values of L and i at the start of the operation. GIL is not > released during indexing L, and if it is released between indexing and > assignment, it does not affect the result. and Steven: > Does that matter though? I think that's a distinction that makes no difference. > We know that another thread could change the L or the i before the assignment, if they are global. But once the L[i] lookup has occurred, it doesn't matter if they change. It's not going to affect what value gets bound to the x. Fair enough. Atomicity is a bit slippery, I find. It depends where the critical region starts. Thinking again, it's not the assignment that's the issue ... 
L is pushed
i is pushed
__getitem__ is called
x is popped

It is possible, if i and L are accessible to another thread and change after L is pushed, that x is given a value composed from an i and an L that never existed concurrently in the view of the other thread. Straining at gnats here, but atomicity is a strong claim. And on the point about re-ordering and CPUs, I can't imagine re-ordering that effectively changes the order of byte codes. But do CPython threads run in separate CPUs, or is that only when we have multiple interpreters? If so, and L were in a hot memory location (either the variable or its content), this could be inconsistent between threads. Sorry, I don't know the memory coherence CPython has: I know I couldn't rely on it in Java. I'm just arguing that the section gives advice that is *nearly* always right, which is a horrible thing to debug. I'll stop stirring. -- ___ Python tracker <https://bugs.python.org/issue45435> ___
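The append-during-sort experiment mentioned above can be made deterministic with events instead of timing. This is a sketch of CPython-specific behavior, observed rather than documented: the sort raises ValueError and the concurrent append is discarded.

```python
import threading

evt_go = threading.Event()
evt_done = threading.Event()

class Sneaky(int):
    def __lt__(self, other):
        # A __lt__ written in Python means the GIL can be released
        # between comparisons, letting another thread run.
        evt_go.set()      # wake the appender thread
        evt_done.wait()   # block until it has appended (releases the GIL)
        return int(self) < int(other)

L = [Sneaky(3), Sneaky(1), Sneaky(2)]

def appender():
    evt_go.wait()
    L.append(99)          # mutate L while sort() is in progress
    evt_done.set()

t = threading.Thread(target=appender)
t.start()
err = None
try:
    L.sort()
except ValueError as e:
    err = e
t.join()
print(err)  # list modified during sort
print(L)    # [1, 2, 3] -- sorted result; the appended 99 is gone
```

So sort() does protect its own result, but a concurrent writer both gets an exception raised at the sorting thread and silently loses its modification.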
[issue45435] delete misleading faq entry about atomic operations
Jeff Allen added the comment: I'm interested in Thomas' reasons, but here are some of mine (as far as I understand things):

1. It is specific to one interpreter implemented in C, equipped with a GIL, and on certain assumptions about the byte code interpreter and the implementation of built-ins, that may not hold long-term.
2. In x = L[i], the index and assignment are distinct actions (in today's byte code), allowing L or i to change before x is assigned. This applies to several of the other examples.
3. A compiler (even a CPU) is free to re-order operations and cache values in unguessable ways, on the assumption of a single thread.
4. Code written on these principles is fragile. It only takes the replacement of a built-in with a sub-class redefining __getitem__ (to support some worthy aim elsewhere in the code) to invalidate it.
5. sort() is not atomic if an element is of a type that overrides comparison in Python. (Nor is modifying a dictionary if __hash__ or __eq__ are redefined.)

If you want to retain the question, with a better answer, the last sentence is good: "When in doubt, use a mutex!", accompanied by "Always be in doubt." -- nosy: +jeff.allen ___ Python tracker <https://bugs.python.org/issue45435> ___
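Point 2 is visible directly in the bytecode; dis shows that the single statement is several interpreter steps (exact opcode names vary across CPython versions):

```python
import dis

# x = L[i] compiles to: load L, load i, subscript, store -- a thread
# switch can occur between any two of these steps.
ops = [ins.opname for ins in dis.get_instructions("x = L[i]")]
print(ops)
```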
[issue44481] Tkinter config() minor documentation bug for shorthand options
New submission from Jeff S : The documentation page https://docs.python.org/3/library/tkinter.html states "Passing the config() method the name of a shorthand option will return a 2-tuple, not 5-tuple." While config() without argument does return a map that yields references like this, if config() is given the shorthand name as an argument, it follows the reference to the long option name and does yield the full 5-tuple. To demonstrate the difference:

    from tkinter import Tk
    Tk().config()['bg']
    Tk().config('bg')

-- components: Tkinter messages: 396301 nosy: spirko priority: normal severity: normal status: open title: Tkinter config() minor documentation bug for shorthand options type: behavior versions: Python 3.6, Python 3.9 ___ Python tracker <https://bugs.python.org/issue44481> ___
[issue44126] Support cross-compiling of cpython modules using setuptools
Change by Jeff Moguillansky : -- status: open -> closed ___ Python tracker <https://bugs.python.org/issue44126> ___
[issue44126] Support cross-compiling of cpython modules using setuptools
Jeff Moguillansky added the comment: Is it possible to add support for cross-compiling of cpython modules to setuptools? It seems that currently there's some 3rd party solutions like "crossenv" but they don't seem to work with clang / ndk-toolchain for example. -- status: closed -> open title: Cross Compile CPython Modules -> Support cross-compiling of cpython modules using setuptools ___ Python tracker <https://bugs.python.org/issue44126> ___
[issue44126] Cross Compile CPython Modules
Jeff Moguillansky added the comment: Thanks for the info, I will forward the question to the setuptools mailing list -- status: open -> closed ___ Python tracker <https://bugs.python.org/issue44126> ___
[issue44126] Cross Compile CPython Modules
Change by Jeff Moguillansky : -- status: closed -> open ___ Python tracker <https://bugs.python.org/issue44126> ___
[issue44126] Cross Compile CPython Modules
Change by Jeff Moguillansky : -- title: Cross Compile Cython Modules -> Cross Compile CPython Modules ___ Python tracker <https://bugs.python.org/issue44126> ___
[issue44126] Cross Compile Cython Modules
Jeff Moguillansky added the comment: Sorry I meant cpython. Distutils is part of cpython? Currently it doesn't seem to support cross compiling? On Thu, May 13, 2021, 1:08 PM Ned Deily wrote: > > Ned Deily added the comment: > > This issue tracker is for issues with cPython and the Python Standard > Library. Cython is a third-party project that is not part of cPython. You > should ask in a Cython forum (see https://cython.org/#development) or a > general forum like StackOverflow. > > -- > nosy: +ned.deily > resolution: -> third party > stage: -> resolved > status: open -> closed > > ___ > Python tracker > <https://bugs.python.org/issue44126> > ___ > -- ___ Python tracker <https://bugs.python.org/issue44126> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue44126] Cross Compile Cython Modules
New submission from Jeff Moguillansky : Hi, I was able to cross-compile Python 3.9.4 for Android. How do I cross-compile cython modules? I found one tool online: https://pypi.org/project/crossenv/ but it doesn't seem to be compatible with android clang? Does cython support cross-compiling modules? -- components: Cross-Build messages: 393599 nosy: Alex.Willmer, jmoguill2 priority: normal severity: normal status: open title: Cross Compile Cython Modules versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue44126> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue43437] venv activate bash script has wrong line endings on windows
New submission from Jeff Moguillansky : when running python.exe -m venv on Windows, It creates several activate scripts. The activate bash script has the wrong line endings (it should be unix-style, not windows-style). Bash scripts should always end with unix style line endings -- components: Windows messages: 388276 nosy: jmoguill2, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: venv activate bash script has wrong line endings on windows type: compile error versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue43437> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
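Until the generated script is fixed, a workaround along these lines can normalize the endings (the helper function and the venv path are ours, not part of venv):

```python
from pathlib import Path

def fix_line_endings(path):
    """Rewrite *path* in place with Unix (LF-only) line endings."""
    p = Path(path)
    p.write_bytes(p.read_bytes().replace(b"\r\n", b"\n"))

# e.g. fix_line_endings("venv/Scripts/activate")  # hypothetical venv location
```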
[issue43173] Python Windows DLL search paths
Jeff Moguillansky added the comment: Thanks for the feedback On Mon, Feb 8, 2021, 8:29 PM Eryk Sun wrote: > > Eryk Sun added the comment: > > > What's the correct way to set the DLL search path when running a python > script? > > If possible, the simplest approach is to put dependent DLLs in the same > directory as the extension module. > > In 3.8+, the search path for the dependent DLLs of a normally imported > extension module includes the following directories: > > * the loaded extension module's directory > * the application directory (e.g. that of python.exe) > * the user DLL search directories that get added by > SetDllDirectory() and AddDllDirectory(), such as with > os.add_dll_directory() > * %SystemRoot%\System32 > > Note that the above list does not include the current working directory or > %PATH% directories. > > > It would be helpful if it listed the actual name of > > the DLL that it cannot find. > > WinAPI LoadLibraryExW() doesn't have an out parameter to get the missing > DLL or procedure name that caused the call to fail. All we have is the > error code to report, such as ERROR_MOD_NOT_FOUND (126) and > ERROR_PROC_NOT_FOUND (127). Using a debugger, you can see the name of the > missing DLL or procedure if loader snaps are enabled for the application. > > -- > nosy: +eryksun > > ___ > Python tracker > <https://bugs.python.org/issue43173> > ___ > -- ___ Python tracker <https://bugs.python.org/issue43173> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
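A defensive sketch of the os.add_dll_directory() route mentioned above (the helper name is ours; it degrades to a no-op on non-Windows platforms or on Python versions before 3.8):

```python
import os

def register_dll_dir(path):
    """Add *path* to the Windows DLL search path (3.8+); no-op elsewhere.

    Returns True if the directory was actually registered.
    """
    add = getattr(os, "add_dll_directory", None)  # Windows-only attribute
    if add is not None and os.path.isdir(path):
        add(path)
        return True
    return False

# Typical use before importing an extension module whose dependent DLLs
# live in a sibling directory (layout is hypothetical):
# register_dll_dir(os.path.join(os.path.dirname(__file__), "bin"))
```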
[issue43173] Python Windows DLL search paths
New submission from Jeff Moguillansky : Hi, What's the correct way to set the DLL search path when running a python script? It seems that it doesn't search the directories specified in PATH environment variable. FYI: For debugging the DLL loading issues, I'm using "Process Monitor" from sysinternals: https://docs.microsoft.com/en-us/sysinternals/downloads/procmon, but please share any tips if you have a better approach. Also, the Python error message is not very informative: when loading a python module (built using Cython), if it fails to load a specific DLL, it says "import module failed, DLL not found" but it doesn't say the name of the actual DLL that is not found. It would be helpful if it listed the actual name of the DLL that it cannot find. -- components: Windows messages: 386688 nosy: jmoguill2, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Python Windows DLL search paths type: behavior versions: Python 3.10, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue43173> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42894] Debugging native Python modules on Windows with Visual Studio Toolchain
New submission from Jeff Moguillansky : I have a question regarding debugging native Python modules on Windows, with Visual Studio toolchain: Currently I have a native module (native C code), along with Python API bindings (via Cython), and finally Python code that invokes the native module. I also use various third-party python modules like Pillow, etc. In order to debug on Windows, I have to use the following tricks: 1) Build the native module in Release Mode 2) Disable Compiler Optimization 3) Enable Debug symbols I can't just use Python distutils out of the box, I have to manually modify the build commands to enable Debugging. If I just try to build the native module in Debug mode, I get Visual Studio compile errors related to: not being able to mix code built with different C++ runtime libraries. Some of the 3rd-party Python modules are only available as Release builds (not Debug builds). I'm wondering if anyone has encountered a similar issue, and what's your advice? On Linux, GNU toolchain, this isn't an issue. The toolchain lets you mix release and debug libraries, no problem. -- components: C API messages: 384853 nosy: jmoguill2 priority: normal severity: normal status: open title: Debugging native Python modules on Windows with Visual Studio Toolchain type: compile error versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue42894> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue42863] Python venv inconsistent folder structure on windows
Jeff Moguillansky added the comment: Thanks for the feedback, I understand -- ___ Python tracker <https://bugs.python.org/issue42863> ___
[issue42863] Python venv inconsistent folder structure on windows
Jeff Moguillansky added the comment: Maybe we can consider adding additional params and a new code path to python -m venv? This way we don't break any existing functionality? e.g. -includedir=... -libdir=... -bindir=... ? -- ___ Python tracker <https://bugs.python.org/issue42863> ___
[issue42863] Python venv inconsistent folder structure on windows
Jeff Moguillansky added the comment: To give more context regarding this issue: I'm currently using meson build system to generate the pkg-config files, and it seems that the paths "include", "lib" are hardcoded. From the perspective of the overall system, I think it would simplify integration and reduce complexity if we normalize folder structures across platforms instead of having different folder structures. -- ___ Python tracker <https://bugs.python.org/issue42863> ___
[issue42863] Python venv inconsistent folder structure on windows
New submission from Jeff Moguillansky : When creating a virtual environment on windows using venv, the folder structure: "Scripts", "Include", "Lib", is inconsistent with other platforms (e.g. "include", "lib", "bin", etc). This causes various integration issues. For example, suppose we want to build a native C library, and install it to the folder structure generated by the virtual environment. The pkg-config file assumes a folder structure of "include", "lib", "bin", and the generated pkg-config files are inconsistent with the python virtual environment folder structure. Can we have a consistent folder structure across platforms? -- components: Windows messages: 384628 nosy: jmoguill2, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Python venv inconsistent folder structure on windows type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue42863> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
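One way to see the divergence from inside Python is sysconfig, which exposes the scheme-dependent directory names; the printed values differ between POSIX installs and Windows venvs (where they are Scripts/Include/Lib):

```python
import sysconfig

# Scheme-relative install directories; spellings differ between the
# posix and nt install schemes.
for key in ("scripts", "include", "platlib"):
    print(key, "->", sysconfig.get_path(key))
```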
[issue42338] Enable Debug Build For Python Native Modules in Windows, with Visual Studio Toolchain
New submission from Jeff Moguillansky : Hi, We developed a Python module that interfaces with native code via Cython. We currently build on Windows with Visual Studio Toolchain. We encounter the following issues when trying to build a debug version: 1) 3rd party modules installed via PIP are Release mode, but Visual Studio toolchain doesn't allow to mix Debug and Release libs. To workaround this issue, we build our module in "Release" mode, with debug symbols enabled, and with compiled optimization disabled (essentially a hack). 2) To build our module we currently use the following hack: step 1: run python.exe setup.py build --compiler=msvc step 2: extract the output step 3: change /Ox to /Od (disable compiler optimization) add /Zi flag to compiler flags (enable debug symbols) add /DEBUG flag to linker flags Please advise what is the best solution? -- components: Build messages: 380861 nosy: jmoguill2 priority: normal severity: normal status: open title: Enable Debug Build For Python Native Modules in Windows, with Visual Studio Toolchain type: compile error versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue42338> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
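The flag changes in step 3 can be expressed up front instead of patching generated commands. A sketch, assuming setuptools and a module named native_mod (both hypothetical):

```python
import sys
from setuptools import Extension

# MSVC-only flags: /Od disables optimization, /Zi and /DEBUG emit debug
# symbols, while still linking against the Release C runtime.
extra_compile_args = ["/Od", "/Zi"] if sys.platform == "win32" else []
extra_link_args = ["/DEBUG"] if sys.platform == "win32" else []

ext = Extension(
    "native_mod",              # hypothetical module name
    sources=["native_mod.c"],  # hypothetical source file
    extra_compile_args=extra_compile_args,
    extra_link_args=extra_link_args,
)
# Pass ext_modules=[ext] to setuptools.setup() in setup.py.
```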
[issue41708] request make uninstall target
New submission from Jeff Scheibly : I went through and ran make altinstall from the Python3.8.3.tar.gz; after running ./configure --enable-optimizations, I ran make altinstall, which was successful. Would it be possible to get an uninstall target added, so that in the case you may need a different version the uninstall would be super quick. -- components: Installation messages: 376325 nosy: jeffs priority: normal severity: normal status: open title: request make uninstall target versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue41708> ___
[issue40320] Add ability to specify instance of contextvars context to Task() & asyncio.create_task()
New submission from Jeff Laughlin : As a test engineer I want to be able to run async test fixtures and test cases in different async tasks with the same Context. Not a copy; the same specific instance of contextvars.Context(). I do NOT want the task to run in a COPY of the context because I want mutations to the context to be preserved so that I can pass the mutated context into another async task. I do NOT want the task to inherit the potentially polluted global context. class Task currently unconditionally copies the current global context and has no facility for the user to override the context. Therefore I propose adding a context argument to the Task constructor and to create_task(). It should be noted that this argument should not be used for "normal" development and only for "weird" stuff like test frameworks. I should also note that Context().run() is not useful here because it is not async and there is no obvious existing async equivalent. This proposal would be roughly equivalent. I should also note that a hack like copying the items from one context to another will not work because it breaks ContextVar set/reset. I tried this. It was a heroic failure. It must be possible to run a task with an existing instance of context and not a copy. Here is a real-world use case: https://github.com/pytest-dev/pytest-asyncio/pull/153/files Here is the hacked Task constructor I cooked up:

    class Task(asyncio.tasks._PyTask):
        def __init__(self, coro, *, loop=None, name=None, context=None):
            ...
            self._context = context if context is not None else copy_context()
            self._loop.call_soon(self.__step, context=self._context)
            asyncio._register_task(self)

If folks are on board I can do a PR -- components: asyncio messages: 366722 nosy: Jeff.Laughlin, asvetlov, yselivanov priority: normal severity: normal status: open title: Add ability to specify instance of contextvars context to Task() & asyncio.create_task() type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 ___ Python tracker <https://bugs.python.org/issue40320> ___
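The copy semantics being objected to take only a few lines to demonstrate (for reference, Python 3.11 eventually added a context argument to Task() and create_task() much like this proposal):

```python
import asyncio
import contextvars

var = contextvars.ContextVar("var", default="unset")

async def fixture():
    # Each Task runs in a *copy* of the creating context (copy_context()),
    # so this set() is invisible to the caller afterwards.
    var.set("from fixture")

async def main():
    await asyncio.create_task(fixture())
    return var.get()

result = asyncio.run(main())
print(result)  # unset -- the fixture's mutation was lost
```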
[issue39959] (Possible) bug on multiprocessing.shared_memory
Jeff Fischer added the comment: I've run into the same problem. It appears that the SharedMemory class is assuming that all clients of a segment are child processes from a single parent, and that they inherit the same resource_tracker. If you run separate, unrelated processes, you get a separate resource_tracker for each process. Then, when a process does a close() followed by a sys.exit(), the resource tracker detects a leak and unlinks the segment. In my application, segment names are stored on the local filesystem and a specific process is responsible for unlinking the segment when it is shut down. I was able to get this model to work with the current SharedMemory implementation by having processes which are just doing a close() also call resource_tracker.unregister() directly to prevent their local resource trackers from destroying the segment. I imagine the documentation needs some discussion of the assumed process model and either: 1) a statement that you need to inherit the resource tracker from a parent process, 2) a blessed way to call the resource tracker to manually unregister, or 3) a way to disable the resource tracker when creating the SharedMemory object. -- nosy: +jfischer ___ Python tracker <https://bugs.python.org/issue39959> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
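A sketch of the unregister workaround described in the comment, compressed into one process for illustration; resource_tracker is a private, undocumented module, so this is POSIX-oriented and version-dependent:

```python
# Private API warning: resource_tracker is internal to CPython (3.8+).
from multiprocessing import resource_tracker, shared_memory

# "Owner" side: the process responsible for eventually unlinking.
owner = shared_memory.SharedMemory(create=True, size=16)
owner.buf[0] = 42

# "Client" side (normally a separate, unrelated process): attach by name.
client = shared_memory.SharedMemory(name=owner.name)
value = client.buf[0]
client.close()
# The workaround: tell this process's resource tracker to forget the
# segment, so it is not unlinked when the client exits.
resource_tracker.unregister(client._name, "shared_memory")

# The designated owner destroys the segment deliberately at shutdown.
owner.close()
owner.unlink()
print(value)  # 42
```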
[issue39639] Remove Suite node from AST
Jeff Allen added the comment: Jython uses the reference grammar and ASDL as a way to ensure it is Python we approximate, not some subtly different language. The presence of Suite here gives rise to a class (https://github.com/jythontools/jython/blob/v2.7.2b3/src/org/python/antlr/ast/Suite.java) and we actually use instances of it in the compiler (https://github.com/jythontools/jython/blob/v2.7.2b3/src/org/python/compiler/CodeCompiler.java#L2389). It is a bit of a wart, to have a Jython-specific type here: somewhat defeating the object of using the same source. I expect there was a good reason: perhaps there was no better way to express the commonality between Interactive and Module. It was all before my involvement. I would try to avoid needing it in Jython 3, and if we can't, it doesn't look hard to manage the variation our copy. It's not like we copy these files mechanically from from CPython during a build. +1 on removing it. -- nosy: +jeff.allen ___ Python tracker <https://bugs.python.org/issue39639> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39469] Support for relative home path in pyvenv.cfg
Jeff Edwards added the comment: Interesting, I hadn’t realized that it would embed the FQ Executable path, but that does make sense overall. I guess I had always planned on fixing the ‘bin’ directory anyway afterwards, it’s just that the lack of relative home made it significantly harder to encapsulate multiple environments running with the same interpreter without having to do a complete reinstall, and venv did seem like the best and most-pythonic way to do it. I’ll think about it a bit more On Tue, Jan 28, 2020 at 2:33 PM Eryk Sun wrote: > > Eryk Sun added the comment: > > > Suffice to say, is there a significant reason to not allow it? > > It's poorly supported by packaging. In particular, relocating an > environment isn't supported with entry-point scripts, which pip installs > with a fully-qualified shebang. Moreover, entry-point scripts in Windows > are created as exe files (e.g. "pip.exe") that embed the fully-qualified > path of python[w].exe in the environment, plus a zipped __main__.py. For > example, given an environment at "C:\Temp\env", running > "C:\Temp\env\Scripts\pip.exe" in turn spawns a child process with the > command line: "C:\Temp\env\Scripts\python.exe" > "C:\Temp\env\Scripts\pip.exe". This breaks if the environment is renamed or > relocated. > > -- > nosy: +eryksun > > ___ > Python tracker > <https://bugs.python.org/issue39469> > ___ > -- ___ Python tracker <https://bugs.python.org/issue39469> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39469] Support for relative home path in pyvenv.cfg
Jeff Edwards added the comment: I would say they’re not designed to be, but the also aren’t designed to not be portable. This is often useful where open network access isn’t reasonable, so access to Pip/pipx/pipenv is limited at best. Suffice to say, is there a significant reason to not allow it? On Tue, Jan 28, 2020 at 10:28 AM Brett Cannon wrote: > > Brett Cannon added the comment: > > Do note that virtual environments are not designed to be portable in > general, so this would be a fundamental change in the design and purpose of > virtual environments. > > -- > nosy: +brett.cannon, vinay.sajip > > ___ > Python tracker > <https://bugs.python.org/issue39469> > ___ > -- ___ Python tracker <https://bugs.python.org/issue39469> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue39469] Support for relative home path in pyvenv.cfg
New submission from Jeff Edwards : Currently, the interpreter only supports absolute paths for the 'home' directory in the pyvenv.cfg file. While this works when the interpreter is always installed at a fixed location, it impacts the portability of virtual environments and can make it notably more-difficult if multiple virtual environments are shipped with a shared interpreter and are intended to be portable and working in any directory. Many of these issues can be solved for if 'home' can use a directory relative to the directory of the pyvenv.cfg file. This is detected by the presence of a starting '.' in the value. A common use-case for this is that a script-based tool (e.g. black or supervisor) may be shipped with a larger portable application where they are intended to share the same interpreter (to save on deployment size), but may have conflicting dependencies. Since the application only depends on the executable scripts, those packages could be packaged into their own virtual environments with their dependencies. -- components: Interpreter Core messages: 360800 nosy: Jeff.Edwards priority: normal severity: normal status: open title: Support for relative home path in pyvenv.cfg type: enhancement versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue39469> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
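Under this proposal, a relocatable environment shipped next to its interpreter could carry a pyvenv.cfg along these lines (hypothetical syntax and values; today only an absolute home is honored):

```
home = ../runtime/python
include-system-site-packages = false
version = 3.9.0
```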
[issue38926] MacOS: 'Install certificates.command' has no effect
New submission from Jeff Berkowitz : After using the Python-supported installer to install 3.8.0 on my employer-owned Mac running High Sierra (10.13.6), the 'Install Certificates.command' had no apparent effect on the behavior of Python. The behavior before executing the script was that a Python program using urllib3 was unable to verify the public certificate of github.com. Using curl, I could download via the desired URL. But the Python program could not, consistently throwing SSL verify errors instead. I ran the command script several times. I verified that the symlink cert.pem was created in /Library/Frameworks/Python.framework/Versions/3.8/etc/openssl and that it contained "../../lib/python3.8/site-packages/certifi/cacert.pem" and I verified that the latter file had content (4558 lines) and was readable. And that it did contain the root cert for Github. But despite that and despite multiple new shell windows and so on, I could never get Python to regard the certs. I eventually worked around this by: export SSL_CERT_FILE=/etc/ssl/cert.pem. After this, the Python program using urllib3 could verify Github.com's public cert. But as I understand things, this env var is actually regarded by the OpenSSL "C" library itself, not Python.(?) Which, if true, raises the question of why this was necessary. Of course, there's quite likely something in my environment that is causing this. But it would be nice to know what. -- components: macOS messages: 357549 nosy: Jeff Berkowitz, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: MacOS: 'Install certificates.command' has no effect type: behavior versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue38926> ___
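When chasing this kind of mismatch, it helps to ask the ssl module where OpenSSL is actually looking; a quick probe (the reported paths reflect SSL_CERT_FILE/SSL_CERT_DIR when they are set):

```python
import os
import ssl

paths = ssl.get_default_verify_paths()
print("cafile found:", paths.cafile)              # usable CA file, or None
print("compiled-in cafile:", paths.openssl_cafile)
print("env override:", paths.openssl_cafile_env,
      "=", os.environ.get(paths.openssl_cafile_env))
```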
[issue37886] PyStructSequence_UnnamedField not exported
Jeff Robbins added the comment: Editing one line in structseq.h seems to fix the issue. I changed this

    extern char* PyStructSequence_UnnamedField;

to

    PyAPI_DATA(char*) PyStructSequence_UnnamedField;

rebuilt, and now my C extension can use PyStructSequence_UnnamedField. -- Added file: https://bugs.python.org/file48551/structseq.h ___ Python tracker <https://bugs.python.org/issue37886> ___
[issue37886] PyStructSequence_UnnamedField not exported
New submission from Jeff Robbins : Python 3.8.0b3 has the fixed https://docs.python.org/3/c-api/tuple.html#c.PyStructSequence_NewType, but one of the documented features of PyStructSequence is the special https://docs.python.org/3/c-api/tuple.html#c.PyStructSequence_UnnamedField which is meant to be used for unnamed (and presumably also "hidden") fields. However, this variable is not "exported" (via __declspec(dllexport) or the relevant Python C macro) and so my C extension cannot "import" it and use it. My guess is that this passed testing because the only tests using it are internal modules linked into python38.dll, which are happy with the `extern` in the header: Include\structseq.h:extern char* PyStructSequence_UnnamedField; -- components: Windows messages: 349956 nosy: je...@livedata.com, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: PyStructSequence_UnnamedField not exported type: compile error versions: Python 3.8 ___ Python tracker <https://bugs.python.org/issue37886> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35662] Windows #define _PY_EMULATED_WIN_CV 0 bug
Jeff Robbins added the comment: Steve, sorry to be dense, but I'm unfortunately ignorant as to what tests I ought to be running. The only test I have right now is much too complicated, and I'd rather be running some official regression test that reveals the problem without my app code, if possible. -- ___ Python tracker <https://bugs.python.org/issue35662> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35662] Windows #define _PY_EMULATED_WIN_CV 0 bug
Jeff Robbins added the comment: Steve, I did some more digging into why the native condition variable approach might be causing problems on Windows. Testing my fix revealed that there is at least one place in Modules\overlapped.c that either a) waits too long to call GetLastError(), or b) reveals an expectation that Py_END_ALLOW_THREADS won't change the results of GetLastError().

    Py_BEGIN_ALLOW_THREADS
    ret = GetQueuedCompletionStatus(CompletionPort, &NumberOfBytes,
                                    &CompletionKey, &Overlapped, Milliseconds);
    save_err = GetLastError();
    Py_END_ALLOW_THREADS

    err = ret ? ERROR_SUCCESS : GetLastError();

The problem in this code is that it allows *other* Windows API calls between the original Windows API call (in this case GetQueuedCompletionStatus()) and the call to GetLastError(). If those other Windows API calls change the thread-specific GetLastError state, the info we need is lost. To test for this possibility, I added a diagnostic test right after the code above:

    if (!ret && (err != save_err)) {
        printf("GetQueuedCompletionStatus returned 0 but we lost the error=%d lost=%d Overlapped=%d\n",
               save_err, err, (long)Overlapped);
    }

and ran a test that eventually produced this on the console:

    GetQueuedCompletionStatus returned 0 but we lost the error=258 lost=0 Overlapped=0

Error 258 is WAIT_TIMEOUT. The next lines of code are looking for that "error" in order to decide if GetQueuedCompletionStatus failed, or merely timed out:

    if (Overlapped == NULL) {
        if (err == WAIT_TIMEOUT)
            Py_RETURN_NONE;
        else
            return SetFromWindowsErr(err);
    }

So the impact of this problem is severe. Instead of returning None to the caller (in this case _poll in asyncio\windows_events.py), it will raise an error!
    while True:
        status = _overlapped.GetQueuedCompletionStatus(self._iocp, ms)
        if status is None:
            break

And, to make things extra confusing, the error raised via SetFromWindowsErr(err) (where err == 0) ends up looking like this:

    OSError: [WinError 0] The operation completed successfully

A valid WAIT_TIMEOUT thus gets converted to a Python error, but also loses the original Windows error code of 258, so you are left scratching your head about how a WinError 0 (ERROR_SUCCESS) could have crashed your call to, say, asyncio.run()? (See traceback below.) So either we need to make sure that all calls to GetLastError() are made immediately after the relevant Windows API call, without any intervening other Windows API calls, and thereby prevent case a) above, or, as in case b), the GIL code (using either emulated or native condition variables from condvar.h) needs to preserve the error state. Some code in Python\thread_nt.h in fact does this already, e.g.:

    void *
    PyThread_get_key_value(int key)
    {
        /* because TLS is used in the Py_END_ALLOW_THREAD macro,
         * it is necessary to preserve the windows error state, because
         * it is assumed to be preserved across the call to the macro.
         * Ideally, the macro should be fixed, but it is simpler to
         * do it here.
         */
        DWORD error = GetLastError();
        void *result = TlsGetValue(key);
        SetLastError(error);
        return result;
    }

Of course there might be *other* problems associated with using native condition variables on Windows, but this is the only one I've experienced after some fairly heavy use of Python 3.7.2 asyncio on Windows.
traceback:

    asyncio.run(self.main())
      File "C:\Users\jeffr\Documents\projects\Python-3.7.2\lib\asyncio\runners.py", line 43, in run
        return loop.run_until_complete(main)
      File "C:\Users\jeffr\Documents\projects\Python-3.7.2\lib\asyncio\base_events.py", line 571, in run_until_complete
        self.run_forever()
      File "C:\Users\jeffr\Documents\projects\Python-3.7.2\lib\asyncio\base_events.py", line 539, in run_forever
        self._run_once()
      File "C:\Users\jeffr\Documents\projects\Python-3.7.2\lib\asyncio\base_events.py", line 1739, in _run_once
        event_list = self._selector.select(timeout)
      File "C:\Users\jeffr\Documents\projects\Python-3.7.2\lib\asyncio\windows_events.py", line 405, in select
        self._poll(timeout)
      File "C:\Users\jeffr\Documents\projects\Python-3.7.2\lib\asyncio\windows_events.py", line 703, in _poll
        status = _overlapped.GetQueuedCompletionStatus(self._iocp, ms)
    OSError: [WinError 0] The operation completed successfully

-- ___ Python tracker <https://bugs.python.org/issue35662> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35580] Windows IocpProactor: CreateIoCompletionPort 4th arg 0xffffffff -- why is this value the default?
Jeff Robbins added the comment: I don't understand why 0 would be safer. Since asyncio can only service this IOCP from its single threaded event loop, I would have thought 1 would express the intent better. Why not convey to the OS what we're up to, in case that helps it do a better job or reduces resource footprint? -- ___ Python tracker <https://bugs.python.org/issue35580> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35909] Zip Slip Vulnerability
Jeff Knupp added the comment: According to https://snyk.io/research/zip-slip-vulnerability (the source of the paper), Python hasn't been vulnerable since 2014. -- nosy: +jeffknupp ___ Python tracker <https://bugs.python.org/issue35909> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35662] Windows #define _PY_EMULATED_WIN_CV 0 bug
Jeff Robbins added the comment: I searched harder. :-) https://bugs.python.org/issue29871 I see that someone else already noticed this broken function, but I guess it was left broken because of other issues with using condition variables instead of the emulated ones? Still, the code is wrong as written... Jeff On Sat, Jan 5, 2019, 1:11 AM Steve Dower > Steve Dower added the comment: > > There's an existing issue for this somewhere - we've tried a couple times > to switch over and run into various issues. I'm not in a place to find it > right now, but worth looking. > > -- > > ___ > Python tracker > <https://bugs.python.org/issue35662> > ___ > -- ___ Python tracker <https://bugs.python.org/issue35662> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35662] Windows #define _PY_EMULATED_WIN_CV 0 bug
Jeff Robbins added the comment: I did a search and couldn't find exactly this issue. This issue is about a broken function. It is broken because it treats a timeout as a fatal error which crashes your Python program. I supplied a proposed fix for the function. If there are other known issues or tests, happy to dig in. Seems a shame that Python 3 on Windows needs to be running on emulated condition variables when the OS has (apparently) working actual ones. Jeff On Sat, Jan 5, 2019, 1:11 AM Steve Dower > Steve Dower added the comment: > > There's an existing issue for this somewhere - we've tried a couple times > to switch over and run into various issues. I'm not in a place to find it > right now, but worth looking. > > -- > > ___ > Python tracker > <https://bugs.python.org/issue35662> > ___ > -- ___ Python tracker <https://bugs.python.org/issue35662> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35662] Windows #define _PY_EMULATED_WIN_CV 0 bug
New submission from Jeff Robbins : Python 3.x defaults to using emulated condition variables on Windows. I tested a build with native Windows condition variables (#define _PY_EMULATED_WIN_CV 0), and found a serious issue. The problem is in condvar.h, in this routine:

    /* This implementation makes no distinction about timeouts.  Signal
     * 2 to indicate that we don't know.
     */
    Py_LOCAL_INLINE(int)
    PyCOND_TIMEDWAIT(PyCOND_T *cv, PyMUTEX_T *cs, long long us)
    {
        return SleepConditionVariableSRW(cv, cs, (DWORD)(us/1000), 0) ? 2 : -1;
    }

The issue is that SleepConditionVariableSRW returns FALSE in the timeout case, so PyCOND_TIMEDWAIT returns -1 in that case. But COND_TIMED_WAIT, which calls PyCOND_TIMEDWAIT, in ceval_gil.h, fatals(!) on a negative return value:

    #define COND_TIMED_WAIT(cond, mut, microseconds, timeout_result) \
    { \
        int r = PyCOND_TIMEDWAIT(&(cond), &(mut), (microseconds)); \
        if (r < 0) \
            Py_FatalError("PyCOND_WAIT(" #cond ") failed"); \

I'd like to suggest that we use the documented behavior of the OS API call already being used (SleepConditionVariableSRW, https://docs.microsoft.com/en-us/windows/desktop/api/synchapi/nf-synchapi-sleepconditionvariablesrw) and return 0 on regular success and 1 on timeout, like in the _POSIX_THREADS case.

    """
    Return Value
    If the function succeeds, the return value is nonzero.
    If the function fails, the return value is zero. To get extended error
    information, call GetLastError.
    If the timeout expires the function returns FALSE and GetLastError
    returns ERROR_TIMEOUT.
    """

I've tested this rewrite -- the main difference is in the FALSE case: check GetLastError() for ERROR_TIMEOUT and then *do not* treat this as a fatal error.
    /*
     * PyCOND_TIMEDWAIT, in addition to returning negative on error,
     * thus returns 0 on regular success, 1 on timeout
     */
    Py_LOCAL_INLINE(int)
    PyCOND_TIMEDWAIT(PyCOND_T *cv, PyMUTEX_T *cs, long long us)
    {
        BOOL result = SleepConditionVariableSRW(cv, cs, (DWORD)(us / 1000), 0);
        if (result)
            return 0;
        if (GetLastError() == ERROR_TIMEOUT)
            return 1;
        return -1;
    }

I've attached the test I ran to reproduce the crash. -- components: Windows files: thread_test2.py messages: 333036 nosy: je...@livedata.com, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows #define _PY_EMULATED_WIN_CV 0 bug type: crash versions: Python 3.7 Added file: https://bugs.python.org/file48024/thread_test2.py ___ Python tracker <https://bugs.python.org/issue35662> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35599] asyncio windows_events.py IocpProactor bug
Jeff Robbins added the comment: This issue is likely a duplicate of https://bugs.python.org/issue34323 which was reported in Python 3.5. -- ___ Python tracker <https://bugs.python.org/issue35599> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35599] asyncio windows_events.py IocpProactor bug
New submission from Jeff Robbins : The close() method of IocpProactor in windows_events.py has this code:

    while self._cache:
        if not self._poll(1):
            logger.debug('taking long time to close proactor')

The bug is that self._poll() has *no* return statements in it, and so returns None no matter what. Which makes the "if not" part confusing, at best. At worst, it might reflect a disconnect with the author's intent. I added a bit more logging and re-ran my test:

    while self._cache:
        logger.debug('before self._poll(1)')
        if not self._poll(1):
            logger.debug('taking long time to close proactor')
            logger.debug(f'{self._cache}')

logger output:

    20:16:30.247 (D) MainThread asyncio: before self._poll(1)
    20:16:30.248 (D) MainThread asyncio: taking long time to close proactor
    20:16:30.249 (D) MainThread asyncio: {}

Obviously 1 millisecond isn't "taking a long time to close proactor". Also of interest, the _cache is now empty. I think the intent of the author must have been to check whether the call to ._poll() cleared out any possible pending futures, or waited the full 1 second. Since ._poll() doesn't return any value to differentiate whether it waited the full wait period or not, the "if" is wrong, and, I think, the intent of the author isn't met by this code. But, separate from speculating on "intent", the debug output of "taking a long time to close proactor" seems wrong, and the .close() code seems disassociated from the implementation of ._poll() in the same class IocpProactor in windows_events.py. -- components: asyncio messages: 332632 nosy: asvetlov, je...@livedata.com, yselivanov priority: normal severity: normal status: open title: asyncio windows_events.py IocpProactor bug versions: Python 3.7 ___ Python tracker <https://bugs.python.org/issue35599> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
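The "if not" problem is easy to reproduce outside asyncio; a minimal stand-in (class and method here are hypothetical, only the return behavior mirrors _poll) shows why the branch is taken unconditionally:

```python
class FakeProactor:
    """Hypothetical stand-in for IocpProactor."""
    def _poll(self, timeout=None):
        # Like the real _poll in windows_events.py, this method has no
        # return statement, so it implicitly returns None every time.
        pass

p = FakeProactor()
# "not None" is always True, so the debug branch fires on every iteration,
# regardless of whether the poll cleared the cache or timed out.
if not p._poll(1):
    print('taking long time to close proactor')  # always printed
```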
[issue35580] Windows IocpProactor: CreateIoCompletionPort 4th arg 0xffffffff -- why is this value the default?
Jeff Robbins added the comment: Per https://stackoverflow.com/questions/38133870/how-the-parameter-numberofconcurrentthreads-is-used-in-createiocompletionport, it seems that `NumberOfConcurrentThreads` controls what happens when multiple threads call `GetQueuedCompletionStatus`. But since a given instance of `IocpProactor` only calls `GetQueuedCompletionStatus` from a single thread, probably this arg doesn't matter, and the value `1` would be more explicit about the pattern asyncio is using? A huge number is, presumably, either not relevant or, at worst, wasteful of some kernel resource. Am I correct that only one thread calls `GetQueuedCompletionStatus` on a given `iocp` object in asyncio under Windows `IocpProactor`? -- ___ Python tracker <https://bugs.python.org/issue35580> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35580] Windows IocpProactor: CreateIoCompletionPort 4th arg 0xffffffff -- why is this value the default?
New submission from Jeff Robbins : By default, the __init__ function of IocpProactor in windows_events.py calls CreateIoCompletionPort with a 4th argument of 0xffffffff, yet MSDN doesn't document this as a valid argument. https://docs.microsoft.com/en-us/windows/desktop/fileio/createiocompletionport It looks like the 4th arg (NumberOfConcurrentThreads) is meant to be either a positive integer or 0. 0 is a special value meaning "If this parameter is zero, the system allows as many concurrently running threads as there are processors in the system." Why does asyncio use 0xffffffff instead as the default value? -- components: asyncio messages: 332498 nosy: asvetlov, je...@livedata.com, yselivanov priority: normal severity: normal status: open title: Windows IocpProactor: CreateIoCompletionPort 4th arg 0xffffffff -- why is this value the default? versions: Python 3.7 ___ Python tracker <https://bugs.python.org/issue35580> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16038] ftplib: unlimited readline() from connection
Jeff Dafoe added the comment: I have a question about this old patch, as it just came down in a CentOS 6 update. I think the patch is applied to the data channel in ASCII mode and not just the control channel. On the data channel in ASCII mode, there should be no assumption of maximum line length before EOL. I saw that your current value came from vsftpd's header file. I'm guessing if you look at the implementation, it's either only applied to the control channel or it's just used to set a single read size inside of a loop. Examples of ASCII mode files that can exceed nearly any MAXLINE value without EOL are XML files or EDI files. -- nosy: +Jeff Dafoe ___ Python tracker <https://bugs.python.org/issue16038> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
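For reference, the limit the patch introduced is exposed as a class attribute, so callers retrieving long-line ASCII data can raise it themselves rather than waiting for a distro update (the value below is arbitrary); for data that isn't really line-oriented, retrbinary() sidesteps the limit entirely:

```python
import ftplib

# ftplib applies a per-line cap when reading responses and ASCII-mode data
# via retrlines(); exceeding it raises ftplib.Error. The cap is the
# FTP.maxline class attribute added by the patch discussed above.
print(ftplib.FTP.maxline)   # the default limit, in bytes
ftplib.FTP.maxline = 65536  # arbitrary larger cap for long-line ASCII data
```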
[issue33224] "RuntimeError: generator raised StopIteration" in difflib.mdiff
New submission from Jeff Kaufman : With python built at HEAD (c51d8c9b) and at 3.7b3 (fcd4e03e08) the code:

    import difflib
    for fromdata, todata, flag in difflib._mdiff(
            ["2"], ["3"], 1):
        pass

produces:

    Traceback (most recent call last):
      File "/home/jefftk/cpython/Lib/difflib.py", line 1638, in _mdiff
        from_line, to_line, found_diff = next(line_pair_iterator)
    StopIteration

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/home/jefftk/icdiff/repro.py", line 3, in <module>
        ["2"], ["3"], 1):
    RuntimeError: generator raised StopIteration

In python 3.5 and 3.6 I don't get an error. This is probably due to https://bugs.python.org/issue32670 which implements PEP 479, but I think this isn't supposed to happen in library code? -- components: Library (Lib) messages: 314936 nosy: Jeff.Kaufman priority: normal severity: normal status: open title: "RuntimeError: generator raised StopIteration" in difflib.mdiff type: behavior versions: Python 3.7 ___ Python tracker <https://bugs.python.org/issue33224> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
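This is indeed PEP 479 at work: from 3.7 on, a StopIteration escaping a generator frame is replaced by RuntimeError. A stripped-down sketch of the same pattern difflib._mdiff uses (calling next() on an inner iterator inside a generator body; function names here are made up):

```python
def inner():
    return
    yield  # unreachable; just makes inner() a generator that yields nothing

def outer():
    it = inner()
    while True:
        # next() raises StopIteration inside a generator frame; under
        # PEP 479 the interpreter converts it to RuntimeError instead of
        # silently ending outer().
        yield next(it)

try:
    list(outer())
except RuntimeError as e:
    print(type(e).__name__, e)
```

The fix on the difflib side is to catch StopIteration around the next() call and return from the generator explicitly.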
[issue33088] Cannot pass a SyncManager proxy to a multiprocessing subprocess on Windows
New submission from Jeff DuMonthier : The following simple example code creates a started SyncManager and passes it as an argument to a subprocess started with multiprocessing.Process(). It works on Linux and Mac OS but fails on Windows.

    import multiprocessing as mp

    def subProcFn(m1):
        pass

    if __name__ == "__main__":
        __spec__ = None
        m1 = mp.Manager()
        p1 = mp.Process(target=subProcFn, args=(m1,))
        p1.start()
        p1.join()

This is the traceback in Spyder:

    runfile('D:/ManagerBug.py', wdir='D:')
    Traceback (most recent call last):
      File "", line 1, in <module>
        runfile('D:/ManagerBug.py', wdir='D:')
      File "...\anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile
        execfile(filename, namespace)
      File "...\anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
        exec(compile(f.read(), filename, 'exec'), namespace)
      File "D:/ManagerBug.py", line 22, in <module>
        p1.start()
      File "...\anaconda3\lib\multiprocessing\process.py", line 105, in start
        self._popen = self._Popen(self)
      File "...\anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)
      File "...\anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
        return Popen(process_obj)
      File "...\anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
        reduction.dump(process_obj, to_child)
      File "...\anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
        ForkingPickler(file, protocol).dump(obj)
    TypeError: can't pickle weakref objects

-- components: Windows messages: 313964 nosy: jjdmon, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Cannot pass a SyncManager proxy to a multiprocessing subprocess on Windows type: behavior versions: Python 3.6 ___ Python tracker <https://bugs.python.org/issue33088> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
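The TypeError at the bottom is pickle refusing a weakref held somewhere inside the SyncManager; on Windows the child is started by spawning, so every Process argument must be picklable (on Linux/macOS with fork, nothing is pickled, which is why the script works there). A minimal illustration of the underlying restriction (class name is made up):

```python
import pickle
import weakref

class Thing:
    """Hypothetical object to take a weak reference to."""
    pass

t = Thing()
r = weakref.ref(t)
try:
    pickle.dumps(r)  # pickle rejects weakref objects outright
except TypeError as e:
    print(e)
```

The usual workaround is to pass the picklable proxies the manager hands out (e.g. m.Queue() or m.dict()) to the subprocess, rather than the manager object itself.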
[issue21060] Better error message for setup.py upload command without sdist/bdist
Jeff Ramnani added the comment: I think the error message you suggested is better than the one in the current patch. I've added a new patch with your improved message. I haven't submitted or updated a patch since the migration to GitHub. I can open a PR over on GitHub if that would make it easier for you. -- Added file: https://bugs.python.org/file47437/issue21060-py38.patch ___ Python tracker <https://bugs.python.org/issue21060> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32520] error writing to file in binary mode - python 3.6.3
New submission from jeff deifik : I am running python 3.6.3 on cygwin / windows. Here is a test program to demonstrate the bug:

    #!/usr/bin/env python3
    fp = open("bug_out.txt", "ab")
    buff = 'Hello world'
    print('type of buff is', type(buff))
    bin_buff = bytes(buff, 'utf-8')
    print('type of bin_buff is', type(bin_buff))
    print(bin_buff, file=fp)

Here is the output:

    ./bug.py
    type of buff is <class 'str'>
    type of bin_buff is <class 'bytes'>
    Traceback (most recent call last):
      File "./bug.py", line 8, in <module>
        print(bin_buff, file=fp)
    TypeError: a bytes-like object is required, not 'str'

The python type system thinks bin_buff has type bytes, but when I try to print it, the print function thinks it is of type str. -- components: IO messages: 309669 nosy: lopgok priority: normal severity: normal status: open title: error writing to file in binary mode - python 3.6.3 versions: Python 3.6 ___ Python tracker <https://bugs.python.org/issue32520> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
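The traceback hints at what is happening: print() converts every argument with str() and writes text, so the binary-mode stream rejects the resulting str; write() is the call that accepts bytes. A sketch, with io.BytesIO standing in for the file opened with "ab":

```python
import io

fp = io.BytesIO()  # stands in for open("bug_out.txt", "ab")
bin_buff = bytes('Hello world', 'utf-8')

fp.write(bin_buff)  # OK: bytes written to a binary stream

try:
    # print() calls fp.write(str(bin_buff)), and a binary stream
    # rejects str -- this is the TypeError from the report.
    print(bin_buff, file=fp)
except TypeError as e:
    print(e)  # a bytes-like object is required, not 'str'
```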
[issue12851] ctypes: getbuffer() never provides strides
Change by Jeff VanOss : -- keywords: +patch pull_requests: +4624 stage: needs patch -> patch review ___ Python tracker <https://bugs.python.org/issue12851> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue31630] math.tan has poor accuracy near pi/2 on OpenBSD
Change by Jeff Allen : -- nosy: +jeff.allen ___ Python tracker <https://bugs.python.org/issue31630> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue30275] pickle doesn't work in compile/exec
New submission from Jeff Zhang: I want to use pickle in compile/exec, but it doesn't work for me. It only works when I use the global namespace. But I don't want to use the global namespace; is there any way to do that? Thanks

    >>> a = compile("def f():\n\t'hello'\nimport pickle\npickle.dumps(f)",
    ...             "<string>", "exec")
    >>> exec(a)      # works
    >>> exec(a, {})  # fails
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<string>", line 4, in <module>
    _pickle.PicklingError: Can't pickle <function f at 0x...>: it's not the same object as __main__.f
    >>> exec(a, {'__name__': '__main__'})  # fails too
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<string>", line 4, in <module>
    _pickle.PicklingError: Can't pickle <function f at 0x...>: it's not the same object as __main__.f

-- components: Library (Lib) messages: 293032 nosy: Jeff Zhang priority: normal severity: normal status: open title: pickle doesn't work in compile/exec type: enhancement versions: Python 3.5 ___ Python tracker <http://bugs.python.org/issue30275> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
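One workaround, assuming the goal is just a namespace other than __main__'s globals: exec into a synthetic module registered in sys.modules, so pickle can resolve the function by its qualified name (the module name "execmod" below is made up):

```python
import pickle
import sys
import types

code = compile("def f():\n    return 'hello'", "<string>", "exec")

# pickle serializes functions by reference (module name + attribute name),
# so the defining namespace must be reachable via sys.modules. A bare dict
# passed to exec() is not, hence the PicklingError above.
mod = types.ModuleType("execmod")
sys.modules["execmod"] = mod
exec(code, mod.__dict__)

f2 = pickle.loads(pickle.dumps(mod.f))
print(f2())
```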
[issue30256] Adding a SyncManager Queue proxy to a SyncManager dict or Namespace proxy raises an exception
New submission from Jeff DuMonthier: In multiprocessing, attempting to add a Queue proxy to a dict or Namespace proxy (all returned by the same SyncManager) raises an exception indicating a keyword argument 'manager_owned=True' has been passed to the function AutoProxy() but is not an argument of that function. In lib/python3.6/multiprocessing/managers.py, in function RebuildProxy(), line 873: "kwds['manager_owned'] = True" adds this argument to a keyword argument dictionary. This function calls AutoProxy which has an argument list defined on lines 909-910 as: def AutoProxy(token, serializer, manager=None, authkey=None, exposed=None, incref=True): This raises an exception because there is no manager_owned argument defined. I added "manager_owned=False" as a keyword argument to AutoProxy which seems to have fixed the problem. There is no exception and I am able to pass Queue proxies through dict and Namespace proxies to other processes and use them. I don't know the purpose of that argument though or if the AutoProxy function should actually use it for something. My fix allows but ignores it. -- components: Library (Lib) messages: 292889 nosy: jjdmon priority: normal severity: normal status: open title: Adding a SyncManager Queue proxy to a SyncManager dict or Namespace proxy raises an exception versions: Python 3.6 ___ Python tracker <http://bugs.python.org/issue30256> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue29463] Add `docstring` attribute to AST nodes
Jeff Allen added the comment: Just terminology ... strictly speaking what you've done here is "add a *field* to the nodes Module, FunctionDef and ClassDef", rather than add an *attribute* -- that is, when one is consistent with the terms used in the ast module (https://docs.python.org/3/library/ast.html#node-classes) or Wang (https://docs.python.org/devguide/compiler.html#wang97). -- nosy: +jeff.allen ___ Python tracker <http://bugs.python.org/issue29463> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue29081] time.strptime() return wrong result
Jeff Knupp added the comment: I believe this is working as intended. Remember, the '%w' directive instructs strptime to consider 0 to be Sunday, while tm_wday considers 0 Monday. In 2016, the %W directive means that the first week (week #1) starts on Monday, January 4th. If you go 52 weeks forward from the 4th, you get to Monday, December 26th. By asking for day 0 (%w=0), you want the *Sunday* of the 52nd week *from the first Monday of the year*. Since Monday is day 0 of that week, you want the Sunday that is 6 days from the 26th, or Jan 1, 2017. One can certainly argue that tm_yday is documented to return an integer between [0,366] and thus we should never see 367, but it's the correct value given your input. The only other choice would be to raise an exception, which definitely shouldn't happen since the values you entered clearly match the format string spec. Perhaps the docs should be updated, but when you consider that %W goes from [0,53], tm_yday can go well past 366 and still represent a semantically valid value. -- nosy: +jeffknupp ___ Python tracker <http://bugs.python.org/issue29081> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
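The arithmetic above can be checked directly (the input string is reconstructed from the %Y/%W/%w values discussed; the issue reports tm_yday == 367 for this parse):

```python
import time

# %W week 52 is counted from the first Monday of 2016 (Jan 4); %w=0 asks
# for the Sunday of that week, six days past Monday, Dec 26 -- which is
# Jan 1, 2017, i.e. day 367 counted from the start of 2016.
t = time.strptime("2016 52 0", "%Y %W %w")
print(t.tm_year, t.tm_mon, t.tm_mday, t.tm_yday)
```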
[issue28730] __reduce__ not being called in derived extension class from datetime.datetime
Jeff Reback added the comment: ok thanks for the info. fixed in pandas here: https://github.com/pandas-dev/pandas/pull/14689 is this documented in the whatsnew? -- ___ Python tracker <http://bugs.python.org/issue28730> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue28730] __reduce__ not being called in derived extension class from datetime.datetime
New submission from Jeff Reback: xref to https://github.com/pandas-dev/pandas/issues/14679. pandas has had a cython extension class to datetime.datetime for quite some time. A simple __reduce__ is defined:

    def __reduce__(self):
        object_state = self.value, self.freq, self.tzinfo
        print(object_state)
        return (Timestamp, object_state)

In 3.5.2:

    Python 3.5.2 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:52:12)
    [GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import pandas as pd
    >>> import pickle
    >>> pickle.dumps(pd.Timestamp('20130101'))
    (13569984000, None, None)
    b'\x80\x03cpandas.tslib\nTimestamp\nq\x00\x8a\x08\x00\x00\xc6\xe8\xda\x06\xd5\x12NN\x87q\x01Rq\x02.'

But in 3.6.0b3:

    Python 3.6.0b3 | packaged by conda-forge | (default, Nov 2 2016, 03:28:12)
    [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.54)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import pandas as pd
    >>> import pickle
    >>> pickle.dumps(pd.Timestamp('20130101'))
    b'\x80\x03cpandas.tslib\nTimestamp\nq\x00C\n\x07\xdd\x01\x01\x00\x00\x00\x00\x00\x00q\x01\x85q\x02Rq\x03.'

So it appears __reduce__ is no longer called at all (I tried defining __getstate__ and __getnewargs__ as well, but to no avail). Instead it looks like datetime.datetime.__reduce__ (well, a C function) is actually called. Link to the codebase: https://github.com/pandas-dev/pandas/blob/master/pandas/tslib.pyx#L490 -- components: IO messages: 281070 nosy: Jeff Reback priority: normal severity: normal status: open title: __reduce__ not being called in derived extension class from datetime.datetime type: behavior versions: Python 3.6 ___ Python tracker <http://bugs.python.org/issue28730> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
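The change of behavior is consistent with datetime gaining its own __reduce_ex__ in 3.6, which pickle invokes in preference to a subclass's __reduce__. One way a subclass can keep control of its pickling, sketched with a made-up class (not pandas code), is to override __reduce_ex__ itself:

```python
import datetime
import pickle

class Stamp(datetime.datetime):
    """Hypothetical datetime subclass with custom pickling."""
    # Overriding __reduce_ex__ (not just __reduce__) keeps control even
    # though datetime.datetime defines __reduce_ex__ of its own in 3.6+.
    def __reduce_ex__(self, protocol):
        return (Stamp, (self.year, self.month, self.day))

s = Stamp(2013, 1, 1)
s2 = pickle.loads(pickle.dumps(s))
print(type(s2).__name__, s2)
```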
[issue26040] Improve coverage and rigour of test.test_math
Jeff Allen added the comment: Ah, cunning: I can make sense of it in hex. >>> hex(to_ulps(expected)) '0x3ff0' >>> hex(to_ulps(got)) '0x3fec' >>> hex( to_ulps(got) - to_ulps(expected) ) '-0x4' ... and what you've done with ulp then follows. In my version a format like "{:d} ulps" was a bad idea when the error was a gross one, but your to_ulps is only piece-wise linear -- large differences are compressed. I'm pleased my work has mostly survived: here's hoping the house build-bots agree. erfc() is perhaps the last worry, but math & cmath pass on my machine. -- ___ Python tracker <https://bugs.python.org/issue26040> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
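For reference, the to_ulps() helper under discussion is essentially the one in Lib/test/test_math.py:

```python
import math
import struct

def to_ulps(x):
    # Map a float to an integer so that adjacent floats map to adjacent
    # integers; negative floats are reflected below zero, which keeps the
    # mapping monotonic across zero.
    n = struct.unpack('<q', struct.pack('<d', x))[0]
    if n < 0:
        n = ~(n + 2**63)
    return n

# Neighbouring doubles differ by exactly one in this representation.
one_up = math.nextafter(1.0, math.inf)
print(to_ulps(one_up) - to_ulps(1.0))
```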
[issue26040] Improve coverage and rigour of test.test_math
Jeff Allen added the comment: Mark: Thanks for doing my homework. Points 1 and 3 I can readily agree with. I must take another look at to_ulps() with your patch on locally. I used the approach I did because I thought it was incorrect in exactly those corners where you prefer it. I'll take a closer look. -- ___ Python tracker <https://bugs.python.org/issue26040> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue26040] Improve coverage and rigour of test.test_math
Jeff Allen added the comment: Mark: Thanks for validating the additional cases so carefully. If you still want to apply it in stages then I suppose the change to the comparison logic could go first (untested idea), although that's also where I could most easily have made a mistake. -- ___ Python tracker <https://bugs.python.org/issue26040> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue27427] Add new math module tests
Jeff Allen added the comment: It would be nice to see this considered alongside #26040. -- nosy: +jeff.allen ___ Python tracker <http://bugs.python.org/issue27427> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue26956] About Idle-x version not updated for 1.13
New submission from Jeff Peters: The Help | About IdleX popup still lists the version as 1.12. This throws off the check-for-updates functionality. -- components: IDLE messages: 264838 nosy: Jeff Peters priority: normal severity: normal status: open title: About Idle-x version not updated for 1.13 versions: Python 3.4 ___ Python tracker <http://bugs.python.org/issue26956> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue26040] Improve coverage and rigour of test.test_math
Changes by Jeff Allen : Added file: http://bugs.python.org/file42191/extra_cmath_testcases.py ___ Python tracker <http://bugs.python.org/issue26040> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue26040] Improve coverage and rigour of test.test_math
Changes by Jeff Allen : Removed file: http://bugs.python.org/file42190/stat_math.py
[issue26040] Improve coverage and rigour of test.test_math
Changes by Jeff Allen : Added file: http://bugs.python.org/file42192/stat_math.py
[issue26040] Improve coverage and rigour of test.test_math
Changes by Jeff Allen : Removed file: http://bugs.python.org/file41526/stat_math.py
[issue26040] Improve coverage and rigour of test.test_math
Jeff Allen added the comment: Thanks for the prompt acknowledgement and for accepting this to review. I have updated the coverage & tolerance demo program. Usage in the comments (in v3). I have also added the program I used to generate the extra test cases (needs mpmath -- easier to get working than mpf in the original Windows/Jython environment).
[issue26040] Improve coverage and rigour of test.test_math
Changes by Jeff Allen : Added file: http://bugs.python.org/file42190/stat_math.py
[issue26040] Improve coverage and rigour of test.test_math
Jeff Allen added the comment: Here is a patch that improves coverage and addresses the uneven accuracy. Required accuracy is now specified in ulps. Mostly, I have chosen 1 ulp, since this passed for me on an x86 architecture (and also ARM), but this may be too ambitious. I have also responded to the comment relating to erfc: # XXX Would be better to weaken this test only # for large x, instead of for all x." I found I could not contribute the code I used to generate the additional test cases in Tools/scripts without failing test_tools. (It complained of a missing dependency. The generator uses mpmath.) -- keywords: +patch Added file: http://bugs.python.org/file42166/iss26040.patch
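The ulp-based tolerance described here can be sketched with the usual bit-pattern trick: map each IEEE-754 double to an integer so that adjacent floats become adjacent integers, then compare the integers. This is the approach test_math's to_ulps() helper takes; the sketch below is illustrative rather than the patch's exact code (ulp_diff is a made-up name).

```python
import struct

def to_ulps(x):
    # Reinterpret the double's bit pattern as a signed 64-bit integer.
    n = struct.unpack('<q', struct.pack('<d', x))[0]
    if n < 0:
        # Negative floats sort in reverse bit order; flip them so the
        # whole mapping is monotonic and adjacent floats differ by 1.
        n = ~(n + 2**63)
    return n

def ulp_diff(a, b):
    # Distance between two finite floats, measured in ulps.
    return abs(to_ulps(a) - to_ulps(b))
```

A test can then assert ulp_diff(expected, got) <= 1, which is equally strict at every magnitude, unlike a fixed relative tolerance.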
[issue22121] IDLE should start with HOME as the initial working directory
Jeff Allen added the comment: I'm also interested in a smooth experience for beginners. I have a factual observation with respect to Terry's comment: '''Windows icons have a Shortcut tab with a Start-in field. We should like to put %USERPROFILE% there, but this does not work -- msg253393.''' ... I note that several menu shortcuts have "Start in" set to %HOMEDRIVE%%HOMEPATH%. Examples are notepad, Internet Explorer and the command prompt. (This is on Win7x64.) What we want seems to be a normal thing to do, and achieved by some, but perhaps by a post installation script. Alternatively, once a .py file exists where you want to work, right-click "Edit with IDLE" provides the CWD we'd like best. Idea: add a New >> Python File context menu item. Encourage users to create a new file that way, then open it, and everything from there is smooth. (New issue if liked.) -- nosy: +jeff.allen
[issue24725] test_socket testFDPassEmpty fails on OS X 10.11 DP with "Cannot allocate memory"
Jeff Ramnani added the comment: I'm still getting these test failures on OS X 10.11.1. Has a radar been filed with Apple? I'd submit one, but I don't know enough about the issue to create a good bug report. In the meantime, I'm attaching a patch to skip these tests as was done in issue #12958. -- keywords: +patch nosy: +jramnani Added file: http://bugs.python.org/file41069/issue-24725.patch
[issue23287] ctypes.util.find_library needlessly call crle on Solaris
Jeff Quast added the comment: I looked over the focus on "default" path, thank you for clarifying! Sadly, I can't help you move either of these patches forward, best wishes!
[issue23287] ctypes.util.find_library needlessly call crle on Solaris
Jeff Quast added the comment: John, What do you think of the patches attached to http://bugs.python.org/issue20664 ? "crle is not needed at all because the default library path is a constant on Solaris" I don't believe this to be true; source? crle is absolutely needed to add additional library lookup paths on Solaris; did this recently change? crle is most certainly needed, especially in regard to zones: a zone is unable to modify any of the system library paths and wouldn't be able to install new libraries in those paths (/usr/lib and /lib are often shared read-only by the global zone), so crle must be used to add a library path on a writable mountpoint, such as /usr/local/lib; /opt and other various deviations must often occur to accommodate GNU tools, etc. -- nosy: +jquast
[issue24280] Unable to install Python
New submission from Jeff Ding: After uninstalling old versions of Python, Python is unable to install unless I disable pip. Once Python installs, it immediately crashes in Py_Initialize. -- components: Windows files: python_crash.png messages: 244003 nosy: Jeff77789, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Unable to install Python type: crash versions: Python 3.4 Added file: http://bugs.python.org/file39486/python_crash.png
[issue23863] Fix EINTR Socket Module issues in 2.7
Jeff McNeil added the comment: Do we have a decision on this yet? I'm willing to rework bits that may need it, but I'd like to know whether this is going to be a fruitful effort or not. Thanks!
[issue23863] Fix EINTR Socket Module issues in 2.7
Jeff McNeil added the comment: Fixing the underlying connect call should also address EINTR failing with an "operation already in progress" error when a connect+timeout fails due to a signal. I can understand not addressing EINTR generically (though it is already partially addressed in 2.7's socket.py - this just completes it), but IMO, not handling it on connect and responding with a seemingly unrelated error is the wrong thing to do.
[issue13472] devguide doesn’t list all build dependencies
Jeff Ramnani added the comment: So, the devguide has been updated since this issue was opened. The Quick Start section now has a link to build documentation, which includes information about build dependencies. Is this sufficient to call this bug closed? -- nosy: +jramnani
[issue23863] Fix EINTR Socket Module issues in 2.7
Jeff McNeil added the comment: I'm not a big fan of the settimeout approach myself (and only did it because it was mentioned as a possible approach). I think the existing implementations of EINTR retry also suffer from the same issue in which the timeout isn't adjusted per iteration (but that's okay, see below). The _fileobject explicitly states only to use a blocking socket (i.e. one without a timeout set), so in practice, that shouldn't be a problem. I'd like to ensure the rest of the calls in that class take the same approach (thus the retryable call function, originally without the settimeout code) as they're a higher level abstraction above recv/send. The only other call in socket.py that also qualifies as a higher abstraction is create_connection. If we could apply the 2.7 patch you created, connect ought to be correct at that point. All that remains after that would be isolating _retryable_call to _fileobject calls -- sans the settimeout -- which requires a blocking socket anyway. In retrospect, I probably should have just placed that call in _fileobject anyway. I think that addresses most of what I'd like to fix. Of course, I'm happy to go through and weave PEP 475 into the socketmodule.c code entirely, but if the code churn creates too much worry, I think the above is a good medium.
[issue23863] Fix EINTR Socket Module issues in 2.7
Jeff McNeil added the comment: Actually, never mind that suggestion. Now that I think a bit more about it, that actually doesn't do anything since I'd still need to set the updated timeout on the current socket object anyway. Whoops. I'll leave it up to you as to whether we go with an approach like this as is or not. I'm happy to change the approach if there's a better one.
[issue23863] Fix EINTR Socket Module issues in 2.7
Jeff McNeil added the comment: Added a flag to allow for at least one run -- I know nothing of non-Linux clock resolution. That should handle that. As for the thread safety of the socket timeouts, yeah, that was why I didn't do that initially, I assumed the suggestion to take that approach took the risk into account; you'll know far more about potential impact than I will. Since this is at a higher abstraction than socket primitives, another option would be to track remaining time in thread local data so that we don't mutate the timeout on the object (which I don't really like doing anyway). Thoughts on approach before I put it together? -- Added file: http://bugs.python.org/file38944/socket_eintr.5.patch
[issue23863] Fix EINTR Socket Module issues in 2.7
Jeff McNeil added the comment: Updated to recalculate timeout at Python level. The current module already works this way on recv() calls. See attached. I'd be happy to churn through and fix the other modules (using the 3.5 work as a guide), though I think only addressing the higher level abstractions makes sense (I think that's been noted elsewhere). For example, the _fileobject wrappers, but not the recv from sock_recv_guts. -- Added file: http://bugs.python.org/file38883/socket_eintr.3.patch
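The "recalculate timeout at Python level" idea amounts to fixing a deadline once and carrying it across EINTR retries, instead of restarting the full timeout after each signal. A minimal sketch in modern Python follows; retry_on_eintr is a hypothetical helper for illustration, not code from the attached patch.

```python
import errno
import socket
import time

def retry_on_eintr(func, timeout=None):
    # Call func(), retrying when a signal interrupts the system call
    # (EINTR). The deadline is computed once up front, so repeated
    # signals cannot stretch the overall timeout.
    deadline = None if timeout is None else time.monotonic() + timeout
    while True:
        try:
            return func()
        except OSError as exc:
            if exc.errno != errno.EINTR:
                raise
            if deadline is not None and time.monotonic() >= deadline:
                raise socket.timeout('timed out')
```

Python 3.5+ performs this retrying inside the interpreter per PEP 475, which is why this issue concerns only 2.7.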
[issue23863] Fix EINTR Socket Module issues in 2.7
Jeff McNeil added the comment: So, yeah, that's right. In the attached patch, I'm closing the file descriptor if the timeout/error happens on a non-blocking call. It fails with an EBADF on reconnect at that point, but it doesn't potentially leave an FD in the proc's file table. Should be no more EINTRs coming out of the select call.
[issue23863] Fix EINTR Socket Module issues in 2.7
Jeff McNeil added the comment: Missed check on _ex func. -- Added file: http://bugs.python.org/file38865/socket_eintr.2.patch
[issue23863] Fix EINTR Socket Module issues in 2.7
Jeff McNeil added the comment: Whoops. Accidentally attached the wrong patch that I generated during testing. -- Added file: http://bugs.python.org/file38832/socket_eintr.1.patch
[issue23863] Fix EINTR Socket Module issues in 2.7
Jeff McNeil added the comment:

mcjeff@mcjeff:~/cpython_clean$ hg summary
parent: 95416:fe34dfea16b0 Escaped backslashes in docstrings.
branch: 2.7
commit: 3 modified, 3 unknown
update: (current)

-- keywords: +patch nosy: +gregory.p.smith Added file: http://bugs.python.org/file38826/socket_eintr.patch
[issue23863] Fix EINTR Socket Module issues in 2.7
New submission from Jeff McNeil: There are a collection of places in the socket module that do not correctly retry on EINTR. Updated to wrap those calls in a retry loop. However, when fixing connect calls, I noticed that when EINTR is retried on a socket with a timeout specified, the retry fails with EALREADY, so I fixed that. I was going to shy away from primitive calls on sockets as one expects these things when working at a lower level, however, due to the way socket timeouts were implemented, I handled it differently in internal_connect. The create_connection calls probably ought to shield users from retry. Python 2.7.6. -- files: socket_intr.py messages: 240044 nosy: mcjeff priority: normal severity: normal status: open title: Fix EINTR Socket Module issues in 2.7 versions: Python 2.7 Added file: http://bugs.python.org/file38825/socket_intr.py
[issue23599] single and double quotes stripped upon paste into interpreter
Jeff Doak added the comment: Thanks Ned and everyone! It turns out that Ned was correct and it works fine now that I followed his instructions.
[issue23599] single and double quotes stripped upon paste into interpreter
Jeff Doak added the comment: I am in a standard Terminal session. I have a symbolic link for python 3.4: /usr/bin/python -> /opt/local/bin/python3.4 so I can run python... or the following:

$ /opt/local/bin/python3.4 -c 'import sys;print(sys.version)'
3.4.2 (default, Oct 22 2014, 01:08:11) [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.54)]
$ /opt/local/bin/python3.4 -c 'import readline;print(readline.__doc__)'
Importing this module enables command line editing using libedit readline.
[issue23599] single and double quotes stripped upon paste into interpreter
Jeff Doak added the comment: I noticed they are smart quotes and came back to see David already mentioned it. As for Demian's question:

2.7.6:
>>> print("{’Test’}")
{’Test’}

3.4.2:
>>> print("{Test}")
{Test}

It is upon paste that the quotes are lost. I'm on OSX 10.10.2 as well.
[issue23599] single and double quotes stripped upon paste into interpreter
New submission from Jeff Doak: On MacBook. Copy/paste the following line into 3.4.2 interpreter session: [“Test”][‘Test’] Results in: [Test][Test] Same paste into 2.7.6 is as expected: [“Test”][‘Test’] -- components: Macintosh messages: 237389 nosy: Jeff Doak, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: single and double quotes stripped upon paste into interpreter type: behavior versions: Python 3.4
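The culprit discussed in this thread is the line-editing library the interpreter was built against: libedit-based builds handle pasted smart quotes differently from GNU readline. A quick programmatic check, mirroring the readline.__doc__ probe shown earlier in the thread (the helper name is made up):

```python
def using_libedit():
    # macOS builds often link the readline module against libedit;
    # libedit's docstring mentions "libedit", GNU readline's does not.
    try:
        import readline
    except ImportError:
        return False
    return 'libedit' in (readline.__doc__ or '')
```

This distinguishes the two builds without needing to paste test input by hand.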
[issue23594] Wrong variable name in traceback
Jeff Zemla added the comment: In 3), "not" should be "now"
[issue23594] Wrong variable name in traceback
New submission from Jeff Zemla: I've found a rather simple bug in the default CPython implementation on Mac OS X 10.9.5

1) Create a new .py file containing:

def a():
    print q
x=5

2) Open Python and run using execfile(), then a(). Receive error as expected:

  File "test.py", line 2, in a
    print q
NameError: global name 'q' is not defined

3) Edit file so that "print q" is not "print x", and save.

4) Run a() (Do not use execfile!)

5) Error:

  File "test.py", line 2, in a
    print x
NameError: global name 'q' is not defined

EXPECTED: Traceback should say "print q" NOT "print x". It is reading from the file. Actually, the error in the file has been corrected; it is the copy of the program in memory that is faulty.

Python 2.7.5 (default, Mar 9 2014, 22:15:05) [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin -- components: Macintosh messages: 237293 nosy: Jeff Zemla, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Wrong variable name in traceback type: compile error versions: Python 2.7
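What the report observes is expected behaviour rather than a bug: tracebacks display the source line currently on disk (read via the linecache module), while the interpreter is still executing the old code object in memory. A self-contained demonstration in Python 3 (file names and contents are invented for the demo, mirroring the report's steps):

```python
import linecache
import os
import tempfile
import traceback

# Step 1: a file whose function prints an undefined name.
fd, path = tempfile.mkstemp(suffix='.py')
with os.fdopen(fd, 'w') as f:
    f.write("def a():\n    print(q)\n")

# Step 2: load it, much as execfile() did in 2.x.
ns = {}
with open(path) as f:
    exec(compile(f.read(), path, 'exec'), ns)

# Step 3: edit the file on disk without re-loading it.
with open(path, 'w') as f:
    f.write("def a():\n    print(x)\n")
linecache.checkcache(path)  # drop any stale cached source lines

# Step 4: call the OLD code object; the NameError is still about 'q',
# but the displayed source line is re-read from the edited file.
try:
    ns['a']()
except NameError:
    shown = traceback.format_exc()

os.unlink(path)
```

Re-running execfile() (or exec in 3.x) replaces the in-memory code object and brings the two back in sync.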
[issue23342] run() - unified high-level interface for subprocess
Jeff Hammel added the comment: A few observations in passing. I beg your pardon for not commenting after a more in-depth study of the issue, but as someone that's written and managed several subprocess module front-ends, my general observations seem applicable. subprocess needs easier and more robust ways of managing input and output streams. subprocess should have easier ways of managing input: file streams are fine, but plain strings would also be nice. For string commands, shell should always be true; for list/tuple commands, shell should be false. In fact you'll get an error if you don't ensure this. Instead, just have what is passed key execution. (For Windows, I have no idea. I'm lucky enough not to write Windows software these days.) subprocess should always terminate processes on program exit robustly (unless asked not to). I always have a hard time figuring out how to get processes to terminate, and how to have them not to. I realize POSIX is black magic, to some degree. I'm attaching a far-from-perfect front end that I currently use for reference -- nosy: +Jeff.Hammel Added file: http://bugs.python.org/file38075/process.py
[issue20155] Regression test test_httpservers fails, hangs on Windows
Jeff Allen added the comment: Disabling the AV/firewall did not stop the symptoms when I was investigating originally. In order to get the unmodified test to pass, I had to stop the BFE (base filtering engine), which I think may have been given new rules or behaviours as a result of installing the AV solution ... or maybe it was a Windows upgrade that did it. I did wonder if this might be a moving target, as the test deliberately includes server abuse, while the products want to stop that. If I try test_httpservers.py as amended (http://hg.python.org/cpython/file/ffdd2d0b0049/Lib/test/test_httpservers.py) on my machine with CPython 3.4.1, I do not get the error Terry reports. (test_urlquote_decoding_in_cgi_check fails but it should.)
[issue20664] _findLib_crle and _get_soname broken on latest SunOS 5.11
Jeff Quast added the comment: Submitting fix to fall back to the alternate '/usr/bin/dump' path, confirmed using SmartOS. As for the issues writing to /lib and /usr/lib from a zone, and the request for "An environment variable .. to override this functionality", I have to disagree: crle(1) already provides facilities to add additional paths. For example, to add `/usr/local/lib' to your dynamic library path, you would simply run `crle -l /usr/local/lib -u'. I don't think *that* belongs in Python. -- keywords: +patch nosy: +jquast Added file: http://bugs.python.org/file35362/opensolaris-ctypes-python-2.7.patch
[issue20664] _findLib_crle and _get_soname broken on latest SunOS 5.11
Changes by Jeff Quast : Added file: http://bugs.python.org/file35363/opensolaris-ctypes-python-3.x.patch
[issue21410] setup.py check --restructuredtext -- appears to pass if docutils not installed
New submission from Jeff Hinrichs: If you run setup.py check --restructuredtext without docutils installed, it will appear to pass. If you add the -s flag, it will error and inform you that docutils is not installed. So nothing is reported and the return results are the same as a "passing" check.

$ python setup.py check --restructuredtext
running check
$ python setup.py check --restructuredtext -s
running check
error: The docutils package is needed.

The non-strict version is a little too loose to be of any good. -- components: Distutils messages: 217721 nosy: dstufft, dundeemt, eric.araujo priority: normal severity: normal status: open title: setup.py check --restructuredtext -- appears to pass if docutils not installed versions: Python 2.7
[issue21409] setup.py check - should fail and return a non 0 exit code
Jeff Hinrichs added the comment: example:

(dhp)jlh@jlh-d520:~/Projects/dhp/src$ python setup.py check
running check
(dhp)jlh@jlh-d520:~/Projects/dhp/src$ python setup.py check --restructuredtext
running check
warning: check: Title underline too short. (line 2)
warning: check: Could not finish the parsing.
[issue21409] setup.py check - should fail and return a non 0 exit code
New submission from Jeff Hinrichs: Both "python setup.py check" and "python setup.py check --restructuredtext" incorrectly "warn" and don't "fail" for things that will cause a failure when uploading to PyPI. This is wrong. Additionally, they should return a non-zero exit code so they can be used as part of a CI such as drone.io / Travis so the build will show as failing. Currently they do not, and if there are errors that will cause a PyPI failure (like an unreadable long description) bad things happen. -- components: Distutils messages: 217719 nosy: dstufft, dundeemt, eric.araujo priority: normal severity: normal status: open title: setup.py check - should fail and return a non 0 exit code type: behavior versions: Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5
[issue17449] dev guide appears not to cover the benchmarking suite
Jeff Ramnani added the comment: Now that bug #18586 is closed, could the Dev Guide point benchmarkers to the benchmarks repo and its README? http://hg.python.org/benchmarks/file/9a1136898539/README.txt -- nosy: +jramnani
[issue21060] Better error message for setup.py upload command without sdist
Jeff Ramnani added the comment: Attaching a patch with a (hopefully) more useful error message. I didn't find a good place to add this information in the "Distributing Python Modules" section of the docs, but let me know if you had a place in mind. -- keywords: +patch nosy: +jramnani Added file: http://bugs.python.org/file34898/issue21060-py35.patch
[issue21015] support SSL_CTX_set_ecdh_auto on newer OpenSSLs
Jeff Ramnani added the comment: > Really? Apple's packaging looks almost criminal here. Apple has deprecated their bundled version of OpenSSL. This issue has more details: http://bugs.python.org/issue17128 -- nosy: +jramnani
[issue21218] Test failure for test_ssl.test_default_ecdh_curve on OS X
New submission from Jeff Ramnani: The unittest, test_ssl.test_default_ecdh_curve, is failing on OS X (and FreeBSD 9). The test fails with the error message:

ERROR: test_default_ecdh_curve (test.test_ssl.ThreadedTests)
Traceback (most recent call last):
  File "/Users/jramnani/code/cpython/Lib/test/test_ssl.py", line 2596, in test_default_ecdh_curve
    context.set_ciphers("ECDH")
ssl.SSLError: ('No cipher can be selected.',)

It looks to be related to issue #21015 (changesets 3b81d1b3f9d1 and 869277faf3dc).

OS Info:
* Version: OS X 10.9.2
* OpenSSL version: OpenSSL 0.9.8y 5 Feb 2013

The problem looks like OpenSSL on OS X is reporting that it has ECDH when it does not.

Python 3.5.0a0 (default:8cf384852680, Apr 14 2014, 13:32:46) [GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.38)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import ssl
>>> ssl.HAS_ECDH
True

-- components: Tests messages: 216138 nosy: jramnani priority: normal severity: normal status: open title: Test failure for test_ssl.test_default_ecdh_curve on OS X versions: Python 3.4, Python 3.5