[issue37701] shutil.copyfile raises SpecialFileError for symlink to fifo
Christopher Hunt added the comment:

> I expect it to fail if follow_symlinks is True, which is the default value.
> I expect it to succeed with follow_symlinks=False, which should create a
> shallow copy of just the symlink, regardless of its target.

I agree, thanks for the correction.

--
___
Python tracker <https://bugs.python.org/issue37701>
___
___
Python-bugs-list mailing list
Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35727] sys.exit() in a multiprocessing.Process does not align with Python behavior
Christopher Hunt added the comment:

Any other concerns here?

--
___
Python tracker <https://bugs.python.org/issue35727>
___
[issue35727] sys.exit() in a multiprocessing.Process does not align with Python behavior
Christopher Hunt added the comment:

> I believe the mentality behind multiprocessing.Process triggering an exit
> code of 1 when sys.exit() is invoked inside its process is to indicate a
> non-standard exit out of its execution.

Can I ask what this is based on? I did a good amount of digging but didn't find any justification for it. It just seems like a simple oversight to me.

> There may yet be other side effects that could be triggered by having a
> sys.exit(0) translate into an exit code of 0 from the Process's process --
> and we might not notice them with the current tests.

This is definitely a behavior change and will break any code that currently relies on `sys.exit(None)` or `sys.exit()` exiting with a non-zero exit code from a multiprocessing.Process. However, the fact that all documentation indicates that `sys.exit(None)` or `sys.exit()` results in a 0 exit code in normal Python (with no multiprocessing-specific documentation to the contrary) makes me think that any code relying on the current behavior is subtly broken. Any impacted user can update their code to explicitly pass 1 to `sys.exit`, which should be forwards and backwards compatible.

> Was there a particular use case that motivates this suggested change?

I have a wrapper library that invokes arbitrary user code and attempts to behave as if that code were executed in a vanilla Python process, including propagating the correct exit code. Currently I have a workaround here: https://github.com/chrahunt/quicken/blob/2dd00a5f024d7b114b211aad8a2618ec8f101956/quicken/_internal/server.py#L344-L353, but it would be nice to get rid of it in 5-6 years once this fix gets in and the non-conformant Python versions fall out of support. :)

--
___
Python tracker <https://bugs.python.org/issue35727>
___
[issue37701] shutil.copyfile raises SpecialFileError for symlink to fifo
Christopher Hunt added the comment:

Likewise when the destination is a symlink - though in that case the value of `follow_symlinks` should probably not matter.

--
___
Python tracker <https://bugs.python.org/issue37701>
___
[issue37701] shutil.copyfile raises SpecialFileError for symlink to fifo
New submission from Christopher Hunt:

Currently shutil.copyfile raises SpecialFileError when src is a symlink to a fifo. To reproduce:

    import os
    import shutil
    import tempfile

    d = tempfile.mkdtemp()
    fifo = os.path.join(d, 'fifo')
    link_to_fifo = os.path.join(d, 'link-to-fifo')
    copy_of_link_to_fifo = os.path.join(d, 'copy-of-link-to-fifo')

    os.mkfifo(fifo)
    os.symlink(fifo, link_to_fifo)
    shutil.copyfile(link_to_fifo, copy_of_link_to_fifo)

Example output:

    Traceback (most recent call last):
      File "repro.py", line 14, in <module>
        shutil.copyfile(link_to_fifo, copy_of_link_to_fifo)
      File "/home/chris/.pyenv/versions/3.7.2/lib/python3.7/shutil.py", line 115, in copyfile
        raise SpecialFileError("`%s` is a named pipe" % fn)
    shutil.SpecialFileError: `/tmp/user/1000/tmpxhigll5g/link-to-fifo` is a named pipe

I would have expected this to copy the symlink without complaint. Raising a SpecialFileError would be OK if `follow_symlinks` was False.

--
components: Library (Lib)
messages: 348597
nosy: chrahunt
priority: normal
severity: normal
status: open
title: shutil.copyfile raises SpecialFileError for symlink to fifo
type: behavior
versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8
___
Python tracker <https://bugs.python.org/issue37701>
___
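[Editor's note] The symlink-preserving behavior discussed in this issue - duplicating the link itself rather than opening the (possibly special) file it points to - can be sketched with a small hypothetical helper; `copy_symlink` is not part of shutil, and the sketch assumes a POSIX system (os.mkfifo, os.symlink):

```python
import os
import tempfile

def copy_symlink(src, dst):
    # Hypothetical helper: recreate the symlink at dst instead of
    # following it, so the target's file type never matters.
    os.symlink(os.readlink(src), dst)

d = tempfile.mkdtemp()
fifo = os.path.join(d, 'fifo')
link_to_fifo = os.path.join(d, 'link-to-fifo')
copy_of_link = os.path.join(d, 'copy-of-link-to-fifo')

os.mkfifo(fifo)                        # POSIX only
os.symlink(fifo, link_to_fifo)
copy_symlink(link_to_fifo, copy_of_link)
# copy_of_link is now a second symlink pointing at the same fifo.
```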
[issue37700] shutil.copyfile does not raise SpecialFileError for socket files
Christopher Hunt added the comment:

See also the comment at https://github.com/python/cpython/blob/e1b900247227dad49d8231f1d028872412230ab4/Lib/shutil.py#L245:

> # XXX What about other special files? (sockets, devices...)

--
___
Python tracker <https://bugs.python.org/issue37700>
___
[issue37700] shutil.copyfile does not raise SpecialFileError for socket files
New submission from Christopher Hunt:

Currently shutil.copyfile only raises SpecialFileError for named pipes. When trying to use the function to copy a socket file, the exception raised depends on the platform, for example:

    macOS:   "[Errno 102] Operation not supported on socket: '/Users/guido/src/mypy/dmypy.sock'"
    HP-UX:   "[Errno 223] Operation not supported: 'example/foo'"
    Solaris: "[Errno 122] Operation not supported on transport endpoint: 'example/foo'"
    AIX:     "[Errno 64] Operation not supported on socket: '../../example/foo'"
    Linux:   "[Errno 6] No such device or address: 'example/foo'"

This can be reproduced like:

    import os
    import shutil
    import socket
    import tempfile

    d = tempfile.mkdtemp()
    src = os.path.join(d, "src")
    dest = os.path.join(d, "dest")

    sock = socket.socket(socket.AF_UNIX)
    sock.bind(src)
    shutil.copyfile(src, dest)

Making shutil.copyfile raise SpecialFileError for socket files would improve the interface of this function, since the same class of error could then be ignored. This is mostly useful with shutil.copytree, which defaults to copyfile for its copy function.

--
components: Library (Lib)
messages: 348595
nosy: chrahunt
priority: normal
severity: normal
status: open
title: shutil.copyfile does not raise SpecialFileError for socket files
type: behavior
versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8
___
Python tracker <https://bugs.python.org/issue37700>
___
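[Editor's note] The kind of check the report asks copyfile to perform can be sketched by mirroring its existing fifo test with stat.S_ISSOCK; the `is_special` helper below is illustrative, not a proposed API, and the socket setup assumes a POSIX system (AF_UNIX):

```python
import os
import socket
import stat
import tempfile

def is_special(path):
    # Illustrative check: classify fifos and unix sockets together,
    # so callers could catch one SpecialFileError for both.
    mode = os.stat(path).st_mode
    return stat.S_ISFIFO(mode) or stat.S_ISSOCK(mode)

d = tempfile.mkdtemp()
src = os.path.join(d, 'src')
regular = os.path.join(d, 'regular')

sock = socket.socket(socket.AF_UNIX)
sock.bind(src)                 # binding creates the socket file (POSIX)
open(regular, 'w').close()     # ordinary file for comparison
```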
[issue32082] atexit module: allow getting/setting list of handlers directly
Christopher Hunt added the comment:

Updated link to the workaround referenced in the original issue: https://github.com/sagemath/sage/blob/b5c9cf037cbce672101725f269470135b9b2c5c4/src/sage/cpython/atexit.pyx

--
nosy: +chrahunt
___
Python tracker <https://bugs.python.org/issue32082>
___
[issue35792] Specifying AbstractEventLoop.run_in_executor as a coroutine conflicts with implementation/intent
Christopher Hunt added the comment:

For impl.1:

> (very breaking change)

should be

> (very breaking change, mitigated some by the fact that the implementation
> will warn about the unawaited future)

--
___
Python tracker <https://bugs.python.org/issue35792>
___
[issue35792] Specifying AbstractEventLoop.run_in_executor as a coroutine conflicts with implementation/intent
Christopher Hunt added the comment:

My use case is scheduling work against an executor but waiting on the results later (on demand). If converting `BaseEventLoop.run_in_executor(executor, func, *args)` to a coroutine function, I believe there are two possible approaches (the discussion that started this, at https://stackoverflow.com/questions/54263558/is-asyncio-run-in-executor-specified-ambiguously, only considers [impl.1]):

impl.1) `BaseEventLoop.run_in_executor` still returns a future, but we must await the coroutine object in order to get it (very breaking change), or

impl.2) `BaseEventLoop.run_in_executor` awaits on the result of `func` itself and returns the result directly.

In both cases the provided `func` will only be dispatched to `executor` when the coroutine object is scheduled with the event loop.

For [impl.1], from the linked discussion, there is an example of the user code required to get the "schedule immediately and return a future" behavior while still using `BaseEventLoop.run_in_executor`:

    async def run_now(f, *args):
        loop = asyncio.get_event_loop()
        started = asyncio.Event()

        def wrapped_f():
            loop.call_soon_threadsafe(started.set)
            return f(*args)

        fut = loop.run_in_executor(None, wrapped_f)
        await started.wait()
        return fut

However, this wrapper can only be used in an async function and assumes the executor is running in the same process - synchronous functions (e.g. an implementation of Protocol.data_received) would need to use an alternative `my_run_in_executor`:

    def my_run_in_executor(executor, f, *args, loop=asyncio.get_running_loop()):
        return asyncio.wrap_future(executor.submit(f, *args), loop=loop)

Either of these would need to be discovered by users and live in their code base. Having to use `my_run_in_executor` would be most unfortunate, given that the purpose of `run_in_executor` per the PEP is to be a shorthand for this exact function.

For [impl.2], we are fine if the use case allows submitting and awaiting the completion of `func` in the same location, and no methods of asyncio.Future (e.g. `add_done_callback`, `cancel`) are used. If not, then we still need to either:

soln.1) use `my_run_in_executor`, or

soln.2) wrap the `BaseEventLoop.run_in_executor` coroutine object/asyncio.Future with `asyncio.ensure_future`.

[soln.1] is bad for the reason stated above: this is the function we are trying to avoid users having to write. [soln.2] uses the low-level function `asyncio.ensure_future` because both of the suggested alternatives (per the docs), `asyncio.create_task` and `BaseEventLoop.create_task`, throw a `TypeError` when provided an `asyncio.Future` as returned by the current implementation of `BaseEventLoop.run_in_executor`. This too would have to be discovered by users and exist in their code base.

--
___
Python tracker <https://bugs.python.org/issue35792>
___
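[Editor's note] A small self-contained demonstration of the current (non-coroutine) behavior that [impl.1] would change: `run_in_executor` hands back an asyncio.Future immediately, with no await needed to dispatch the work, and `asyncio.ensure_future` passes that Future through unchanged (which is why [soln.2] works):

```python
import asyncio

def blocking():
    # Stand-in for CPU- or IO-bound work handed to the executor.
    return 42

async def main():
    loop = asyncio.get_running_loop()
    # Under the current implementation this is a Future right away;
    # the callable is dispatched without awaiting anything first.
    fut = loop.run_in_executor(None, blocking)
    assert asyncio.isfuture(fut)
    # ensure_future returns an existing Future unchanged.
    assert asyncio.ensure_future(fut) is fut
    return await fut

result = asyncio.run(main())
print(result)  # -> 42
```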
[issue35960] dataclasses.field does not preserve empty metadata object
Change by Christopher Hunt:

--
keywords: +patch
pull_requests: +11831, 11832, 11833
stage: -> patch review
___
Python tracker <https://bugs.python.org/issue35960>
___
[issue35960] dataclasses.field does not preserve empty metadata object
New submission from Christopher Hunt:

The metadata argument to dataclasses.field is not preserved in the resulting Field.metadata attribute if the argument is a mapping with length 0.

The docs for dataclasses.field state:

> metadata: This can be a mapping or None. None is treated as an empty dict.
> This value is wrapped in MappingProxyType() to make it read-only, and exposed
> on the Field object.

The docs for MappingProxyType state:

> Read-only proxy of a mapping. It provides a dynamic view on the mapping's
> entries, which means that when the mapping changes, the view reflects these
> changes.

I assumed that the mapping provided could be updated after class initialization and the changes would be reflected in the field's metadata attribute. Indeed this is the case when the mapping is non-empty, but not when the mapping is initially empty. For example:

    $ python
    Python 3.8.0a1+ (heads/master:9db56fb8fa, Feb 10 2019, 19:54:10)
    [GCC 7.3.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from dataclasses import field
    >>> d = {}
    >>> v = field(metadata=d)
    >>> d['i'] = 1
    >>> v.metadata
    mappingproxy({})
    >>> v = field(metadata=d)
    >>> v.metadata
    mappingproxy({'i': 1})
    >>> d['j'] = 2
    >>> v.metadata
    mappingproxy({'i': 1, 'j': 2})

In my case I have a LazyDict into which I was trying to save partial(callback, field). I could not have the field before it was initialized, so I tried:

    d = {}
    member: T = field(metadata=d)
    d['key'] = partial(callback, field)

and it failed the same as above. As a workaround, one can set a dummy value in the mapping prior to calling dataclasses.field and then remove/overwrite it afterwards.
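[Editor's note] A minimal sketch of that workaround; the placeholder key name is arbitrary:

```python
from dataclasses import dataclass, field, fields

# Seed the mapping with a placeholder entry so field() receives a
# non-empty mapping and wraps a live view of it instead of the shared
# empty proxy.
d = {'placeholder': None}

@dataclass
class C:
    member: int = field(default=0, metadata=d)

# Later mutations are now visible through Field.metadata.
d['key'] = 'value'
del d['placeholder']

meta = fields(C)[0].metadata
print(meta)  # -> mappingproxy({'key': 'value'})
```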
--
components: Library (Lib)
messages: 335184
nosy: chrahunt
priority: normal
severity: normal
status: open
title: dataclasses.field does not preserve empty metadata object
type: behavior
versions: Python 3.7, Python 3.8
___
Python tracker <https://bugs.python.org/issue35960>
___
[issue27035] Cannot set exit code in atexit callback
Change by Christopher Hunt:

--
nosy: +chrahunt
versions: +Python 3.7
___
Python tracker <https://bugs.python.org/issue27035>
___
[issue35792] Specifying AbstractEventLoop.run_in_executor as a coroutine conflicts with implementation/intent
New submission from Christopher Hunt:

Currently AbstractEventLoop.run_in_executor is specified as a coroutine, while BaseEventLoop.run_in_executor is actually a non-coroutine that returns a Future object.

The behavior of BaseEventLoop.run_in_executor would be significantly different if changed to align with the interface. If run_in_executor is a coroutine, then the provided func will not actually be scheduled until the coroutine is awaited, which conflicts with the statement in PEP 3156 that it "is equivalent to `wrap_future(executor.submit(callback, *args))`". There has already been an attempt in bpo-32327 to convert this function to a coroutine.

We should change the interface specified in `AbstractEventLoop` to indicate that `run_in_executor` is not a coroutine, which should help ensure it does not get changed in the future without full consideration of the impacts.

--
components: asyncio
messages: 334109
nosy: asvetlov, chrahunt, yselivanov
priority: normal
severity: normal
status: open
title: Specifying AbstractEventLoop.run_in_executor as a coroutine conflicts with implementation/intent
type: behavior
versions: Python 3.7
___
Python tracker <https://bugs.python.org/issue35792>
___
[issue35727] sys.exit() in a multiprocessing.Process does not align with Python behavior
Change by Christopher Hunt:

--
pull_requests: +11143, 11144, 11145
stage: -> patch review
___
Python tracker <https://bugs.python.org/issue35727>
___
[issue35727] sys.exit() in a multiprocessing.Process does not align with Python behavior
Change by Christopher Hunt:

--
versions: -Python 2.7, Python 3.4, Python 3.5, Python 3.6
___
Python tracker <https://bugs.python.org/issue35727>
___
[issue35727] sys.exit() in a multiprocessing.Process does not align with Python behavior
New submission from Christopher Hunt:

When a function executed by a multiprocessing.Process uses sys.exit, the actual exit code reported by multiprocessing is different than would be expected given the Python interpreter behavior and documentation. For example, given:

    from functools import partial
    from multiprocessing import get_context
    import sys

    def run(ctx, fn):
        p = ctx.Process(target=fn)
        p.start()
        p.join()
        return p.exitcode

    if __name__ == '__main__':
        ctx = get_context('fork')
        print(run(ctx, partial(sys.exit, 2)))
        print(run(ctx, partial(sys.exit, None)))
        print(run(ctx, sys.exit))

        ctx = get_context('spawn')
        print(run(ctx, partial(sys.exit, 2)))
        print(run(ctx, partial(sys.exit, None)))
        print(run(ctx, sys.exit))

        ctx = get_context('forkserver')
        print(run(ctx, partial(sys.exit, 2)))
        print(run(ctx, partial(sys.exit, None)))
        print(run(ctx, sys.exit))

when executed results in:

    $ python exit.py
    2
    1
    1
    2
    1
    1
    2
    1
    1

but when Python itself is executed we see different behavior:

    $ for arg in 2 None ''; do python -c "import sys; sys.exit($arg)"; echo $?; done
    2
    0
    0

The documentation states:

> sys.exit([arg])
> ...
> The optional argument arg can be an integer giving the exit status
> (defaulting to zero), or another type of object.

The relevant line in multiprocessing (https://github.com/python/cpython/blame/1cffd0eed313011c0c2bb071c8affeb4a7ed05c7/Lib/multiprocessing/process.py#L307) seems to date from the original pyprocessing module itself, and I could not locate an active site that maintains that repository to see if there was any justification for the behavior.
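[Editor's note] The interpreter's exit-code rules described above can be sketched with a hypothetical wrapper; `exit_code_of` and `normalized_target` are illustrative names, not part of multiprocessing, and the sketch shows one way user code could work around the discrepancy by always handing multiprocessing an explicit integer:

```python
import sys

def exit_code_of(fn):
    # Map fn()'s outcome to the exit status a plain interpreter
    # would report for the same call.
    try:
        fn()
    except SystemExit as e:
        if e.code is None:
            return 0          # sys.exit() / sys.exit(None) -> 0
        if isinstance(e.code, int):
            return e.code     # integer codes pass through
        return 1              # other objects -> status 1
    return 0                  # normal return -> 0

def normalized_target(fn):
    # Wrap a Process target so multiprocessing only ever sees an
    # explicit integer code, which it passes through unchanged.
    def wrapper():
        sys.exit(exit_code_of(fn))
    return wrapper
```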
--
components: Library (Lib)
files: multiprocessing-exitcode-3.7.1.patch
keywords: patch
messages: 333531
nosy: chrahunt
priority: normal
severity: normal
status: open
title: sys.exit() in a multiprocessing.Process does not align with Python behavior
type: behavior
versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7
Added file: https://bugs.python.org/file48045/multiprocessing-exitcode-3.7.1.patch
___
Python tracker <https://bugs.python.org/issue35727>
___