Albert Zeyer added the comment:
> How is CoW copy supposed to be done by using copy_file_range() exactly?
I think copy_file_range() will just always use copy-on-write and/or
server-side-copy when available. You cannot even turn that
Albert Zeyer added the comment:
I did some further research (with all details here:
https://stackoverflow.com/a/65518879/133374).
See vfs_copy_file_range in the Linux kernel. This first tries to call
remap_file_range if possible.
FICLONE calls ioctl_file_clone. ioctl_file_clone calls
Albert Zeyer added the comment:
Is FICLONE really needed? Doesn't copy_file_range already support the same?
I posted the same question here:
https://stackoverflow.com/questions/65492932/ficlone-vs-ficlonerange-vs-copy-file-range-for-copy-on-write-support
--
nosy: +Albert.
Albert Zeyer added the comment:
According to the man page of copy_file_range
(https://man7.org/linux/man-pages/man2/copy_file_range.2.html), copy_file_range
also should support copy-on-write:
> copy_file_range() gives filesystems an opportunity to implement
> "copy a
Albert Zeyer added the comment:
> I think it is worth pointing out that the semantics of
>
> f = ``open(fd, closefd=True)``
>
> are broken (IMHO) because an exception can result in an unreferenced file
object that has taken over responsibility for closing the fd, but
Albert Zeyer added the comment:
If you accept anyway that a KeyboardInterrupt can potentially leak when just
using `except Exception`, it would also be solved here.
--
___
Python tracker
<https://bugs.python.org/issue39
Albert Zeyer added the comment:
Why is `except BaseException` better than `except Exception` here? With `except
Exception`, you will never run into the problem of possibly closing the fd
twice. That is the main thing which we want to fix here. It is more
important than missing
Albert Zeyer added the comment:
Instead of `except:` or `except BaseException:`, I think it is better to use
`except Exception:`.
For further discussion and reference, also see the discussion here:
https://news.ycombinator.com/item?id=22028581
--
nosy: +Albert.Zeyer
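To make the trade-off concrete, here is a minimal sketch of the pattern under discussion (the helper name `open_fd` is made up):

```python
import os

def open_fd(fd, mode="rb"):
    # If open() itself raises, no file object took ownership of fd,
    # so we are still responsible for closing it. Catching Exception
    # instead of BaseException means a KeyboardInterrupt could still
    # leak the fd, but the fd can never be closed twice.
    try:
        return open(fd, mode, closefd=True)
    except Exception:
        os.close(fd)
        raise

r, w = os.pipe()
os.close(w)
f = open_fd(r)
data = f.read()  # b'' -- the write end is already closed
f.close()
```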
Albert Zeyer added the comment:
Note that this indeed seems confusing. I just found this thread by searching
for a null context manager, because I found that TensorFlow introduced a
_NullContextmanager in their code, and I wondered why this is not provided by
the Python stdlib
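Since Python 3.7 the stdlib does provide this as `contextlib.nullcontext`. A small usage sketch:

```python
import threading
from contextlib import nullcontext  # available since Python 3.7

def update(counter, lock=None):
    # Use the given lock, or a do-nothing context manager.
    with lock if lock is not None else nullcontext():
        counter["n"] = counter.get("n", 0) + 1

d = {}
update(d)                    # without a lock
update(d, threading.Lock())  # with a real lock
print(d["n"])  # 2
```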
Albert Zeyer added the comment:
I'm also affected by this, with Python 3.6. My home directory is on a
ZFS-backed NFS share.
See here for details:
https://github.com/Linuxbrew/homebrew-core/issues/4799
Basically:
Copying setuptools.egg-info to
/u/zeyer/.linuxbrew/lib/python3.6/site-pac
Albert Zeyer added the comment:
Here is some more background for a case where this occurs:
https://stackoverflow.com/questions/46849566/multi-threaded-openblas-and-spawning-subprocesses
My proposal here would fix this.
--
Albert Zeyer added the comment:
This is a related issue, although with different argumentation:
https://bugs.python.org/issue20104
--
Python tracker
<https://bugs.python.org/issue31
New submission from Albert Zeyer :
subprocess_fork_exec currently calls fork().
I propose to use vfork() or posix_spawn() or syscall(SYS_clone, SIGCHLD, 0)
instead if possible and if there is no preexec_fn. The difference would be that
fork() will call any atfork handlers (registered via
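For illustration, `os.posix_spawn` (available since Python 3.8) shows the proposed behavior at the Python level: the child is started without running fork() in the interpreter, so atfork handlers registered by libraries such as OpenBLAS are not invoked in a possibly inconsistent state.

```python
import os
import sys

# Spawn a child Python process without going through fork() in this process.
pid = os.posix_spawn(
    sys.executable,
    [sys.executable, "-c", "print('child ok')"],
    os.environ,
)
_, status = os.waitpid(pid, 0)
assert os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0
```

As far as I know, CPython's subprocess module does use posix_spawn internally in some configurations since 3.8, when there is no preexec_fn.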
New submission from Albert Zeyer:
The doc says that StringIO.truncate should not change the current position.
Consider this code:
try:
    import StringIO
except ImportError:
    import io as StringIO
buf = StringIO.StringIO()
assert_equal(buf.getvalue(), "")
prin
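For reference, Python 3's `io.StringIO` behaves as documented: truncate() cuts at the current position and leaves the position unchanged.

```python
import io

buf = io.StringIO()
buf.write("hello world")
buf.seek(5)
buf.truncate()          # drops everything after position 5
assert buf.getvalue() == "hello"
assert buf.tell() == 5  # position unchanged by truncate()
```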
Albert Zeyer added the comment:
Yes exactly. Sorry if I was unclear before. :)
I mentioned SIGUSR1/2 because for a segfault, the signal handler will usually
be executed in the same thread (although I'm not sure if that is guaranteed),
so that was usually not a problem. But I used SIGUSR1
Albert Zeyer added the comment:
PyGILState_GetThisThreadState might not be the same Python thread as
_PyThreadState_Current, even in the case that both are not NULL. That is
because SIGUSR1/2 will get delivered to any running thread. In the output by
faulthandler, I want that it marks the
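For context, this is the faulthandler usage in question: registering a user signal dumps the tracebacks of all threads, and the dump labels one of them as the current thread. Which thread that is depends on where the kernel delivers the signal, which is exactly the point above. A minimal sketch (the dump file name is made up; SIGUSR1 is POSIX-only):

```python
import faulthandler
import os
import signal

with open("dump.txt", "w") as f:
    faulthandler.register(signal.SIGUSR1, file=f, all_threads=True)
    os.kill(os.getpid(), signal.SIGUSR1)  # trigger a traceback dump

with open("dump.txt") as f:
    dump = f.read()
print("Current thread" in dump)  # True
```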
Albert Zeyer added the comment:
Any update here?
--
nosy: +Albert.Zeyer
versions: +Python 2.7
Python tracker
<http://bugs.python.org/issue5710>
Albert Zeyer added the comment:
Ah thanks, that explains why it failed for me, and why it works after my fix,
which was anyway what I intended.
I mostly posted my comment here in case someone else hits this, so he has
another thing to check/debug.
I don't think that there is a b
Albert Zeyer added the comment:
Note that there are still people who get this error in some strange cases, me
included.
E.g.:
http://stackoverflow.com/questions/27904936/python-exe-file-crashes-while-launching-on-windows-xp/32137554#32137554
This happened at a call to `os.urandom` for me
New submission from Albert Zeyer:
Code:
class C(object):
    def __init__(self, a, b=2, c=3):
        pass

class D(C):
    def __init__(self, d, **kwargs):
        super(D, self).__init__(**kwargs)

class E(D):
    def __init__(self, **kwargs):
        super
New submission from Albert Zeyer:
SIGUSR1/2 will get delivered to any running thread. The current thread of the
signal doesn't give any useful information. Try to get the current Python
thread which holds the GIL instead, or use NULL.
I have patched this for the external faulthandler m
New submission from Albert Zeyer:
The documentation of Py_Finalize() regarding freeing objects is not exactly
clear.
Especially: when I have called Py_INCREF somewhere, those objects will always
have ob_refcnt > 0 unless I call Py_DECREF somewhere. What about these objects?
Will they be freed
New submission from Albert Zeyer:
On MacOSX, when you build an ARC-enabled Dylib with backward compatibility for
e.g. MacOSX 10.6, some unresolved functions like
`_objc_retainAutoreleaseReturnValue` might end up in your Dylib.
Some reference about the issue:
1. http://stackoverflow.com/q
Albert Zeyer added the comment:
Thanks a lot for the long and detailed response! I didn't mean to start a
header war; I thought that my request was misunderstood and thus the header
changes were by mistake. But I guess it is a good suggestion to leave that
decision to a core dev.
I
Albert Zeyer added the comment:
I don't know whether I have an expression, and I want it also to work if it is
not an expression. Basically I really want the 'single' behavior. (My
not-so-uncommon use case: Have an interactive shell where the output on stdout
does not make sens
Albert Zeyer added the comment:
Btw., this turns out to be at least 4 kinds of separate bugs:
1. The crash from the testcase - when the interpreter shuts down.
2. Maybe the crash from my musicplayer app - if that is a different one. But
very related to the first one.
3. Many loops over the
Albert Zeyer added the comment:
> > Wouldn't it be better to expose and re-use the HEAD_LOCK and HEAD_UNLOCK
> > macros from pystate.c?
> I don't like holding locks before calling "alien" code, it's a recipe
> for deadlocks: for example, if another th
Albert Zeyer added the comment:
> Wouldn't it be better to expose and re-use the HEAD_LOCK and HEAD_UNLOCK
> macros from pystate.c?
The macro names HEAD_LOCK/HEAD_UNLOCK irritate me a bit. Protecting only the
head would not be enough. Any tstate object could be invalidated. But ac
Albert Zeyer added the comment:
Btw., while we are at this issue: I have seen many more loops over the threads
(via PyThreadState_Next). I have a bad feeling that many of these loops have
similar issues.
In this case, I am also not sure anymore that it really was a problem. I
originally
Albert Zeyer added the comment:
The symbols are there because it is a library which exports all the symbols.
Other debugging information is not there and I don't know any place where I
can get them.
It currently cannot work on Linux in the same way because the GUI is Cocoa only
righ
New submission from Albert Zeyer:
`compile(s, "", "single")` would generate a code object which
prints the value of the evaluated string if that is an expression. This is what
you would normally want in a REPL.
Instead of printing the value, it might make more sense to ret
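For reference, this is the 'single' behavior being discussed: a bare expression compiled in 'single' mode is routed through sys.displayhook, like in the REPL.

```python
import contextlib
import io

code = compile("1 + 2", "<input>", "single")
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(code)  # the value goes through sys.displayhook, which prints it
assert buf.getvalue() == "3\n"
```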
Albert Zeyer added the comment:
Sadly, that is quite complicated or almost impossible. It needs the MacOSX
system Python and that one lacks debugging information.
I just tried with the CPython from hg-2.7. But it seems the official Python
doesn't have objc bindings (and I also need
Albert Zeyer added the comment:
Here is one. Others are in the issue report on GitHub.
In Thread 5, the PyObject_SetAttr is where some attribute containing a
threading.local object is set to None. This threading.local object had a
reference to a sqlite connection object (in some TLS contexts
Albert Zeyer added the comment:
Note that in my original application where I encountered this (with sqlite),
the backtrace looks slightly different. It is at shutdown, but not at
interpreter shutdown - the main thread is still running.
https://github.com/albertz/music-player/issues/23
I was
Albert Zeyer added the comment:
The backtrace:
Thread 0:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x7fff8a54e386 __semwait_signal + 10
1 libsystem_c.dylib 0x7fff85e30800 nanosleep + 163
2 libsystem_c.dylib
Albert Zeyer added the comment:
The latest 2.7 hg still crashes.
--
Python tracker
<http://bugs.python.org/issue17263>
New submission from Albert Zeyer:
If you have some Py_BEGIN_ALLOW_THREADS/Py_END_ALLOW_THREADS in some tp_dealloc
and you use such objects in thread local storage, you might get crashes,
depending on which thread at what time is trying to cleanup such object.
I haven't fully figured ou
Albert Zeyer added the comment:
I don't quite understand. Shouldn't __getattr__ also work in old-style classes?
And the error itself ('staticmethod' object is not callable), shouldn't that be
impossible?
--
New submission from Albert Zeyer:
Code:
```
class Wrapper:
    @staticmethod
    def __getattr__(item):
        return repr(item)  # dummy

a = Wrapper()
print(a.foo)
```
Expected output: 'foo'
Actual output with Python 2.7:
Traceback (most recent call las
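For comparison (my own observation, not from the report): under Python 3, where all classes are new-style, the same code works as expected. Special-method lookup resolves the staticmethod descriptor to the plain function, which then receives only the attribute name.

```python
class Wrapper:
    @staticmethod
    def __getattr__(item):
        return repr(item)  # dummy

a = Wrapper()
print(a.foo)  # 'foo' under Python 3
```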
New submission from Albert Zeyer :
```
class Foo1(dict):
    def __getattr__(self, key): return self[key]
    def __setattr__(self, key, value): self[key] = value

class Foo2(dict):
    __getattr__ = dict.__getitem__
    __setattr__ = dict.__setitem__

o1 = Foo1()
o1.x = 42
print(o1, o1.x)
o2
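A runnable variant of the first class, plus the classic pitfall of forwarding `__getattr__` to `__getitem__` (this illustration is mine and not necessarily the point of the report): a missing attribute raises KeyError instead of AttributeError, so e.g. `hasattr()` does not return False but propagates the KeyError.

```python
class Foo1(dict):
    def __getattr__(self, key):
        return self[key]
    def __setattr__(self, key, value):
        self[key] = value

o1 = Foo1()
o1.x = 42
assert o1.x == 42 and o1["x"] == 42

# hasattr() only swallows AttributeError, so the KeyError escapes:
try:
    hasattr(o1, "missing")
    raised = False
except KeyError:
    raised = True
assert raised
```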
Albert Zeyer added the comment:
You might have opened several via `openpty`.
I am doing that here: https://github.com/albertz/PyTerminal
--
Python tracker
<http://bugs.python.org/issue12
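A minimal sketch of opening a pty pair via `os.openpty` (assuming default termios settings, i.e. canonical mode, on a POSIX system):

```python
import os

master, slave = os.openpty()
os.write(master, b"hello\n")   # appears as terminal input on the slave side
data = os.read(slave, 100)
os.close(master)
os.close(slave)
print(data)
```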
Albert Zeyer added the comment:
Even more problematic: the readline lib itself is absolutely not designed to be
used from multiple threads at once.
--
Python tracker
<http://bugs.python.org/issue12
New submission from Albert Zeyer :
PyOS_StdioReadline from Parser/myreadline.c is printing the prompt on stderr.
I think it should print it on the given parameter sys_stdout. Other readline
implementations (like from the readline module) also behave this way.
Even if it really is supposed to
Albert Zeyer added the comment:
Ok, it seems that the Modules/readline.c implementation is also not really
thread-safe... (Though I think it should be.)
--
Python tracker
<http://bugs.python.org/issue12
New submission from Albert Zeyer :
In Parser/myreadline.c PyOS_Readline uses a single lock (`_PyOS_ReadlineLock`).
I guess it is so that we don't have messed up stdin reads. Or are there other
technical reasons?
However, it should work to call this function from multiple threads
Albert Zeyer added the comment:
Whoops, sorry, invalid. It doesn't need to. It is handled in PyOS_Readline.
--
resolution: -> invalid
status: open -> closed
New submission from Albert Zeyer :
Modules/readline.c 's `call_readline` doesn't release the GIL while reading.
--
messages: 143226
nosy: Albert.Zeyer
priority: normal
severity: normal
status: open
title: readline implementation doesn't release the GIL
versi
New submission from Albert Zeyer :
In Parser/tokenizer.c, there is `PyOS_Readline(stdin, stdout, tok->prompt)`.
This ignores any `sys.stdin` / `sys.stdout` overwrites.
The usage should be like in Python/bltinmodule.c in builtin_raw_input.
--
messages: 143168
nosy: Albert.Ze
Albert Zeyer added the comment:
Simplified code:
```
from ast import *
globalsDict = {}
exprAst = Interactive(body=[
    FunctionDef(
        name=u'foo',
        args=arguments(args=[], vararg=None, kwarg=None, defaults=[]),
New submission from Albert Zeyer :
Code:
```
from ast import *
globalsDict = {}
body = [
    Assign(targets=[Name(id=u'argc', ctx=Store())],
           value=Name(id=u'None', ctx=Load())),
]
exprAst = Interactive(body=[
    FunctionDef(
New submission from Albert Zeyer :
Code:
```
from ast import *
globalsDict = {}
exprAst = Interactive(body=[FunctionDef(name=u'Py_Main',
    args=arguments(args=[Name(id=u'argc', ctx=Param()), Name(id=u'argv',
                         ctx=Param())], vararg=None, kwarg=None, defaults=[]),
Albert Zeyer added the comment:
PyPy bug report: https://bugs.pypy.org/issue806
--
Python tracker
<http://bugs.python.org/issue12608>
New submission from Albert Zeyer :
Code:
```
import ast
globalsDict = {}
fAst = ast.FunctionDef(
name="foo",
args=ast.arguments(
args=[], vararg=None, kwarg=None, defaults=[],
kwonlyargs=[], kw_defaults=[]),
body=[], deco
Albert Zeyer added the comment:
Whoops, looks like a duplicate of #1469629.
--
resolution: -> duplicate
status: open -> closed
New submission from Albert Zeyer :
The attached Python script leaks memory. It is clear that there is a reference
cycle (`__dict__` references `self`) but `gc.collect()` should find this.
--
components: Interpreter Core
files: py_dict_refcount_test.py
messages: 138062
nosy
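For reference, this is what `gc.collect()` is normally expected to handle (a plain `__dict__` self-reference cycle; the attached script apparently hit a case where this failed):

```python
import gc
import weakref

class A(object):
    pass

a = A()
a.self_ref = a        # cycle: a -> a.__dict__ -> a
r = weakref.ref(a)
del a
gc.collect()          # the cycle collector reclaims the object
print(r() is None)  # True
```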