Change by Charles-François Natali :
--
nosy: -neologix
___
Python tracker
<https://bugs.python.org/issue7946>
___
___
Python-bugs-list mailing list
Unsubscribe:
Change by Charles-François Natali :
--
nosy: -neologix
___
Python tracker
<https://bugs.python.org/issue12545>
Change by Charles-François Natali :
--
nosy: -neologix
___
Python tracker
<https://bugs.python.org/issue17263>
Change by Charles-François Natali :
--
nosy: -neologix
___
Python tracker
<https://bugs.python.org/issue12488>
Change by Charles-François Natali :
--
nosy: -neologix
___
Python tracker
<https://bugs.python.org/issue17852>
Change by Charles-François Natali :
--
nosy: -neologix
___
Python tracker
<https://bugs.python.org/issue22367>
Charles-François Natali added the comment:
I'm not convinced.
The reason is that using the number of CPU cores is just a heuristic
for a *default value*: the API allows the user to specify the number
of workers to use, so it's not really a limitation.
The problem is that if you try to think
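The point being made — the core count is only a *default*, and the API lets callers size the pool explicitly — can be sketched as follows (a minimal illustration, not code from the thread):

```python
from multiprocessing import Pool

# The CPU count is only the *default* pool size; pass processes=
# explicitly when the heuristic is wrong for your workload.
with Pool(processes=2) as pool:
    print(pool.map(abs, [-1, -2, -3]))
```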
Charles-François Natali added the comment:
The rationale for rejecting wouldn't be "DRY does not apply in this
case", it would be that this makes the code more complicated, and that
a negligible speedup would not be worth it.
Now, thanks to your benchmark, a 10% speedup is not negl
Charles-François Natali added the comment:
This refactoring was already suggested a long time ago, and at the
time both Guido and I didn't like it because it makes the code
actually more complicated: DRY in this case doesn't apply IMO.
Also, this whole thread is a repeat of:
http
Charles-François Natali added the comment:
> I don't think that selector.modify() can be a bottleneck, but IMHO the change
> is simple and safe enough to be worth it. In a network server with 10k
> client, an optimization making .modify() 1.52x faster is welcomed.
IMHO it complicates
Charles-François Natali added the comment:
Hm, do you have a realistic benchmark which would show the benefit?
Because this is really a micro-benchmark, and I'm not convinced that
Selector.modify() is a significant bottleneck in a real-world
application
Charles-François Natali added the comment:
FWIW I agree with Antoine and Martin: ignoring EBADF is a bad idea,
quite dangerous.
The man page probably says this to highlight that users shouldn't
*retry* close():
"""
Retrying the close() after a failure return is the wr
Charles-François Natali added the comment:
One reason for not calling sys.exit() is because on Linux, the default
implementation uses fork(), hence the address space in the children is
a clone of the parent: so all atexit handlers, for example, would be
called multiple times.
There's also
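A minimal sketch of the point above, assuming Linux fork() semantics (the flow is illustrative, not the actual multiprocessing code): the child is a clone of the parent, inherited atexit registrations included, so exiting it via sys.exit() would run those handlers a second time; os._exit() skips them.

```python
import os

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: a clone of the parent's address space, including any
    # registered atexit handlers. sys.exit() would run them a second
    # time here, so exit with os._exit(), which skips them.
    os.write(w, b"child used os._exit")
    os._exit(0)

os.close(w)
msg = os.read(r, 100).decode()
os.waitpid(pid, 0)
print(msg)
```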
Charles-François Natali added the comment:
Shouldn't the documentation be updated?
https://docs.python.org/3.6/library/weakref.html#weakref.WeakKeyDictionary
Caution: Because a WeakKeyDictionary is built on top of a Python
dictionary, it must not change size when iterating over
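The documented caveat can be worked around by snapshotting the keys before mutating, e.g.:

```python
import weakref

class Key:
    pass

keys = [Key() for _ in range(3)]          # strong refs keep the entries alive
d = weakref.WeakKeyDictionary((k, i) for i, k in enumerate(keys))

# Mutating while iterating the WeakKeyDictionary directly is unsafe:
# a key dying mid-loop changes the underlying dict's size. Iterate
# over a snapshot of the keys instead.
for k in list(d.keys()):
    d[k] *= 10

print(sorted(d.values()))
```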
Charles-François Natali added the comment:
The heap on Linux is still a linear contiguous *address space*. I
agree that MADV_DONTNEED allow's returning committed memory back to
the VM subsystem, but it is still using a large virtual memory area.
Not everyone runs on 64-bit, or can waste address
Charles-François Natali added the comment:
> Julian Taylor added the comment:
>
> it defaulted to 128kb ten years ago, its a dynamic threshold since ages.
Indeed, and that's what encouraged switching the allocator to use mmap.
The problem with dynamic mmap threshold is that since t
Changes by Charles-François Natali <cf.nat...@gmail.com>:
--
resolution: -> fixed
stage: patch review -> resolved
status: open -> closed
___
Python tracker <rep...@bugs.python.org>
<http://bu
Changes by Charles-François Natali <cf.nat...@gmail.com>:
--
stage: needs patch -> resolved
status: open -> closed
___
Python tracker <rep...@bugs.python.org>
<http://bugs.
Charles-François Natali added the comment:
Anyone opposed to me committing the patch I submitted?
It solves a real problem, and is fairly straightforward (and conceptually more
correct).
--
___
Python tracker <rep...@bugs.python.org>
New submission from Charles-François Natali:
Consider this code:
-
from __future__ import print_function
from pyccp.unittest import SafeTestCase
class MyTest(SafeTestCase):
    def setUp(self):
        print("setUp")
    def tearDown(self
Charles-François Natali added the comment:
I understand the risk of breakage, but that's still broken, because
we break LIFO ordering.
I'd been using addCleanup() for years and never bothered looking at
the documentation - which is admittedly a mistake - because LIFO
ordering is the natural
Charles-François Natali added the comment:
It's just a doc improvement, I'm not convinced it really needs backporting.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23530
Charles-François Natali added the comment:
Committed.
Julian, thanks for the patch!
--
resolution: -> fixed
stage: -> resolved
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23530
Charles-François Natali added the comment:
Barring any objections, I'll commit within the next few days.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23992
Changes by Charles-François Natali cf.nat...@gmail.com:
--
keywords: +needs review
nosy: +haypo
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue24303
Changes by Charles-François Natali cf.nat...@gmail.com:
--
keywords: +needs review
nosy: +haypo
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23992
Charles-François Natali added the comment:
Here's a patch against 2.7 using _PyOS_URandom(): it should apply as-is to 3.3.
--
keywords: +patch
nosy: +neologix
versions: +Python 3.3
Added file: http://bugs.python.org/file39679/mp_sem_race.diff
New submission from Charles-François Natali:
The following segfaults in _PyObject_GenericGetAttrWithDict:
from socket import socketpair
from _multiprocessing import Connection

class Crash(Connection):
    pass

_, w = socketpair()
Crash(w.fileno()).bar
#0 _PyObject_GenericGetAttrWithDict
Changes by Charles-François Natali cf.nat...@gmail.com:
Removed file: http://bugs.python.org/file39171/mp_map_fail_fast_default.diff
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23992
Changes by Charles-François Natali cf.nat...@gmail.com:
Removed file: http://bugs.python.org/file39170/mp_map_fail_fast_27.diff
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23992
Changes by Charles-François Natali cf.nat...@gmail.com:
Added file: http://bugs.python.org/file39172/mp_map_fail_fast_27.diff
Added file: http://bugs.python.org/file39173/mp_map_fail_fast_default.diff
___
Python tracker rep...@bugs.python.org
http
Charles-François Natali added the comment:
Patches for 2.7 and default.
--
keywords: +patch
Added file: http://bugs.python.org/file39170/mp_map_fail_fast_27.diff
Added file: http://bugs.python.org/file39171/mp_map_fail_fast_default.diff
___
Python
New submission from Charles-François Natali:
hanger.py:

from time import sleep

def hang(i):
    sleep(i)
    raise ValueError("x" * 1024**2)

The following code will deadlock on pool.close():

from multiprocessing import Pool
from time import sleep
from hanger import hang

with Pool
Charles-François Natali added the comment:
fstat_not_eintr.py: run this script from a NFS share and unplug the network
cable, wait, replug. Spoiler: fstat() hangs until the network is back, CTRL+c
or setitimer() don't interrupt it.
You have to mount the share with the eintr option
Charles-François Natali added the comment:
+1 from me, fstat() has always been part of POSIX.
It's really likely Python won't build anyway on such systems.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23753
Charles-François Natali added the comment:
Serhiy Storchaka added the comment:
See also issue12082.
Yes, but I don't think we want to clutter the code to support exotic
niche platforms.
--
___
Python tracker rep...@bugs.python.org
http
Charles-François Natali added the comment:
Could you regenerate your latest patch?
It doesn't show properly in the review tool.
Also, what's with the assert?
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23618
Charles-François Natali added the comment:
Well, all the syscalls which can block can fail with EINTR, so all
I/O related ones.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23648
Charles-François Natali added the comment:
LGTM.
Note that dup() cannot fail with EINTR, it is non-blocking: dup2() can
fail, because if the target FD is open, it has to close it, but not
dup().
See e.g. this man page from my Debian:
EINTR The dup2() or dup3() call was interrupted
Charles-François Natali added the comment:
As for the change to select/poll/etc, IIRC Guido was opposed to it,
that's why I didn't update them.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23646
Charles-François Natali added the comment:
If EINTR is received during connect, the socket is unusable; that's why I
didn't implement it.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23618
Charles-François Natali added the comment:
@Victor: please commit.
It would be nice to have a test for it.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23285
Charles-François Natali added the comment:
Well, we already expose CPU affinity:
>>> import os
>>> os.sched_getaffinity(0)
{0}
IMO the current implementation is sufficient (and talking about
overcommitting for CPU is a bit moot if you're using virtual machine
anyways).
The current documentation
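The distinction being drawn — the machine's total CPU count vs. the CPUs this process is actually allowed to use — is visible directly (sched_getaffinity is Linux-only):

```python
import os

# Logical CPUs in the machine:
total = os.cpu_count()
# CPUs this process may actually run on (respects taskset/cgroup
# restrictions); Linux-only:
usable = len(os.sched_getaffinity(0))

print(total, usable)
```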
Charles-François Natali added the comment:
@neologix: Would you be ok to add a *private* _at_fork() method to selectors
classes in Python 3.4 to fix this issue?
Not really: after fork(), you're hosed anyway:
Q6 Will closing a file descriptor cause it to be removed from
all epoll
Charles-François Natali added the comment:
Does anyone have a realistic use case where modify() is actually a
non-negligible part?
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18932
Charles-François Natali added the comment:
Antoine Pitrou added the comment:
Would it be possible to push the latest patch right now
It's ok for me. Please watch the buildbots :)
Cool, I'll push on Friday evening or Saturday.
--
___
Python
Charles-François Natali added the comment:
Well, I'd like to see at least one benchmark.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue18932
Charles-François Natali added the comment:
It's a kernel bug, closing (works fine on my Debian wheezy with a more recent
kernel BTW).
--
resolution: -> third party
status: open -> closed
___
Python tracker rep...@bugs.python.org
http
Changes by Charles-François Natali cf.nat...@gmail.com:
--
status: open -> pending
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23351
Charles-François Natali added the comment:
I just realized I didn't retry upon EINTR for open(): eintr-4.diff
adds this, along with tests (using a fifo).
Also, I added comments explaining why we don't retry upon close() (see
e.g. http://lwn.net/Articles/576478/ and
http://linux.derkeiler.com
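The retry logic under discussion can be sketched like this (an illustration of the idea, not the submitted patch; PEP 475 later moved the open() retry into the interpreter itself):

```python
import os

def open_retrying(path, flags):
    # open() is safe to retry after EINTR: nothing has happened yet.
    while True:
        try:
            return os.open(path, flags)
        except InterruptedError:
            continue

def close_once(fd):
    # close() must NOT be retried: even on EINTR the FD may already be
    # released, and a retry could close an FD just reopened by another
    # thread.
    os.close(fd)

fd = open_retrying("/dev/null", os.O_RDONLY)
close_once(fd)
print("ok")
```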
Charles-François Natali added the comment:
The way socket timeouts are implemented is by using select() to determine
whether the socket is ready for read/write. In this case, select() probably
marks the socket ready even though the queue is full, which later raises
EAGAIN.
Indeed
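The mechanism described above can be sketched as follows (a hypothetical helper, not the socket module's C implementation): select() for writability, then retry sendto() if it still raises EAGAIN.

```python
import select
import socket

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))     # throwaway receiver for the demo

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setblocking(False)

def sendto_with_timeout(data, addr, timeout):
    # A socket timeout is implemented by select()ing for writability
    # first; for a datagram socket, select() may report the socket
    # writable even though the device queue is full, in which case
    # sendto() still raises BlockingIOError (EAGAIN) and we go back
    # to waiting.
    while True:
        _, writable, _ = select.select([], [s], [], timeout)
        if not writable:
            raise TimeoutError("timed out")
        try:
            return s.sendto(data, addr)
        except BlockingIOError:
            continue

print(sendto_with_timeout(b"ping", recv_sock.getsockname(), 1.0))
```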
Charles-François Natali added the comment:
With eintr-2.diff, fast!:
Victory \o/.
Instrumented test_send, 3 socket.send calls, many socket.recv_into calls:
Yep, that's expected.
I think we should keep the default socket buffer size: it increases
the test coverage, and it's probably
Charles-François Natali added the comment:
OK, it turns out the culprit was repeated calls to BytesIO.getvalue(),
which forced large allocations upon reception of every message.
The patch attached fixes this (without changing the socket buffer size).
--
Added file: http
Charles-François Natali added the comment:
eintr-1.diff doesn't seem to make any significant difference from eintr.diff
on my system. It's still pegging a CPU at 100% and takes 7 minutes wall time
to complete.
Alright, enough playing around: the patch attached uses a memoryview
Charles-François Natali added the comment:
I added a few prints to the send and receive loops of _test_send. When
running on a reasonably current Debian testing Linux:
Thanks, that's what I was suspecting, but I really don't understand
why 200ms isn't enough for a socket write to actually
Charles-François Natali added the comment:
Charles-François Natali added the comment:
Hmmm...
Basically, with a much smaller socket buffer, we get much more context
switches, which increases drastically the test runtime.
But I must admit I'm still really surprised by the time it takes
Charles-François Natali added the comment:
It turns out the times are not important; the hangup is the default size of
the socket buffers on OS X and possibly BSD in general. In my case, the send
and receive buffers are 8192, which explains why the chunks written are so
small.
Hmmm
Charles-François Natali added the comment:
Or we should acknowledge that this is overkill, and take the same approach as
all major web browser: disable the Nagle algorithm.
For a protocol like http which is transaction oriented it's probably the best
thing to do.
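Disabling Nagle, as the browsers mentioned above do, is a one-line socket option:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: small writes (e.g. an HTTP request line)
# go out immediately instead of waiting to be coalesced with later data.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
```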
--
nosy: +neologix
Charles-François Natali added the comment:
Interestingly, there is no close() method on SimpleQueue...
--
nosy: +neologix
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23267
Charles-François Natali added the comment:
The review diff is weird: it seems it contains changes that aren't
EINTR-related (see e.g. argparse.rst).
Here's a manually generated diff.
--
Added file: http://bugs.python.org/file37802/eintr.diff
Changes by Charles-François Natali cf.nat...@gmail.com:
--
components: Library (Lib)
hgrepos: 293
nosy: haypo, neologix, pitrou
priority: normal
severity: normal
status: open
title: PEP 475 - EINTR hanndling
type: enhancement
___
Python tracker rep
Changes by Charles-François Natali cf.nat...@gmail.com:
--
keywords: +patch
Added file: http://bugs.python.org/file37797/ff1274594739.diff
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23285
New submission from Charles-François Natali:
The test runs fine on Linux, but hangs in test_send() on OS-X and *BSD.
I don't know what's wrong, so if someone with access to one of these OS could
have a look, it would be great.
--
___
Python tracker
Changes by Charles-François Natali cf.nat...@gmail.com:
--
title: PEP 475 - EINTR hanndling -> PEP 475 - EINTR handling
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23285
Charles-François Natali added the comment:
RuntimeError sounds better to me (raising ValueError when no value is
provided, e.g. in select() sounds definitely strange).
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23225
Charles-François Natali added the comment:
Thanks for taking care of this.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23009
___
___
Python
Charles-François Natali added the comment:
Serhiy, I believe this still happens in Python 3.4, but it is harder to
reproduce. I couldn't get Armin's script to produce the problem either, but
I'm pretty sure that this is what causes e.g.
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug
Charles-François Natali added the comment:
Maries Ionel Cristian added the comment:
Serhiy, I don't think this is a duplicate. Odd that you closed this without
any explanation.
This happens in a internal lock in cpython's runtime, while the other bug is
about locks used in the logging
Charles-François Natali added the comment:
Adding ioctl constants is fine.
However, I think that if we do this, it'd be great if we could also
expose this information in a module (since psutil inclusion was
discussed recently), but that's probably another issue
Charles-François Natali added the comment:
Annoying.
I thought CAN_RAW_FD_FRAME would be a macro, which would have made conditional
compilation easy, but it's apparently an enum value, which means we have to add
a configure-time check...
--
components: +Library (Lib) -IO
Changes by Charles-François Natali cf.nat...@gmail.com:
--
resolution: -> fixed
stage: patch review -> resolved
status: open -> closed
title: add a fallback socketpair() implementation in test.support -> add a
fallback socketpair() implementation to the socket module
Charles-François Natali added the comment:
My only comment would be to use subprocess instead of os.popen().
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue17293
Charles-François Natali added the comment:
Why is that a different issue?
The code you *add in this patch* uses os.popen, why not use subprocess instead?
Furthermore, the code catches OSError when calling popen(), but
popen() doesn't raise an exception
Charles-François Natali added the comment:
Note that I'm not fussed about it: far from simplifying the code, it
will make it more complex, thus more error-prone.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22181
Charles-François Natali added the comment:
Agreed with Antoine and Benjamin.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21963
___
___
Python
Charles-François Natali added the comment:
Let's try with this instead:
from socket import socket, SO_SNDBUF, SOL_SOCKET
s = socket()
s.getsockopt(SOL_SOCKET, SO_SNDBUF)
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org
Charles-François Natali added the comment:
In this case, the issues are being caused by the following kernel parameters
that we have for our default build -
#
## TIBCO network tuning #
#
net.core.rmem_default = 33554432
Charles-François Natali added the comment:
Patch attached.
The test wouldn't result in FD exhaustion on CPython because of the reference
counting, but should still trigger ResourceWarning.
--
keywords: +patch
nosy: +haypo, pitrou
stage: -> patch review
Added file: http
Charles-François Natali added the comment:
Thanks, I committed a simpler version of the patch.
--
resolution: -> fixed
stage: test needed -> resolved
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22378
Charles-François Natali added the comment:
Note that even the bytes version is still quite slow. UDP is used for
light-weight protocols where you may send thousands or more messages per
second. I'd be curious what the sendto() performance is in raw C.
Ah, I wouldn't rely on the absolute
Charles-François Natali added the comment:
Parsing a bytes object i.e. b'127.0.0.1' is done by inet_pton(), so
it's probably cheap (compared to a syscall).
If we had getaddrinfo() and gethostbyname() return bytes instead of
strings, it would be a huge gain
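The overhead can be illustrated directly (a rough sketch of the hypothesis, not a benchmark): a str host is run through the idna codec before reaching the C layer, while a bytes host skips that work.

```python
# A str address goes through the idna codec on every call, even for a
# plain dotted quad; a bytes address avoids the codec entirely.
host = "127.0.0.1"
encoded = host.encode("idna")   # the per-call work a str host incurs
print(encoded == b"127.0.0.1")
```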
Charles-François Natali added the comment:
Charles-François: you get the idna overhead in 2.7, too, by specifying
u'127.0.0.1' as the address.
I don't see it in a profile output, and the timing doesn't change
whether I pass '127.0.0.1' or b'127.0.0.1' in 2.7
Charles-François Natali added the comment:
Please understand that Victor and I were asking you to pass a *unicode*
object, with a *u* prefix. For me, the time more-than-doubles, on OSX, with
the system python.
Sorry, I misread 'b'.
it's a day without
New submission from Charles-François Natali:
I noticed that socket.sendto() got noticably slower since 3.4 (at least),
compared to 2.7:
2.7:
$ ./python -m timeit -s "import socket; s = socket.socket(socket.AF_INET,
socket.SOCK_DGRAM); DATA = b'hello'; TARGET = ('127.0.0.1', 4242)"
"s.sendto(DATA
Changes by Charles-François Natali cf.nat...@gmail.com:
--
title: performance regression in socket.getsockaddr() -> performance regression
in socket getsockaddrarg()
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22127
Charles-François Natali added the comment:
For Python, the encoder is only used when you pass a Unicode string.
Hm...
I'm passing ('127.0.0.1', 4242)as destination, and you can see in the
above profile that the idna encode function is called.
This doesn't occur with 2.7
Charles-François Natali added the comment:
OK, I think I see what you mean:
$ ./python -m timeit -s "import socket; s =
socket.socket(socket.AF_INET, socket.SOCK_DGRAM)" "s.sendto(b'hello',
('127.0.0.1', 4242))"
1 loops, best of 3: 44.7 usec per loop
$ ./python -m timeit -s import socket; s
Charles-François Natali added the comment:
This patch should probably be moved to its own issue.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22120
Changes by Charles-François Natali cf.nat...@gmail.com:
--
Removed message: http://bugs.python.org/msg224550
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22120
Charles-François Natali added the comment:
Committed.
Sorry for the extra ~70 warnings :-)
--
resolution: -> fixed
stage: -> resolved
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22110
Charles-François Natali added the comment:
Antoine Pitrou added the comment:
Enabling the warnings may be a good incitation for other people to fix them ;)
That was my intention...
Can I push it, and let warnings be fixed on a case-by-case basis
Charles-François Natali added the comment:
Closing, since it's likely a kernel bug.
--
resolution: -> third party
stage: -> resolved
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19923
Charles-François Natali added the comment:
Closing, I haven't seen this in a while.
--
resolution: -> out of date
stage: needs patch -> resolved
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue15152
Charles-François Natali added the comment:
Thanks for the reminder Mark.
Yes, it is probably still an issue with the latest 2.7 release.
There were actually two issues:
- the send()/sendall() call didn't block because the test doesn't write enough
data: we have since added a SOCK_MAX_SIZE
New submission from Charles-François Natali:
The patch attached enables -Wsign-compare and -Wunreachable-code if supported
by the compiler.
AFAICT, mixed sign comparison warning is automatically enabled by Microsoft's
compiler, and is usually a good thing.
It does add some warnings though
Charles-François Natali added the comment:
Richard Oudkerk added the comment:
I can't remember why I did not use fstat() -- probably it did not occur to me.
I probably have Alzheimer's, I was sure I heard a reasonable case for
dup() vs fstat().
The only thing I can think of is that fstat() can
Charles-François Natali added the comment:
I agree with Akira, although it's probably too late now to rename.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22054
Charles-François Natali added the comment:
Backported to 2.7 (don't know how I forgot it).
3.3 is only open for security issues, so not backporting.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue19875
Charles-François Natali added the comment:
Pipes cannot be configured in non-blocking mode on Windows. It sounds
dangerous to call a blocking syscall in a signal handler.
In fact, it works to write the signal number into a pipe on Windows, but I'm
worried about the blocking behaviour.
OK
Charles-François Natali added the comment:
In the issue #22042, I would like to make automatically the file descriptor
or socket handler in non-blocking mode. The problem is that you cannot make a
file descriptor in non-blocking mode on Windows.
I don't think we should set it non-blocking
1 - 100 of 1823 matches