[issue18677] Enhanced context managers with ContextManagerExit and None
Kristján Valur Jónsson added the comment:

Having given this some thought, years later, I believe it _is_ possible to write nested() (and nested_delayed()) in a correct way in Python, without the ContextManagerExit exception. Behold!

    import contextlib

    @contextlib.contextmanager
    def nested_delayed(*callables):
        """Instantiate and invoke context managers in a nested way.

        Each argument is a callable which returns an instantiated
        context manager.
        """
        if len(callables) > 1:
            with nested_delayed(*callables[:-1]) as a, callables[-1]() as b:
                yield a + (b,)
        elif len(callables) == 1:
            with callables[0]() as a:
                yield (a,)
        else:
            yield ()

    def nested(*managers):
        """Invoke pre-instantiated context managers in a nested way."""
        def helper(m):
            """Return a callable that returns the pre-instantiated manager."""
            def callable():
                return m
            return callable
        return nested_delayed(*(helper(m) for m in managers))

    @contextlib.contextmanager
    def ca():
        print("a")
        yield 1

    class cb:
        def __init__(self):
            print("instantiating b")
        def __enter__(self):
            print("b")
            return 2
        def __exit__(self, *args):
            pass

    @contextlib.contextmanager
    def cc():
        print("c")
        yield 3

    combo = nested(ca(), cb(), cc())
    combo2 = nested_delayed(ca, cb, cc)
    with combo as a:
        print("nested", a)
    with combo2 as a:
        print("nested_delayed", a)
    with ca() as a, cb() as b, cc() as c:
        print("syntax", (a, b, c))

--
___
Python tracker <https://bugs.python.org/issue18677>
___
___
Python-bugs-list mailing list
Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18677] Enhanced context managers with ContextManagerExit and None
Kristján Valur Jónsson added the comment:

Great throwback. As far as I know, context managers are still not first-class citizens. You cannot _compose_ two context managers into a new one programmatically in the language, in the same way that you can, for instance, compose two functions. Not even with "eval()" is this possible. This means that the choice of context manager, or context managers, to be used has to be known when writing the program. You cannot pass an assembled context manager in as an argument, or otherwise use a "dynamic" context manager at run time, unless you decide to use only a fixed number of nested ones. Any composition of context managers becomes syntax _at the point of invocation_.

The restriction is similar to not allowing composition of functions, i.e. having to write `fa(fb(fc()))` at the point of invocation and not having the capability of doing

```
def fd():
    return fa(fb(fc()))
...
fd()
```

I think my "ContextManagerExit" exception provided an elegant solution to the problem and opened up new and exciting possibilities for context managers and how to use them. But this note is just a lament. I stopped contributing to core Python years ago, because it became more of an exercise in lobbying than anything else. Cheers!

--
___
Python tracker <https://bugs.python.org/issue18677>
___
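For what it's worth, modern `contextlib.ExitStack` (added in Python 3.3, after this discussion) can be used to build a runtime composition helper. The names `compose` and `tag` below are hypothetical, chosen for this sketch; it shows one way to turn a variable number of pre-instantiated managers into a single manager:

```python
from contextlib import ExitStack, contextmanager

@contextmanager
def compose(*managers):
    # Enter each manager in order; ExitStack unwinds them in reverse
    # order on exit, mimicking lexical "with a, b, c:" nesting.
    with ExitStack() as stack:
        yield tuple(stack.enter_context(m) for m in managers)

log = []

@contextmanager
def tag(name):
    log.append("enter " + name)
    yield name
    log.append("exit " + name)

with compose(tag("a"), tag("b")) as (a, b):
    values = (a, b)
```

After the block, `log` records the nested enter/exit order, and a `compose(...)` object can itself be passed around as an argument, which is the composability the comment asks for.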
[issue16487] Allow ssl certificates to be specified from memory rather than files.
Kristján Valur Jónsson added the comment:

I gave up contributing a long time ago now because it was too emotionally exhausting to me. This issue was one that helped tip the scales. I hope things have become easier now, because good projects like Python need the enthusiasm and spirit of volunteer contributors. Good luck.

--
___
Python tracker <https://bugs.python.org/issue16487>
___
[issue17639] symlinking .py files creates unexpected sys.path
Kristján Valur Jónsson added the comment:

So you have already stated, and this issue is six years old now. While I no longer have a stake in this, I'd just like to reiterate that IMHO it breaks several good practices of architecture, particularly that of separation of roles. The abstraction called symbolic links is the domain of the filesystem. An application should accept the image that the filesystem offers, not try to second-guess the intent of an operator by arbitrarily, and unexpectedly, unrolling that abstraction.

While you present a use case, I argue that it isn't, and shouldn't be, the domain of the application to intervene in an essentially shell-specific, and operator-specific, process of collecting his favorite shortcuts in a folder. For that particular use case, a more sensible way would be for the user to simply create shell shortcuts, even aliases, for his favorite Python scripts. This behaviour is basically taking over what should be the role of the shell. I'm unable to think of another program doing this sort of thing.

I suppose that now, with the reworked startup process, it would be simpler to actually document this rather unexpected behaviour, and possibly provide a flag to override it. I know that I spent some time on this and came away rather stumped.

--
___
Python tracker <https://bugs.python.org/issue17639>
___
[issue14307] Make subclassing SocketServer simpler for non-blocking frameworks
Kristján Valur Jónsson added the comment:

Nice necro :) SocketServer is already subclassable and overridable for so many things. Hard to understand the reluctance to _allow_ for a different way to handle accept timeouts. But this is also why I stopped contributing to core, because it turned out to be more about lobbying than anything else. Anyway, this is already implemented in PythonClassic, so no need for me to push it upstream :)

--
___
Python tracker <https://bugs.python.org/issue14307>
___
[issue38106] Race in PyThread_release_lock - can lead to memory corruption and deadlock
Kristján Valur Jónsson added the comment:

IMHO, POSIX made a mistake in allowing signal/broadcast outside the mutex. Otherwise an implementation could rely on the mutex for internal state manipulation. I have my own fast condition variable lib implemented using semaphores, and it is simple to do if one requires the mutex to be held for the signal event. Condition variable semantics are otherwise quite brilliant. For example, allowing for spurious wakeups to occur allows, again, for a much simpler implementation.

--
___
Python tracker <https://bugs.python.org/issue38106>
___
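To illustrate the point, here is a minimal sketch (not the library mentioned in the comment) of a condition variable built from semaphores. It stays simple precisely because it *requires* the mutex to be held when `notify()` is called, so the waiter queue needs no protection of its own:

```python
import threading
from collections import deque

class SemaphoreCondition:
    """Toy condition variable; notify() must be called with the lock held."""
    def __init__(self):
        self._lock = threading.Lock()
        self._waiters = deque()

    def __enter__(self):
        self._lock.acquire()
        return self

    def __exit__(self, *exc):
        self._lock.release()

    def wait(self):
        # Safe to touch the queue: caller holds the lock.
        sem = threading.Semaphore(0)
        self._waiters.append(sem)
        self._lock.release()
        try:
            sem.acquire()          # block until a notify releases us
        finally:
            self._lock.acquire()   # re-acquire before returning

    def notify(self):
        # Relies on the caller holding the lock -- the simplification
        # that POSIX's "signal outside the mutex" rule forbids.
        if self._waiters:
            self._waiters.popleft().release()

cond = SemaphoreCondition()
flag = []

def waiter():
    with cond:
        while not flag:            # predicate re-test handles races
            cond.wait()

t = threading.Thread(target=waiter)
t.start()
with cond:
    flag.append(True)
    cond.notify()
t.join(5)
```

The predicate loop in `waiter()` also shows why tolerating spurious wakeups costs nothing: correct users re-test the predicate anyway.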
[issue8410] Fix emulated lock to be 'fair'
Kristján Valur Jónsson added the comment:

Super, good catch!

--
___
Python tracker <https://bugs.python.org/issue8410>
___
[issue38106] Race in PyThread_release_lock - can lead to memory corruption and deadlock
Kristján Valur Jónsson added the comment:

Interesting. Yet another reason to always do condition signalling with the lock held, as is good practice to avoid race conditions. That's the whole point of condition variables.

--
___
Python tracker <https://bugs.python.org/issue38106>
___
[issue36402] threading._shutdown() race condition: test_threading test_threads_join_2() fails randomly
Kristján Valur Jónsson added the comment:

Please note that this fix appears to be the cause of #37788

--
nosy: +kristjan.jonsson

___
Python tracker <https://bugs.python.org/issue36402>
___
[issue34659] Inconsistency between functools.reduce & itertools.accumulate
Kristján Valur Jónsson added the comment:

I think I'll pass, Raymond. It's been so long since I've contributed; in the meantime there is GitHub and Argument Clinic and whatnot, so I'm out of training. I'm lurking around these parts and maybe shall return one day :)

--
___
Python tracker <https://bugs.python.org/issue34659>
___
[issue34573] Simplify __reduce__() of set and dict iterators.
Kristján Valur Jónsson added the comment:

Interesting, I'll have a look when I'm back from vacation.

On Tue, 4 Sep 2018, 07:04 Raymond Hettinger, wrote:

> Raymond Hettinger added the comment:
>
> Also take a look at the other places that have similar logic. I believe
> these all went in at the same time. See commit
> 31668b8f7a3efc7b17511bb08525b28e8ff5f23a
>
> --
> nosy: +kristjan.jonsson, rhettinger
>
> ___
> Python tracker
> <https://bugs.python.org/issue34573>
> ___

--
___
Python tracker <https://bugs.python.org/issue34573>
___
[issue9141] Allow objects to decide if they can be collected by GC
Kristján Valur Jónsson added the comment:

Hi there! By the time PEP 442 was introduced, I wasn't very active in Python core stuff anymore, and still am not. The intent of this patch, which is explained (IMHO) quite clearly in the first few comments, was to:

- Formalize a way for custom objects to tell the GC "No, please don't delete my references _at this time_, because if you do, I will have to run non-trivial code that may wreak havoc". This is different from just having a __del__ method. Sometimes deleting is okay. Sometimes not.
- Make this way available to all objects, not just generator objects. We had already identified a separate such instance in Stackless Python, and it seemed prudent to "give back" the generalization that we made there for the benefit of Python at large.
- Not introduce new slots for this purpose.

Now, with PEP 442, I have no idea how generators can postpone being garbage collected, since I'm honestly not familiar with how things work now. I have no particular skin in this game anymore; I'm no longer actively working on Stackless or Python integrations, and I stopped trying to push stuff through the bug tracker to preserve my sanity. So, let's just close this until the day in the future when the need arises once more :)

--
resolution: -> out of date
stage: -> resolved
status: open -> closed

___
Python tracker <https://bugs.python.org/issue9141>
___
[issue16487] Allow ssl certificates to be specified from memory rather than files.
Kristján Valur Jónsson <swesk...@gmail.com> added the comment:

OP here, lurking. The need to load server certificates from memory is quite real. Some seven years ago I wrote custom code to handle that for CCP's Python branch, and contributed patches to that effect. It's always dismaying to see how people's efforts get bogged down by one thing or another. So, now there is a PEP that prohibits this change? Fun times.

2017-11-30 12:03 GMT+00:00 Christian Heimes <rep...@bugs.python.org>:

> Christian Heimes <li...@cheimes.de> added the comment:
>
> I'm working on a PEP that builds on top of PEP 543 and addresses some
> issues like IDNA #28414, OpenSSL/LibreSSL compatibility, hostname
> verification, verification chain, and TLS 1.3. As part of the PEP
> implementation, I'll add a certificate class.
>
> I don't want to introduce yet another way to load a certificate. The C
> code is already complicated enough.
>
> --
>
> ___
> Python tracker <rep...@bugs.python.org>
> <https://bugs.python.org/issue16487>
> ___

--
___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue16487>
___
[issue30703] Non-reentrant signal handler (test_multiprocessing_forkserver hangs)
Kristján Valur Jónsson added the comment:

Thanks for the mention, @pitrou. CCP was using Py_AddPendingCall, but not from signal handlers — from external threads. Also on Windows only. You'll also be happy to know that I have left CCP, and the Eve codebase is being kept stable while regularly adding security patches from the 2.7 codebase, as far as I know :)

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30703>
___
[issue30727] [2.7] test_threading.ConditionTests.test_notify() hangs randomly on Python 2.7
Kristján Valur Jónsson added the comment:

My favorite topic :) You could use threading.Barrier(), which is designed to synchronize N threads doing this kind of lock-step processing. The problem is that Barrier() is implemented using condition variables, so for unit-testing condition variables, this presents a conundrum...

--
nosy: +kristjan.jonsson

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue30727>
___
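As a sketch of the lock-step pattern the comment refers to (the worker code here is illustrative, not from the test suite): every thread blocks at the barrier until all N have arrived, so no thread starts step k+1 before every thread has finished step k.

```python
import threading

N = 3
barrier = threading.Barrier(N)
results = []
res_lock = threading.Lock()

def worker(i):
    for step in range(2):
        barrier.wait()            # nobody proceeds until all N arrive
        with res_lock:
            results.append((step, i))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because the step-1 barrier cannot release until every thread has recorded its step-0 result, all step-0 entries precede all step-1 entries in `results`.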
[issue29897] itertools.chain behaves strangly when copied with copy.copy
Kristján Valur Jónsson added the comment:

It is a tricky issue. How deep do you go? What if you are chaining several of the itertools? Seems like we're entering a semantic sinkhole here. Deepcopy would be too deep... The original copy support in these objects stems from the desire to support pickling.

On 1 Apr 2017 16:12, "Raymond Hettinger" <rep...@bugs.python.org> wrote:

> Raymond Hettinger added the comment:
>
> Serhiy, feel free to take this in whatever direction you think is best.
>
> --
> assignee: -> serhiy.storchaka
>
> ___
> Python tracker <rep...@bugs.python.org>
> <http://bugs.python.org/issue29897>
> ___

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29897>
___
[issue29871] Enable optimized locks on Windows
Kristján Valur Jónsson added the comment:

Hi there. Looking at the API docs today (https://msdn.microsoft.com/en-us/library/windows/desktop/ms686304(v=vs.85).aspx) it appears that the timeout case is documented. I'm fairly sure that it wasn't when I implemented it.

There was a good reason for the "2" return code. The idea was: either there was an error (-1) or we woke up. Since there are spurious wakeups and stolen wakeups, the predicate must be tested again anyway. The '2' return code would mean that the timeout condition should be tested by looking at some external clock.

Now, the API documentation is bad:
- The return value is BOOL. Nonzero means "success" (whatever that means).
- Failure means zero.
- Timeout means FALSE and GetLastError() == ERROR_TIMEOUT.
If memory serves, FALSE == 0 on Windows.

Anyway, I've been out of this part of the code for long enough that the details are blurry. My advice:
1) Check that the API of the function is indeed correct.
2) If there is no bulletproof way of distinguishing timeout from normal return, just consider all returns normal (remember, a non-error return means that we woke up, not that _we_ were signaled).
3) Verify that the code that is failing can indeed support spurious/stolen wakeups. It used to be that the Python condition variables didn't have this property, because of the way they were implemented. This may have made people lazy.

K

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29871>
___
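The advice in points 2) and 3) — treat every non-error return as a wakeup, re-test the predicate, and judge timeout by an external clock — is the standard predicate-loop pattern. A sketch of it in Python terms (the helper name `wait_with_retest` is hypothetical; `threading.Condition.wait_for` implements essentially the same loop):

```python
import threading
import time

def wait_with_retest(cond, predicate, timeout):
    # Caller must hold cond's lock. A return from wait() only means
    # "we woke up" -- it may be spurious or stolen -- so the predicate
    # and an external clock decide when we are actually done.
    deadline = time.monotonic() + timeout
    result = predicate()
    while not result:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break                    # timed out by the external clock
        cond.wait(remaining)
        result = predicate()         # re-test after every wakeup
    return result

cond = threading.Condition()
with cond:
    ok = wait_with_retest(cond, lambda: True, 1.0)
with cond:
    t0 = time.monotonic()
    timed_out = not wait_with_retest(cond, lambda: False, 0.1)
    elapsed = time.monotonic() - t0
```

Code written this way is immune to both spurious wakeups and an ambiguous timeout indication from the underlying primitive.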
[issue13721] ssl.wrap_socket on a connected but failed connection succeeds and .peer_certificate gives AttributeError
Kristján Valur Jónsson added the comment:

FYI, I just observed this in the field in 2.7.3 using requests 2.5.3. I don't think requests has a workaround for 2.7, from reading the release logs.

--
nosy: +kristjan.jonsson

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue13721>
___
[issue8800] add threading.RWLock
Kristján Valur Jónsson added the comment:

Seems to have fizzled out due to the intense amount of bikeshedding required.

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue8800>
___
[issue27682] Windows Error 10053, ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine
Kristján Valur Jónsson added the comment:

As already stated, this error bubbles up from the TCP layer. It means that the TCP stack, for example, gave up resending a TCP frame and timed out, determining that the recipient was no longer listening. You cannot create this error yourself. If you, for example, call s.shutdown(SHUT_WR), you get a WSAESHUTDOWN error. If the connection is closed (via s.close()) you get an EBADF error. Now, the interaction with the client may cause the client to misbehave, but this sort of error is usually either due to the network (other host becomes unreachable) or misconfiguration of the local host's TCP stack.

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue27682>
___
[issue27682] Windows Error 10053, ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine
Kristján Valur Jónsson added the comment:

This error is a protocol error. It is the analog of WSAECONNRESET. ECONNRESET occurs when the local host receives a RST packet from the peer, usually because the peer closed the connection. WSAECONNABORTED occurs when the local TCP layer decides that the connection is dead (it may have sent RST to the peer itself). This can occur for various reasons, often because the client has gone away, closed the connection, or other things. It is best to treat WSAECONNRESET like WSAECONNABORTED, i.e., there was a TCP protocol error and the transaction (HTTP request) probably wasn't completed by both parties. See also here: https://www.chilkatsoft.com/p/p_299.asp

In your case, I would expect a problem with the client uploading the file. It probably closes the connection after sending the data without waiting for the HTTP response.

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue27682>
___
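The practical consequence for server code is to treat abort and reset alike: assume the transaction did not complete on both sides and drop the connection. A minimal sketch (the handler and its one-line response are illustrative, not from the reporter's application):

```python
import socket

def handle_request(conn):
    """Serve one request; treat reset and abort the same way."""
    try:
        data = conn.recv(4096)
        if data:
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
    except (ConnectionAbortedError, ConnectionResetError):
        # Client went away mid-transaction (WSAECONNABORTED /
        # WSAECONNRESET on Windows); nothing to salvage.
        pass
    finally:
        conn.close()

# Exercise the happy path with a local socket pair.
a, b = socket.socketpair()
a.sendall(b"GET / HTTP/1.1\r\n\r\n")
handle_request(b)
resp = a.recv(4096)
a.close()
```

On Python 3, both Windows error codes surface as the `OSError` subclasses caught above, so the same except clause covers both.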
[issue27406] subprocess.Popen() hangs in multi-threaded code
New submission from Kristján Valur Jónsson:

On a quad-core Raspberry Pi, I have experienced that subprocess.Popen() sometimes does not return immediately, but only much later, when an unrelated process has exited. Debugging the issue, I find the parent process hanging in:

    # Wait for exec to fail or succeed; possibly raising exception
    # Exception limited to 1M
    data = _eintr_retry_call(os.read, errpipe_read, 1048576)

This behaviour is consistent with the problem described in pipe_cloexec():

    def pipe_cloexec(self):
        """Create a pipe with FDs set CLOEXEC."""
        # Pipes' FDs are set CLOEXEC by default because we don't want them
        # to be inherited by other subprocesses: the CLOEXEC flag is removed
        # from the child's FDs by _dup2(), between fork() and exec().
        # This is not atomic: we would need the pipe2() syscall for that.
        r, w = os.pipe()
        self._set_cloexec_flag(r)
        self._set_cloexec_flag(w)
        return r, w

In short: it appears that occasionally the pipe FD is leaked to a different subprocess (started on a separate thread) before the CLOEXEC flags can be set. This causes the parent process to wait until that other instance of the file descriptor is closed, i.e. when that other unrelated process exits.

I currently have a workaround which involves using a threading.Lock() around the call. This is not very nice, however. Also, there is Issue #12196 which could be backported to 2.7 to address this problem.

--
components: Interpreter Core
messages: 269432
nosy: kristjan.jonsson
priority: normal
severity: normal
status: open
title: subprocess.Popen() hangs in multi-threaded code
type: behavior
versions: Python 2.7

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue27406>
___
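The threading.Lock() workaround mentioned in the report can be sketched as follows (the wrapper name `safe_popen` is hypothetical). It only serializes Popen() calls made through this wrapper in this process, so the pipe FDs of one call cannot leak into a child forked concurrently by another thread before their CLOEXEC flags are set; it does not help against forks done elsewhere:

```python
import subprocess
import sys
import threading

# Serialize process creation: on Python 2.7 the error pipe is created
# with os.pipe() and CLOEXEC is set afterwards, which is not atomic.
_popen_lock = threading.Lock()

def safe_popen(*args, **kwargs):
    with _popen_lock:
        return subprocess.Popen(*args, **kwargs)

p = safe_popen([sys.executable, "-c", "print('hi')"],
               stdout=subprocess.PIPE)
out, _ = p.communicate()
```

The real fix is the one the report points at: `pipe2(O_CLOEXEC)`, which creates the descriptors atomically with the flag set (Issue #12196, used by Python 3).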
[issue26739] idle: Errno 10035 a non-blocking socket operation could not be completed immediately
Kristján Valur Jónsson added the comment:

Hi there, everyone. I'm sorry for my rash remarks about the state of IDLE; I'm sure it is alive and well, and it's good to see that fine people like Terry are working on keeping it up to date. Michael, please understand that Python developers are volunteers and sometimes need help to fix things. In this case, we have not been able to reproduce the problem, and are not sure what can be causing it. My suggestion for you to modify code would be a step in identifying and diagnosing the problem. Without such feedback it is hard to accomplish anything. Cheers!

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue26739>
___
[issue26739] idle: Errno 10035 a non-blocking socket operation could not be completed immediately
Kristján Valur Jónsson added the comment:

I think that the select.select calls there are a red herring, since I see no evidence that the rpc socket is ever put in non-blocking mode. But the line

    self.rpcclt.listening_sock.settimeout(10)

indicates that the socket is in timeout mode, and so the error could be expected if it weren't for the backported fix for issue #9090. I'll have another look at that code and see if there are any loopholes.

Also, Michael could try commenting out this line in C:\Python27\Lib\idlelib\PyShell.py:

    self.rpcclt.listening_sock.settimeout(10)

and see if the problem goes away.

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue26739>
___
[issue26739] idle: Errno 10035 a non-blocking socket operation could not be completed immediately
Kristján Valur Jónsson added the comment:

Caveat emptor: I know nothing of IDLE, and I even suspect it to be dead or dying code. Nonetheless, it could be patched. I found this in the code:

    def putmessage(self, message):
        self.debug("putmessage:%d:" % message[0])
        try:
            s = pickle.dumps(message)
        except pickle.PicklingError:
            print >>sys.__stderr__, "Cannot pickle:", repr(message)
            raise
        s = struct.pack("<i", len(s)) + s
        while len(s) > 0:
            try:
                r, w, x = select.select([], [self.sock], [])
                n = self.sock.send(s[:BUFSIZE])
            except (AttributeError, TypeError):
                raise IOError, "socket no longer exists"
            except socket.error:
                raise
            else:
                s = s[n:]

If the socket were non-blocking, this would be the place to add a handler to catch socket.error with errno == errno.EWOULDBLOCK. However, I can't see that this socket is non-blocking. Perhaps I have some blindness, but the select calls seem to be redundant to me; I can't see any sock.setblocking(False) or sock.settimeout(0.0) being done anywhere.

Having said that, the following change can be made (which is the prudent way to use select/send anyway):

    while len(s) > 0:
        try:
            while True:
                r, w, x = select.select([], [self.sock], [])
                try:
                    n = self.sock.send(s[:BUFSIZE])
                    break
                except socket.error as e:
                    import errno  # should be done at the top
                    if e.errno != errno.EWOULDBLOCK:
                        raise
        except (AttributeError, TypeError):
            raise IOError, "socket no longer exists"
        except socket.error:
            raise
        else:
            s = s[n:]

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue26739>
___
[issue26739] idle: Errno 10035 a non-blocking socket operation could not be completed immediately
Kristján Valur Jónsson added the comment:

Hi there. I don't think this is related to issue #9090. That one had to do with the internal mechanisms of doing blocking IO with timeout; this is done internally by using non-blocking sockets and select(), and the backport dealt with some edge cases on Windows where select() could falsely indicate that data were ready.

From what I can see in this error description, we are dealing with real non-blocking IO, i.e. an application is using select and non-blocking sockets. It is possible that this Windows edge case is now being elevated into the application code and whatever select() logic being used in rpc.py needs to be aware of it, or that for some reason this socket is supposed to be blocking, but isn't. I'll have a quick look at idlelib and see if I can see anything.

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue26739>
___
[issue25718] itertools.accumulate __reduce__/__setstate__ bug
Kristján Valur Jónsson added the comment:

This could be fixed by saving the accumulate state in a tuple. It would break the pickle protocol, though; I don't recall the rules for backwards compatibility of pickles. I've argued before that the state of runtime structures such as generators is so intimately tied to the currently executing Python that we shouldn't worry too much about it. This sort of stuff is used for caches, IPC, and so on. But it's not my call.

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25718>
___
[issue15068] fileinput requires two EOF when reading stdin
Changes by Kristján Valur Jónsson <swesk...@gmail.com>:

--
nosy: -kristjan.jonsson

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue15068>
___
[issue25021] product_setstate() Out-of-bounds Read
Changes by Kristján Valur Jónsson <swesk...@gmail.com>:

--
stage: -> resolved

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25021>
___
[issue25021] product_setstate() Out-of-bounds Read
Kristján Valur Jónsson added the comment:

Thanks, I'll get this committed and merged asap.

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25021>
___
[issue25021] product_setstate() Out-of-bounds Read
Kristján Valur Jónsson added the comment:

Interesting. Let me have a look.

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25021>
___
[issue25021] product_setstate() Out-of-bounds Read
Kristján Valur Jónsson added the comment:

An alternative patch. Please test this since I don't have a development system.

--
keywords: +needs review
Added file: http://bugs.python.org/file40404/itertoolsmodule.c.patch

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25021>
___
[issue25021] product_setstate() Out-of-bounds Read
Kristján Valur Jónsson added the comment:

There are two problems with the previous patch:
1) It can put out-of-bounds values into lz->indices. This can cause problems the next time product_next() is called.
2) The case of a pool having zero size is not dealt with (it wasn't before either).

My patch should deal with both cases, but please verify since I don't have access to a Python dev system at the moment.

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25021>
___
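The two rules from the comment can be sketched in Python terms (this is an illustration of the validation logic, not the C patch itself; `clamp_indices` is a hypothetical helper):

```python
def clamp_indices(indices, pools):
    """Sanitize restored product() indices.

    Rule 2: any zero-size pool means the product is empty, so the
    restored iterator must simply be exhausted.
    Rule 1: each index must be clamped into [0, len(pool) - 1] so that
    untrusted __setstate__ input can never cause an out-of-bounds read.
    """
    if any(len(p) == 0 for p in pools):
        return None  # exhausted: nothing to iterate
    return tuple(min(max(i, 0), len(p) - 1)
                 for i, p in zip(indices, pools))
```

With this, an attacker-supplied state like `(5, -1)` against pools of sizes 2 and 3 is clamped to `(1, 0)` instead of indexing outside the pool buffers.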
[issue23344] Faster marshalling
Kristján Valur Jónsson added the comment:

Looks good to me, although it has been pointed out that marshal _write_ speed is less critical than read speed :)

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue23344>
___
[issue22113] memoryview and struct.pack_into
Kristján Valur Jónsson added the comment:

lgtm :)

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue22113>
___
[issue20434] Fix error handler of _PyString_Resize() on allocation failure
Kristján Valur Jónsson added the comment:

Nope, closing as fixed :)

--
resolution: -> fixed
status: open -> closed

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue20434>
___
[issue14534] Add method to mark unittest.TestCases as do not run.
Kristján Valur Jónsson added the comment:

Just want to restate my +1 for Michael's idea. I'm hit by this all the time and it is beautiful and expressive. It also does not preclude the annoying mix-in way of doing it.

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue14534>
___
[issue20737] 3.3 _thread lock.acquire() timeout and threading.Event().wait() do not wake for certain values on Windows
Kristján Valur Jónsson added the comment:

Thanks. Can you confirm that it resolves the issue? I'll get it checked in once I get the regrtest suite run.

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue20737>
___
[issue20737] 3.3 _thread lock.acquire() timeout and threading.Event().wait() do not wake for certain values on Windows
Kristján Valur Jónsson added the comment:

I see, I wasn't able to compile it yesterday when I did it :)

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue20737>
___
[issue20737] 3.3 _thread lock.acquire() timeout and threading.Event().wait() do not wake for certain values on Windows
Kristján Valur Jónsson added the comment:

Nope, let's not do that :)

--
___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue20737>
___
[issue20737] 3.3 _thread lock.acquire() timeout and threading.Event().wait() do not wake for certain values on Windows
Changes by Kristján Valur Jónsson krist...@ccpgames.com: -- resolution: -> fixed status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20737 ___
[issue20737] 3.3 _thread lock.acquire() timeout and threading.Event().wait() do not wake for certain values on Windows
Kristján Valur Jónsson added the comment: Changing long to DWORD doesn't really fix the overflow issue. The fundamental problem is that some of the APIs, e.g. WaitForSingleObject, have a DWORD maximum, so we cannot support sleep times longer than some particular time. Microseconds were chosen in the API because that is the resolution of the API in pthreads. IMHO, I think it is okay to have an implicit ceiling on the timeout, e.g. some 4000 seconds. We can add a caveat somewhere that anyone intending to sleep for extended periods of time should be prepared for a timeout occurring early, and should have his own timing logic to deal with that. My suggestion then is to a) change the APIs to DWORD, b) add a macro, something like PyCOND_MAX_WAIT, set to 2^32-1, and c) properly clip the argument where we call this function, e.g. in lock.acquire. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20737 ___
[issue20737] 3.3 _thread lock.acquire() timeout and threading.Event().wait() do not wake for certain values on Windows
Kristján Valur Jónsson added the comment: Tim, how about changing the variable to unsigned long? I'd like the signature of the function to be the same for all platforms. This will change the code and allow waits of up to 4000 seconds. There is still an overflow problem present, though. David, in general the maximum wait times of these primitives are platform specific. If you don't want any ceiling, then we would have to add code all over the place (in C) to do looping timeouts. Not sure which is better: to do it in C, or to accept in Python that waits may time out earlier than specified. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20737 ___
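The Python-side option mentioned above, accepting that a single native wait may time out early and looping until a deadline, can be sketched like this. This is a hypothetical helper, not code from the patch; `CAP` stands in for whatever per-call ceiling the platform primitive imposes:

```python
import time
import threading

# Hypothetical per-call ceiling, standing in for the platform's maximum
# native wait (e.g. a DWORD of milliseconds on Windows).
CAP = 3600.0

def wait_with_deadline(event, timeout):
    """Wait on `event` for up to `timeout` seconds, looping over waits no
    longer than CAP, so arbitrarily long timeouts work even if the
    underlying primitive can time out early."""
    deadline = time.monotonic() + timeout
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return event.is_set()
        if event.wait(min(remaining, CAP)):
            return True
```

The caller sees the usual semantics (True if the event was set, False on timeout); the capping is invisible from the outside.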
[issue20737] 3.3 _thread lock.acquire() timeout and threading.Event().wait() do not wake for certain values on Windows
Kristján Valur Jónsson added the comment: Hi there. When I said 4000, that was because of the conversion to microseconds which happens early on. I'm not trying to be difficult here Tim, it's just that you've pointed out a problem and I'd like us to have a comprehensive fix. unsigned long, I realized, is also not super, because on unix that can be either 32 or 64 bits :) The reason 24 hour waits work on 2.7 is that the conversion to microseconds is never done; rather it uses a DWORD of milliseconds. I agree that this is a regression that needs fixing. Even if there is a theoretical maximum, it should be higher than that :) My latest suggestion? Let's just go ahead and use a double for the argument in PyCOND_TIMEDWAIT(). We then have two conversion cases: 1) to a DWORD of milliseconds for both windows APIs. Here we should truncate to the max size of a DWORD. 2) to the timeval used on pthreads. For 1, that can be done like:

    if (ds*1e3 > (double)DWORD_MAX)
        ms = DWORD_MAX;
    else
        ms = (DWORD)(ds*1e3);

For 2, modifying the PyCOND_ADD_MICROSECONDS macro into something like:

    #define PyCOND_ADD_MICROSECONDS(tv, ds) \
    do { \
        long oldsec, sec, usec; \
        assert(ds >= 0.0); \
        /* truncate ds into theoretical maximum */ \
        if (ds > (double)LONG_MAX) /* whatever that may be */ \
            ds = (double)LONG_MAX; \
        sec = (long)ds; \
        usec = (long)((ds - (double)sec) * 1e6); \
        oldsec = tv.tv_sec; \
        tv.tv_usec += usec; \
        tv.tv_sec += sec; \
        if (tv.tv_usec >= 1000000) { \
            tv.tv_usec -= 1000000; \
            tv.tv_sec += 1; \
        } \
        if (tv.tv_sec < oldsec) /* detect overflow */ \
            tv.tv_sec = LONG_MAX; \
    } while (0)

I'm not super experienced with integer arithmetic like this or the pitfalls of overflow, so this might need some pondering. Perhaps it is better to do the tv_sec and tv_usec arithmetic in doubles before converting them back. Does this sound ok? Let me see if I can cook up an alternative patch.
-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20737 ___
[issue20737] 3.3 _thread lock.acquire() timeout and threading.Event().wait() do not wake for certain values on Windows
Kristján Valur Jónsson added the comment: (cont.) So, I suggest that we modify the API to use Py_LONG_LONG usec. Does that sound reasonable? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20737 ___
[issue20737] 3.3 _thread lock.acquire() timeout and threading.Event().wait() do not wake for certain values on Windows
Kristján Valur Jónsson added the comment: Ah, I saw this code here in thread_nt.h:

    if ((DWORD) milliseconds != milliseconds)
        Py_FatalError("Timeout too large for a DWORD, "
                      "please check PY_TIMEOUT_MAX");

PyCOND_TIMEDWAIT is currently only used by the GIL code and by the locks on NT. The GIL code assumes microsecond resolution, so we need to stick to that, at least. But the locking code assumes at least a DWORD's worth of milliseconds. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20737 ___
[issue20737] 3.3 _thread lock.acquire() timeout and threading.Event().wait() do not wake for certain values on Windows
Kristján Valur Jónsson added the comment: Here is a proposed alternative patch. No additional checks, just a wider Py_LONG_LONG us, wide enough to accommodate 32 bits of milliseconds as before. -- Added file: http://bugs.python.org/file35175/condwait.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20737 ___
[issue20737] 3.3 _thread lock.acquire() timeout and threading.Event().wait() do not wake for certain values on Windows
Kristján Valur Jónsson added the comment: Fixed patch; it was using git format. -- Added file: http://bugs.python.org/file35176/condwait.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20737 ___
[issue7105] weak dict iterators are fragile because of unpredictable GC runs
Changes by Kristján Valur Jónsson krist...@ccpgames.com: -- resolution: -> fixed status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7105 ___
[issue21220] Enhance obmalloc allocation strategy
Kristján Valur Jónsson added the comment:

>> This significantly helps fragmentation in programs with dynamic memory usage, e.g. long running programs.
> On which programs? The fragmentation of the memory depends a lot on how the program allocates memory. For example, if a program has no temporary memory peak, it should not be a victim of the memory fragmentation.

Long running programs, e.g. web servers and so on, where there is churn in memory usage. As objects are allocated and released with some sort of churn, new objects will be allocated from lower-address pages, while higher-address pages become increasingly likely to be freed as no-longer-used objects are released. This is the second best thing to a sliding-block allocator and is motivated by the same requirements that make such a sliding-block allocator (such as pypy uses) desirable in the first place.

> To measure the improvement of such a memory allocator, more benchmarks (speed and fragmentation) should be run than a single test (memcruch.py included in the test) written to benchmark the allocator.

Yes. Memcrunch was specifically written to simulate a case where objects are continuously created and released, such as might be expected in a server, with a peak in memory usage followed by lower memory usage, and to demonstrate that the pages allocated during the peak will be released later as memory churn causes memory usage to migrate toward lower addresses. However, following Antoine's advice I ran the Benchmarks testsuite and found an adverse effect in the n-body benchmark. That can have two causes: a) the insertion cost of the block when a block moves from 'full' to 'used'. This is a rare event and should be unimportant. I will instrument this for this test and see if it is really the reason. b) Cache effects, because a newly 'used' block is not immediately allocated from. Again, it happens rarely that a block is linked at the head, so this shouldn't be significant.
Because of this, this change isn't yet ready to be applied. If, however, I manage to change the policy so that memory usage becomes better while still preserving performance of the benchmark tests, I will report back :) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21220 ___
[issue20434] Fix error handler of _PyString_Resize() on allocation failure
Kristján Valur Jónsson added the comment: Add comments and explicit (void) on the ignored value from _PyString_Resize as suggested by Victor -- Added file: http://bugs.python.org/file34951/string_resize.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20434 ___
[issue21220] Enhance obmalloc allocation strategy
Kristján Valur Jónsson added the comment: Antoine: The location of the arenas when they're individually allocated with mmap does not matter, no, but preferring to keep low-address ones reduces vmem fragmentation, since they end up being clustered together in memory. For the usable-arenas list, there is no extra O(n) cost because they were ordered anyway. The effect of ARENA_STRATEGY is minor, but it helps for it to be consistent with POOL_STRATEGY. The real win however is with POOL_STRATEGY. Fragmentation is dramatically reduced. This is demonstrated with Tools/scripts/memcrunch.py, which you can use to experiment with it. Performance e.g. of unittests also goes up. The fact that there is a new O(n) sort operation when a pool becomes 'used' does not seem to matter for that. Victor: I've tested using the Windows LFH many times before; the python obmalloc is generally much faster than that. Annoying :). It is actually a very good allocator. The innovation here is the lowest-address strategy, which I have never seen before (it might be known, but then I'm not a CS person), but is one that I have experimented with often in the past. It is surprisingly effective. When there is memory churn, memory usage tends to migrate towards low addresses and free up memory. Go ahead, try the scripts and see what happens. The proof is in the pudding :) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21220 ___
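The lowest-address preference can be illustrated with a toy model (hypothetical names and shapes; the real obmalloc tracks pools and arenas in C, not like this): a min-heap keyed by address always serves the lowest-addressed pool that still has room, so higher-addressed pools drain out and become eligible for release.

```python
import heapq

def simulate(pool_addrs, n_allocs, capacity):
    """Serve n_allocs block requests from pools of the given capacity,
    always preferring the lowest-addressed pool with free space (a toy
    model of the proposed POOL_STRATEGY). Returns the pool address each
    allocation was served from."""
    heap = list(pool_addrs)       # pools with free space, keyed by address
    heapq.heapify(heap)
    used = {a: 0 for a in pool_addrs}
    served = []
    for _ in range(n_allocs):
        addr = heap[0]            # lowest-addressed pool with room
        used[addr] += 1
        served.append(addr)
        if used[addr] == capacity:
            heapq.heappop(heap)   # pool is full; drop it from the heap
    return served
```

Allocations cluster at low addresses: with pools at 0x1000, 0x2000, 0x3000 and capacity 2, four requests fill the two lowest pools and never touch 0x3000, which could then be returned to its arena.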
[issue21220] Enhance obmalloc allocation strategy
Kristján Valur Jónsson added the comment: Sorry, I meant of course that performance of pybench.py goes up. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21220 ___
[issue21220] Enhance obmalloc allocation strategy
Kristján Valur Jónsson added the comment: Sure. I'm flying home from PyCon this afternoon. I'll produce and tabulate data once I'm home at my workstation again. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21220 ___
[issue21220] Enhance obmalloc allocation strategy
Kristján Valur Jónsson added the comment: Update patch with suggestions from Larry -- Added file: http://bugs.python.org/file34876/obmalloc.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21220 ___
[issue20434] Process crashes if not enough memory to import module
Kristján Valur Jónsson added the comment: I would also advocate for a better API, one that leaves it up to the caller what to do, much like realloc() does. A convenience macro that frees the block on error could then be provided. But this is 2.7 and we don't change stuff there :) Can you elaborate on your second comment? Is there some place where I forgot to clear the object? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20434 ___
[issue20434] Process crashes if not enough memory to import module
Kristján Valur Jónsson added the comment: This is _PyString_Resize(). I don't immediately see an error case where the string isn't freed:

    int
    _PyString_Resize(PyObject **pv, Py_ssize_t newsize)
    {
        register PyObject *v;
        register PyStringObject *sv;
        v = *pv;
        if (!PyString_Check(v) || Py_REFCNT(v) != 1 ||
            newsize < 0 || PyString_CHECK_INTERNED(v)) {
            *pv = 0;
            Py_DECREF(v);
            PyErr_BadInternalCall();
            return -1;
        }
        /* XXX UNREF/NEWREF interface should be more symmetrical */
        _Py_DEC_REFTOTAL;
        _Py_ForgetReference(v);
        *pv = (PyObject *) PyObject_REALLOC((char *)v,
                                            PyStringObject_SIZE + newsize);
        if (*pv == NULL) {
            PyObject_Del(v);
            PyErr_NoMemory();
            return -1;
        }
        _Py_NewReference(*pv);
        sv = (PyStringObject *) *pv;
        Py_SIZE(sv) = newsize;
        sv->ob_sval[newsize] = '\0';
        sv->ob_shash = -1;  /* invalidate cached hash value */
        return 0;
    }

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20434 ___
[issue20434] Process crashes if not enough memory to import module
Kristján Valur Jónsson added the comment: Ok, are we good to go then? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20434 ___
[issue21220] Enhance obmalloc allocation strategy
New submission from Kristján Valur Jónsson: A new allocation policy, the lowest address strategy, improves fragmentation of memory in obmalloc. Pools with available memory are chosen by lowest address preference. This increases the likelihood that unused pools are released to their corresponding arenas. Arenas with available pools are similarly chosen by lowest address. This significantly helps fragmentation in programs with dynamic memory usage, e.g. long running programs. Initial tests also indicate some minor performance benefits on pybench, probably due to better cache behaviour. -- components: Interpreter Core files: obmalloc.patch keywords: patch messages: 216156 nosy: kristjan.jonsson priority: normal severity: normal status: open title: Enhance obmalloc allocation strategy type: resource usage versions: Python 3.5 Added file: http://bugs.python.org/file34836/obmalloc.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21220 ___
[issue20434] Process crashes if not enough memory to import module
Kristján Valur Jónsson added the comment: Sure, there was at least one case in the patch where the string resize was considered optional, and the code tried to recover if it didn't succeed. But I don't think we should be trying to change APIs, even internal ones, in python 2.7. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20434 ___
[issue21220] Enhance obmalloc allocation strategy
Changes by Kristján Valur Jónsson krist...@ccpgames.com: -- nosy: +larry ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue21220 ___
[issue20434] Process crashes if not enough memory to import module
Kristján Valur Jónsson added the comment: Could someone please review this patch? I'd like to see it committed asap. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20434 ___
[issue20434] Process crashes if not enough memory to import module
Kristján Valur Jónsson added the comment: Ok, retrying without the --git flag (I thought that was recommended, it was once...) -- Added file: http://bugs.python.org/file34784/string_resize.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20434 ___
[issue20434] Process crashes if not enough memory to import module
Changes by Kristján Valur Jónsson krist...@ccpgames.com: Removed file: http://bugs.python.org/file34779/string_resize.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20434 ___
[issue20434] Process crashes if not enough memory to import module
Kristján Valur Jónsson added the comment: Here we are. There were a lot of places where this was being incorrectly done, and some places where this was being considered a recoverable error, which it isn't, because the source is freed. Which sort of supports my opinion that this is bad general API design, but perhaps a good convenience function for limited use. -- Added file: http://bugs.python.org/file34779/string_resize.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20434 ___
[issue17522] Add api PyGILState_Check
Changes by Kristján Valur Jónsson krist...@ccpgames.com: -- status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17522 ___
[issue16475] Support object instancing and recursion in marshal
Changes by Kristján Valur Jónsson krist...@ccpgames.com: -- status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16475 ___
[issue17969] multiprocessing crash on exit
Kristján Valur Jónsson added the comment: Closing this as won't fix. Exiting with running threads is a can of worms. -- status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17969 ___
[issue8410] Fix emulated lock to be 'fair'
Kristján Valur Jónsson added the comment: Closing this issue. It is largely superseded. For our Python 2.7 branches, we have a custom GIL lock which can have different inherent semantics from the common Lock. In particular, we can implement a fair PyGIL_Handoff() function to be used to yield the GIL to a waiting thread. -- status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8410 ___
[issue8410] Fix emulated lock to be 'fair'
Changes by Kristján Valur Jónsson krist...@ccpgames.com: -- resolution: -> rejected ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8410 ___
[issue17969] multiprocessing crash on exit
Changes by Kristján Valur Jónsson krist...@ccpgames.com: -- resolution: -> wont fix ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17969 ___
[issue17522] Add api PyGILState_Check
Changes by Kristján Valur Jónsson krist...@ccpgames.com: -- resolution: -> fixed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17522 ___
[issue16475] Support object instancing and recursion in marshal
Changes by Kristján Valur Jónsson krist...@ccpgames.com: -- resolution: -> fixed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue16475 ___
[issue15139] Speed up threading.Condition wakeup
Kristján Valur Jónsson added the comment: In our 2.7 branches, this approach has been superseded by a natively implemented _Condition class. This is even more efficient. It is available if the underlying Lock implementation is based on pthread locks (not semaphores). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue15139 ___
[issue19009] Enhance HTTPResponse.readline() performance
Kristján Valur Jónsson added the comment: Sure. If there are issues we'll just reopen. Closing. -- resolution: -> fixed status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19009 ___
[issue20440] Use Py_REPLACE/Py_XREPLACE macros
Kristján Valur Jónsson added the comment: Are you referring to the Py_LOCAL_INLINE macro? I see that we have no Py_INLINE. Py_LOCAL_INLINE includes the static qualifier, and in fact, if there is no USE_INLINE defined, then all that it does is to add static. Would having a Py_INLINE(type) macro, that is the same, but without the static (except when USE_INLINE is false) make a difference? It would be a bit odd to have Py_LOCAL_INLINE() functions defined in the headers. I'm not sure that there is any practical difference between static inline and inline. But there is a difference between static and inline. It would be great if we could start writing stuff like the Py_INCREF() and Py_DECREF() as functions rather than macros, but for this to happen we must be able to trust that they are really inlined. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20440 ___
[issue20440] Use Py_REPLACE/Py_XREPLACE macros
Kristján Valur Jónsson added the comment: Well, Larry, I certainly am in no mood to start wrangling on python-dev. A 25 year old C standard is likely to be very mature and reliable by now. Why take risks? :) Py_LOCAL_INLINE exists and demonstrates that we can make use of them when possible. We could create a Py_INLINE macro that would work the same, only not necessarily yield inline on some of the older compilers. It would really be healthy for the python code base, for quality, for semantics, and for the learning curve, if we could start to rely less on macros in the core. Ah well, perhaps I'll throw this out there... -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20440 ___
[issue20440] Use Py_REPLACE/Py_XREPLACE macros
Kristján Valur Jónsson added the comment: Barring c++, are we using any C compilers that don't support inlines? Imho these macros should be functions proper. Then we could do Py_Assign(target, Py_IncRef(obj)) It's 2014 already. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20440 ___
[issue20440] Use Py_REPLACE/Py_XREPLACE macros
Kristján Valur Jónsson added the comment: Better yet, embrace c++ and smart pointers ;-) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20440 ___
[issue20440] Use Py_REPLACE/Py_XREPLACE macros
Kristján Valur Jónsson added the comment: These macros work as assignment with a built-in decref, i.e. a smart replacement for =. We could resolve this by calling them Py_ASSIGN/Py_XASSIGN and having complementary macros Py_STORE/Py_XSTORE that will incref the new value. However, with an added incref, does the X apply to the source or the target? I wonder if we need the X variants in these macros. Once you are doing things like this, why not just use X implicitly? An extra pointer test or two is unlikely to be a performance problem in the places you might use them. Anyway, I'll be adding this to the internal API of stackless because it is tremendously useful. -- nosy: +kristjan.jonsson ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20440 ___
[issue14911] generator.throw() documentation inaccurate
Kristján Valur Jónsson added the comment: Note that the docstring does not match the doc:

    PyDoc_STRVAR(throw_doc,
    "throw(typ[,val[,tb]]) -> raise exception in generator,\n\
    return next yielded value or raise StopIteration.");

Should I change the docstring too? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14911 ___
[issue14911] generator.throw() documentation inaccurate
Kristján Valur Jónsson added the comment: Here's one for 2.7. I'm still looking at 3. The funny thing is that the signature of generator.throw reflects 2.x conventions. I'm figuring out if it can be used with the .with_traceback() idiom -- keywords: +patch Added file: http://bugs.python.org/file33886/throw27.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14911 ___
[issue14911] generator.throw() documentation inaccurate
Kristján Valur Jónsson added the comment: And 3.x -- Added file: http://bugs.python.org/file33888/3x.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14911 ___
[issue20434] Process crashes if not enough memory to import module
Kristján Valur Jónsson added the comment: These are very unusual semantics. The convention in the Python API is that functions are reference-invariant when there are errors, i.e. whether a function fails or not does not change the caller's reference passing assumptions. For example, Py_BuildValue("N", myobject); takes care to always steal the reference of myobject, even when Py_BuildValue fails. In the case of _PyBytes_Resize(), the caller owns the (single) reference to the operand, and owns the reference to it (or a new one) on success. It is highly unusual that the case of failure causes him to no longer own this reference. Python 3 should have taken the opportunity to remove this unusual inheritance from _PyString_Resize(). -- nosy: +kristjan.jonsson ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20434 ___
[issue20434] Process crashes if not enough memory to import module
Kristján Valur Jónsson added the comment: I'm not talking about the PyObject** argument, Victor. I'm talking about reference counting semantics. It is a rule that reference counting semantics should be the same over a function call whether that function raised an exception or not. But this function effectively steals a reference in case of error. The caller owns the reference to the argument (passed by ref) if it succeeds, but if it doesn't, then he doesn't own it anymore. Reference counting invariance with errors is, as I mentioned, observed with e.g. the 'N' argument to Py_BuildValue(), which is defined to steal a reference and does so even if the call fails. This behaviour is observed by other reference-stealing functions, such as PyTuple_SetItem(). Similarly, functions which don't steal a reference, i.e. take their own, will not change that behaviour if they error. If you don't want to think about this in terms of reference counting semantics, think about it in terms of the fact that in case of error, most functions leave everything as it was. PyList_Append(), if it fails, leaves everything as it was. This function does not. In case of failure, it will, as a convenience to the caller, release the original object. It is equivalent to realloc() freeing its operand if it cannot succeed. It is precisely these 'unusual' exceptions from established semantics that cause this kind of programming error. Originally, this was probably designed as a convenience to the programmer for the handful of places where the function (_PyString_Resize) was used. But this decision forces every new user of this function (and its descendants) to be acutely aware of its unusual error behaviour. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20434 ___
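The error-invariant contract being argued for can be illustrated with a small Python sketch (a hypothetical helper, not the CPython C API): on failure the caller's operand is left untouched, exactly as with realloc().

```python
def safe_resize(buf, newsize):
    """Return a resized copy of `buf` (zero-padded or truncated), or None
    on allocation failure. Either way the caller still owns `buf`
    unchanged -- the error-invariant behaviour that realloc() has and
    that _PyString_Resize() lacks."""
    try:
        newbuf = bytearray(newsize)   # may raise MemoryError
    except MemoryError:
        return None                   # buf is untouched; caller decides
    n = min(len(buf), newsize)
    newbuf[:n] = buf[:n]
    return newbuf
```

The key property: after a failed call, `buf` is exactly what it was before, so no special-case cleanup knowledge is forced on the caller.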
[issue7464] circular reference in HTTPResponse by urllib2
Kristján Valur Jónsson added the comment: No, the socket is actually closed when the response's close() method is called. The problem is that the HTTPResponse object, buried deep within the nested classes returned from do_open(), has a circular reference, and _it_ will not go away. No one is _relying_ on garbage collection, in the sense that this is not, I think, designed behaviour, merely an unintentional effect of storing a bound method in the object instance. As always, circular references should be avoided when possible, since relying on gc is not something to be done lightly. Now, I think that changing the complicated wrapping at this stage is not possible, but merely replacing the bound method with a weak method might just do the trick. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7464 ___
[issue7464] circular reference in HTTPResponse by urllib2
Kristján Valur Jónsson added the comment: Here it is. Notice the incredible nesting depth in Python 2.7. The socket itself is found at response.fp._sock.fp._sock. There are two socket._fileobjects in use! -- Added file: http://bugs.python.org/file33205/httpleak.py ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7464 ___
[issue7464] circular reference in HTTPResponse by urllib2
Kristján Valur Jónsson added the comment: This is still a horrible, horrible kludge. I've recently done some work in this area and will suggest a different approach. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7464 ___
[issue7105] weak dict iterators are fragile because of unpredictable GC runs
Kristján Valur Jónsson added the comment: That's the spirit, Guido :) I just think people are being extra careful after the regression introduced in 2.7.5. However, IMHO we must never let the odd mistake scare us away from making necessary moves. Unless Antoine explicitly objects, I think I'll submit my patch from November and we'll just watch what happens. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7105 ___
[issue19787] tracemalloc: set_reentrant() should not have to call PyThread_delete_key()
Kristján Valur Jónsson added the comment: +1 Why don't we just fix this and see where the chips fall? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19787 ___
[issue7105] weak dict iterators are fragile because of unpredictable GC runs
Kristján Valur Jónsson added the comment: Strictly speaking b) is not a semantic change. Depending on your semantic definition of semantics. At any rate it is even less so than a) since the temporary list is hidden from view and the only side effect is additional memory usage. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7105 ___
[issue7105] weak dict iterators are fragile because of unpredictable GC runs
Kristján Valur Jónsson added the comment: d) We could also simply issue a (documentation) warning that the iterator methods of these dictionaries are known to be fragile, and recommend that people use the keys(), values() and items() methods instead. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7105 ___
[issue7105] weak dict iterators are fragile because of unpredictable GC runs
Kristján Valur Jónsson added the comment: Yes, the old memory argument. But is it valid? Is there a conceivable application where a dict of weak references would be storing a large chunk of the application memory? Remember, all of the data must be referred to from elsewhere, or else the weak refs would not exist. An extra list of pointers is unlikely to make a difference. I think the chief reason to use iterators has to do with performance by avoiding the creation of temporary objects, not saving memory per se. Before the invention of iteritems() and friends, all such iteration was by lists (and hence, memory usage).

We should try to remain nimble enough that we can undo a previously applied optimization, if the requirements merit us doing so. As a completely unrelated example of such nimbleness: faced with stricter regulations in the 70s, American car makers had to sell their muscle cars with increasingly less powerful engines, effectively rolling back previous optimizations :)

Anyway, it's not for me to decide. We currently have three options:

a) my first patch, which is a duplication of the 3.x work but is non-trivial and could bring stability issues
b) my second patch, which will increase memory use, but to no more than previous versions of Python used while iterating
c) do nothing, and have iterations over weak dicts randomly break when an underlying cycle is unraveled during iteration.

Cheers! -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7105 ___
[issue7105] weak dict iterators are fragile because of unpredictable GC runs
Kristján Valur Jónsson added the comment: Here's a different approach. Simply avoid the use of iterators over the underlying container. Instead, we iterate over lists of items/keys/values etc. -- Added file: http://bugs.python.org/file32932/weakref.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7105 ___
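The failure mode this approach avoids, and the list-snapshot fix, can be demonstrated with an ordinary dict; the `del` here stands in for a GC run clearing a dead weakref entry mid-iteration:

```python
# Deleting an entry during direct iteration breaks the iterator
d = {i: i for i in range(5)}
try:
    for k in d:
        if k == 2:
            del d[3]      # simulates GC removing an entry mid-iteration
except RuntimeError as e:
    print("broken:", e)   # dictionary changed size during iteration

# Iterating over a list snapshot tolerates concurrent deletion
d = {i: i for i in range(5)}
for k in list(d):         # snapshot, as in the patch's approach
    if k == 2:
        del d[3]
print(sorted(d))          # [0, 1, 2, 4]
```

The snapshot costs a temporary list of references, which is the memory trade-off debated above; in exchange, the iteration can never be invalidated by an unpredictable collection.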
[issue19787] tracemalloc: set_reentrant() should not have to call PyThread_delete_key()
Kristján Valur Jónsson added the comment: Déjà vu, this has come up before. I wanted to change this because native TLS implementations become awkward. https://mail.python.org/pipermail/python-dev/2008-August/081847.html -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19787 ___
[issue19787] tracemalloc: set_reentrant() should not have to call PyThread_delete_key()
Kristján Valur Jónsson added the comment: See also issue #10517 -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19787 ___
[issue19787] tracemalloc: set_reentrant() should not have to call PyThread_delete_key()
Kristján Valur Jónsson added the comment: But yes, I'd like to see this behave like normal. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19787 ___
[issue19787] tracemalloc: set_reentrant() should not have to call PyThread_delete_key()
Kristján Valur Jónsson added the comment: Please see the rather long discussion in http://bugs.python.org/issue10517 -- there were issues having to do with fork. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19787 ___
[issue19787] tracemalloc: set_reentrant() should not have to call PyThread_delete_key()
Kristján Valur Jónsson added the comment: Only that issue #10517 mentions reasons to keep the old behavior, specifically http://bugs.python.org/issue10517#msg134573 -- I don't know if any of the old arguments are still valid, but I suggested changing this years ago and there was always some objection or other. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue19787 ___
[issue7105] weak dict iterators are fragile because of unpredictable GC runs
Kristján Valur Jónsson added the comment: No matter how it sounds, it certainly looks cleaner in code. Look at all this code, designed to work around an unexpected GC collection, with various pointy bits, edge cases and special corners. Compare to explicitly just asking GC to relent, for a bit:

    def getitems(self):
        with gc_disabled():
            for each in self.data.items():
                yield each

That's it. While a native implementation of such a context manager would be better (faster, and could be made overriding), a simple one can be constructed thus:

    import contextlib
    import gc

    @contextlib.contextmanager
    def gc_disabled():
        enabled = gc.isenabled()
        gc.disable()
        try:
            yield
        finally:
            if enabled:
                gc.enable()

Such global atomic context managers are well known to Stackless programmers. It's a very common idiom when building higher level primitives (such as locks) from lower level ones:

    with stackless.atomic():
        do()
        various()
        stuff_that_does_not_like_being_interrupted()

(stackless.atomic prevents involuntary tasklet switching _and_ involuntary thread switching) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7105 ___