[issue20849] add exist_ok to shutil.copytree
New submission from Alexander Mohr: it would be really nice (and really easy) to add an exist_ok parameter and pass it through to os.makedirs under the same name, so that copytree can be used to merge a src dir into an existing dst dir. -- components: Library (Lib) messages: 212691 nosy: thehesiod priority: normal severity: normal status: open title: add exist_ok to shutil.copytree type: enhancement versions: Python 3.3, Python 3.4, Python 3.5 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue20849 ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue20849] add exist_ok to shutil.copytree
Alexander Mohr added the comment: awesome, thanks so much!!
[issue20849] add exist_ok to shutil.copytree
Alexander Mohr added the comment: how about we instead rename the new parameter to dirs_exists_ok or something like that, since the method already allows for existing files.
[issue20849] add exist_ok to shutil.copytree
Alexander Mohr added the comment: I personally don't think this is worth investing the time for a discussion. If the maintainers don't want to accept this or a minor variation without a discussion, I'll just keep my local monkeypatch :) thanks again for the quick patch Elias!

On Mar 8, 2014 4:03 PM, Elias Zamaria wrote:
Elias Zamaria added the comment: I am not sure. I am not on the python-ideas mailing list, and I am not sure what starting and maintaining the discussion would entail, or if I would have the time to do it or want to deal with the clutter in my inbox. I just committed this patch because it seemed like it would be quick and easy. I can start the discussion if anyone specifically wants me to, but I don't want to let anyone down.

On Fri, Mar 7, 2014 at 1:41 PM, Éric Araujo wrote:
Éric Araujo added the comment: Contrary to makedirs, there could be two interpretations for exist_ok in copytree: a) if a directory or file already exists in the destination, ignore it and go ahead; b) only do that for directories. The proposed patch does b), but the cp tool does a). It's not clear to me which is best. Can you start a discussion on the python-ideas mailing list? -- nosy: +eric.araujo
[issue20849] add exist_ok to shutil.copytree
Alexander Mohr added the comment: Ya. The original request I think is OK, because by allowing that flag it will replace files and dirs.

On Mar 14, 2014 7:15 PM, R. David Murray wrote:
R. David Murray added the comment: I don't know what "the method already allows for existing files" means. Since the target directory can't exist, there can be no existing files. In unix, this kind of capability is provided by a combination of shell globbing and 'cp -r', and by default it does replace existing files. So it would be reasonable for exists_ok to mean exactly that: replace anything that currently exists, if it does. I think that would be a reasonable API, but the implementation isn't as simple as just passing through the exists_ok flag to makedirs. I do not think that *just* making it OK for the destination directory to exist would be a good API.
[issue20849] add exist_ok to shutil.copytree
Alexander Mohr added the comment: btw, I believe the solution is as simple as stated, as that's what I'm doing locally and it's behaving exactly as intended.
[issue25593] _sock_connect_cb can be called twice resulting in InvalidStateError
Alexander Mohr added the comment: Sorry for being obscure before, it was hard to pinpoint. I think I just figured it out! I had code like this in a subprocess:

    def worker():
        while True:
            obj = self.queue.get()
            # do work with obj using the asyncio http module

    def producer():
        nonlocal self
        obj2 = self.queue.get()
        return obj2

    workers = []
    for i in range(FILE_OP_WORKERS):
        t = asyncio.ensure_future(worker())
        t.add_done_callback(op_finished)
        workers.append(t)

    while True:
        f = loop.run_in_executor(None, producer)
        obj = loop.run_until_complete(f)
        t = async_queue.put(obj)
        loop.run_until_complete(t)

    loop.run_until_complete(asyncio.wait(workers))

where self.queue is a multiprocessing.Queue, and async_queue is an asyncio queue. The idea is that I have a process populating a multiprocessing queue, and I want to transfer its items to an asyncio queue while letting the workers do their thing. Without knowing the underlying behavior, my theory is that when python blocks on the multiprocessing queue lock, it releases socket events to the async http module's selectors, and then when the async loop gets to the selectors they're released again. If I switch the producer to instead use queue.get_nowait and busy-wait with asyncio.sleep, I don't get the error... however this is not ideal as we're busy waiting. Thanks!
[issue25593] _sock_connect_cb can be called twice resulting in InvalidStateError
Alexander Mohr added the comment: clarification: the fix is adding the fut.done() check, or monkey patching:

    orig_sock_connect_cb = asyncio.selector_events.BaseSelectorEventLoop._sock_connect_cb

    def _sock_connect_cb(self, fut, sock, address):
        if fut.done():
            return
        return orig_sock_connect_cb(self, fut, sock, address)
[issue25593] _sock_connect_cb can be called twice resulting in InvalidStateError
Alexander Mohr added the comment: Actually, I just realized I had fixed it locally by changing the callback to the following:

    def _sock_connect_cb(self, fut, sock, address):
        if fut.cancelled() or fut.done():
            return

so a fix is still needed, and I also verified this happens with Python 3.4 as well. -- status: closed -> open versions: +Python 3.4
[issue25593] _sock_connect_cb can be called twice resulting in InvalidStateError
Alexander Mohr added the comment: I'm going to close this as I've found a work-around; if I find a better test case I'll open a new bug. -- resolution: -> later status: open -> closed
[issue25593] _sock_connect_cb can be called twice resulting in InvalidStateError
New submission from Alexander Mohr: asyncio.selector_events.BaseSelectorEventLoop._sock_connect_cb is a callback driven by the selector for a socket. There are certain situations where the selector triggers twice, calling this callback twice and resulting in an InvalidStateError when it sets the Future's result (None) a second time. The way I triggered this was by having several parallel connections to the same host in a multiprocessing script. I suggest analyzing why this callback can be called twice and figuring out what the correct fix is. I monkey patched it by adding a fut.done() check at the top. If this information is not enough I can try to provide a sample script. It's currently reproducing in a fairly involved multiprocessing script. -- components: asyncio messages: 254433 nosy: gvanrossum, haypo, thehesiod, yselivanov priority: normal severity: normal status: open title: _sock_connect_cb can be called twice resulting in InvalidStateError type: behavior versions: Python 3.5
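The failure mode described above can be reproduced in isolation: an asyncio Future accepts exactly one result, so a second delivery of the connect result raises InvalidStateError. An illustrative snippet (not the tracker's code):

```python
import asyncio

async def demo():
    fut = asyncio.get_running_loop().create_future()
    fut.set_result(None)        # first delivery of the connect result
    try:
        fut.set_result(None)    # duplicate delivery, as in the double callback
    except asyncio.InvalidStateError:
        return 'InvalidStateError'

result = asyncio.run(demo())
print(result)  # InvalidStateError
```

This is why the `fut.done()` (and later `fut.cancelled()`) guard at the top of the callback makes the duplicate invocation harmless.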
[issue25593] _sock_connect_cb can be called twice resulting in InvalidStateError
Alexander Mohr added the comment: attaching my simplified test case; also logged an aiohttp bug: https://github.com/KeepSafe/aiohttp/issues/633 -- Added file: http://bugs.python.org/file41018/test_app.py
[issue25593] _sock_connect_cb can be called twice resulting in InvalidStateError
Alexander Mohr added the comment: btw want to thank you guys for actively looking into this, I'm very grateful!
[issue25593] _sock_connect_cb can be called twice resulting in InvalidStateError
Alexander Mohr added the comment: self.queue is not an async queue; as I stated above, it's a multiprocessing queue. This code multiplexes a multiprocessing queue onto an asyncio queue.
[issue25593] _sock_connect_cb can be called twice resulting in InvalidStateError
Alexander Mohr added the comment: Perhaps I'm doing something really stupid, but I was able to reproduce the two issues I'm having with the following sample script. If you leave the monkey patch disabled, you get the InvalidStateError; if you enable it, you get the ServerDisconnect errors that I'm currently seeing, which I work around with retries. Ideas?

    import asyncio
    import aiohttp
    import multiprocessing
    import aiohttp.server
    import logging
    import traceback

    # Monkey patching
    import asyncio.selector_events

    # http://bugs.python.org/issue25593
    if False:
        orig_sock_connect_cb = asyncio.selector_events.BaseSelectorEventLoop._sock_connect_cb

        def _sock_connect_cb(self, fut, sock, address):
            if fut.done():
                return
            return orig_sock_connect_cb(self, fut, sock, address)

        asyncio.selector_events.BaseSelectorEventLoop._sock_connect_cb = _sock_connect_cb

    class HttpRequestHandler(aiohttp.server.ServerHttpProtocol):
        @asyncio.coroutine
        def handle_request(self, message, payload):
            response = aiohttp.Response(self.writer, 200, http_version=message.version)
            response.add_header('Content-Type', 'text/html')
            response.add_header('Content-Length', '18')
            response.send_headers()
            yield from asyncio.sleep(0.5)
            response.write(b'<h1>It Works!</h1>')  # 18 bytes, matching Content-Length
            yield from response.write_eof()

    def process_worker(q):
        loop = asyncio.get_event_loop()
        # loop.set_debug(True)
        connector = aiohttp.TCPConnector(force_close=False, keepalive_timeout=8, use_dns_cache=True)
        session = aiohttp.ClientSession(connector=connector)
        async_queue = asyncio.Queue(100)

        @asyncio.coroutine
        def async_worker(session, async_queue):
            while True:
                try:
                    print("blocking on asyncio queue get")
                    url = yield from async_queue.get()
                    print("unblocking on asyncio queue get")
                    print("get aqueue size:", async_queue.qsize())
                    response = yield from session.request('GET', url)
                    try:
                        data = yield from response.read()
                        print(data)
                    finally:
                        yield from response.wait_for_close()
                except:
                    traceback.print_exc()

        def producer(q):
            print("blocking on multiprocessing queue get")
            obj2 = q.get()
            print("unblocking on multiprocessing queue get")
            print("get qempty:", q.empty())
            return obj2

        def worker_done(f):
            try:
                f.result()
                print("worker exited")
            except:
                traceback.print_exc()

        workers = []
        for i in range(100):
            t = asyncio.ensure_future(async_worker(session, async_queue))
            t.add_done_callback(worker_done)
            workers.append(t)

        @asyncio.coroutine
        def doit():
            print("start producer")
            obj = yield from loop.run_in_executor(None, producer, q)
            print("finish producer")
            print("blocking on asyncio queue put")
            yield from async_queue.put(obj)
            print("unblocking on asyncio queue put")
            print("put aqueue size:", async_queue.qsize())

        while True:
            loop.run_until_complete(doit())

    def server():
        loop = asyncio.get_event_loop()
        # loop.set_debug(True)
        f = loop.create_server(lambda: HttpRequestHandler(debug=True, keep_alive=75), '0.0.0.0', '8080')
        srv = loop.run_until_complete(f)
        loop.run_forever()

    if __name__ == '__main__':
        q = multiprocessing.Queue(100)
        log_proc = multiprocessing.log_to_stderr()
        log_proc.setLevel(logging.DEBUG)
        p = multiprocessing.Process(target=process_worker, args=(q,))
        p.start()
        p2 = multiprocessing.Process(target=server)
        p2.start()
        while True:
            print("blocking on multiprocessing queue put")
            q.put("http://0.0.0.0:8080")
            print("unblocking on multiprocessing queue put")
            print("put qempty:", q.empty())
[issue14119] Ability to adjust queue size in Executors
Alexander Mohr added the comment: adding support for bounding the internal queue size is critical to avoid chewing through all your memory when you have a LOT of tasks. I just hit this issue myself. If we could have a simple parameter to set the max queue size this would help tremendously! -- nosy: +thehesiod
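Until such a parameter exists, one common workaround for the unbounded-queue problem described above is to guard submit() with a semaphore so only a bounded number of work items are ever pending. A sketch (the class and parameter names are illustrative, not a stdlib API):

```python
import concurrent.futures
import threading

class BoundedExecutor:
    """Wrap ThreadPoolExecutor so submit() blocks once max_pending
    futures are outstanding, keeping memory use bounded."""

    def __init__(self, max_workers, max_pending):
        self._executor = concurrent.futures.ThreadPoolExecutor(max_workers)
        self._sem = threading.BoundedSemaphore(max_pending)

    def submit(self, fn, *args, **kwargs):
        self._sem.acquire()  # blocks the producer when the pool is saturated
        try:
            fut = self._executor.submit(fn, *args, **kwargs)
        except BaseException:
            self._sem.release()
            raise
        fut.add_done_callback(lambda f: self._sem.release())
        return fut

    def shutdown(self, wait=True):
        self._executor.shutdown(wait)
```

The producer thread simply stalls in submit() instead of queueing millions of items, which is usually the desired backpressure behavior.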
[issue25593] _sock_connect_cb can be called twice resulting in InvalidStateError
Alexander Mohr added the comment: update: it's unrelated to the number of sessions or SSL, but instead to the number of concurrent aiohttp requests. When set to 500 I get the error; when set to 100 I do not.
[issue25593] _sock_connect_cb can be called twice resulting in InvalidStateError
Alexander Mohr added the comment: I'm not sure if you guys are still listening on this closed bug, but I think I've found another issue ;) I'm using python 3.5.1 + asyncio 3.4.3 with the latest aiobotocore (which uses aiohttp 0.21.0) and had two sessions (two TCPConnectors): one doing a multitude of GetObjects via HTTP/1.1, and the other doing PutObject; the PutObject session returns error 61 (connection refused) from the same _sock_connect_cb. It feels like a similar issue to the original. I'll see if I can get a small testcase.
[issue21423] concurrent.futures.ThreadPoolExecutor/ProcessPoolExecutor should accept an initializer argument
Alexander Mohr added the comment: any chance of this getting into 3.5.2? I have some gross code to get around it (setting global properties) -- nosy: +thehesiod
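The feature requested in this issue ultimately landed in Python 3.7 as the initializer/initargs arguments to both executors. A small sketch of how it replaces the "set global properties" workaround mentioned above (the helper names are illustrative):

```python
import concurrent.futures
import threading

_local = threading.local()

def _init(value):
    # Runs once in each worker thread before it handles any task,
    # replacing the global-property workaround described above.
    _local.config = value

def task():
    return _local.config

# initializer/initargs were added to ThreadPoolExecutor in Python 3.7.
with concurrent.futures.ThreadPoolExecutor(
        max_workers=2, initializer=_init, initargs=('ready',)) as ex:
    results = [f.result() for f in [ex.submit(task) for _ in range(4)]]
print(results)  # ['ready', 'ready', 'ready', 'ready']
```

ProcessPoolExecutor accepts the same two arguments, which is the usual way to set up per-process state such as database connections.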
[issue23236] asyncio: add timeout to StreamReader read methods
Alexander Mohr added the comment: any updates on this? I think this would be perfect for https://github.com/aio-libs/aiobotocore/issues/31 -- nosy: +thehesiod
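While StreamReader has no built-in read timeouts, the standard workaround is to wrap each read coroutine in asyncio.wait_for(), which cancels the read and raises asyncio.TimeoutError when the deadline passes. A minimal sketch (the wrapper name is illustrative):

```python
import asyncio

async def read_with_timeout(reader, n, timeout):
    # Cancel the pending read and raise asyncio.TimeoutError if no data
    # arrives within `timeout` seconds.
    return await asyncio.wait_for(reader.read(n), timeout)

async def demo():
    # Feed a reader by hand so the example needs no network.
    reader = asyncio.StreamReader()
    reader.feed_data(b'hello')
    reader.feed_eof()
    return await read_with_timeout(reader, 5, timeout=1.0)

data = asyncio.run(demo())
print(data)  # b'hello'
```

The same pattern applies to readline()/readexactly(), at the cost of creating one extra task per wrapped call.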
[issue29406] asyncio SSL contexts leak sockets after calling close with certain Apache servers
Alexander Mohr added the comment: updating the script to make the error case (madis) the default -- Added file: http://bugs.python.org/file46470/scratch_1.py
[issue29406] asyncio SSL contexts leak sockets after calling close with certain Apache servers
Changes by Alexander Mohr <thehes...@gmail.com>: Removed file: http://bugs.python.org/file46469/scratch_1.py
[issue29406] asyncio SSL contexts leak sockets after calling close with certain Apache servers
New submission from Alexander Mohr: with the attached code, note how the HttpClient.connection_lost callback is never called for the madis server. The madis server is an Apache server; I tried with the OSX Apache server and could not reproduce the issue, so it seems to be something particular about their Apache version or configuration. This is a pretty critical issue as close() does not release the socket. -- components: asyncio files: scratch_1.py messages: 286573 nosy: gvanrossum, thehesiod, yselivanov priority: normal severity: normal status: open title: asyncio SSL contexts leak sockets after calling close with certain Apache servers type: resource usage versions: Python 3.5 Added file: http://bugs.python.org/file46469/scratch_1.py
[issue29406] asyncio SSL contexts leak sockets after calling close with certain Apache servers
Alexander Mohr added the comment: Thanks so much for the patch! You may want to fix the spelling of what was supposed to be "shutdown" =) Also I think it's worth a comment stating why it's needed, like: certain Apache servers were observed to not complete the SSL shutdown process.
[issue29302] add contextlib.AsyncExitStack
Changes by Alexander Mohr <thehes...@gmail.com>: Removed file: http://bugs.python.org/file46322/exit_stack.py
[issue29302] add contextlib.AsyncExitStack
Alexander Mohr added the comment: created gist: https://gist.github.com/thehesiod/b8442ed50e27a23524435a22f10c04a0 I've now updated the implementation to support both __aenter__/__aexit__ and __enter__/__exit__ so I don't need two ExitStacks.
[issue22087] asyncio: support multiprocessing (support fork)
Alexander Mohr added the comment: I believe this is now worse due to https://github.com/python/asyncio/pull/452. Before, I was able to simply create a new event loop from sub-processes; now you get the error "Cannot run the event loop while another loop is running". The state of the event loop should not be preserved in sub-processes either. -- nosy: +thehesiod versions: +Python 3.6
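The "create a new event loop in the child" pattern mentioned above can be sketched as follows; the function names are illustrative, and this is a workaround pattern rather than an official fix for the fork problem:

```python
import asyncio
import multiprocessing

def child():
    # In the child process, ignore any loop state inherited across fork
    # and build a fresh event loop instead of reusing the parent's.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        return loop.run_until_complete(asyncio.sleep(0, result='ok'))
    finally:
        loop.close()

if __name__ == '__main__':
    p = multiprocessing.Process(target=child)
    p.start()
    p.join()
```

Forking while a loop (and its selector/self-pipe file descriptors) is running remains unsupported; spawning workers before starting the parent's loop, or using the `spawn` start method, avoids the shared-state problem entirely.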
[issue29302] add contextlib.AsyncExitStack
New submission from Alexander Mohr: ExitStack is a really useful class and it would be great to have an async version. I've gone ahead and created an implementation based on the existing Python 3.5.2 implementation. Let me know what you guys think. I think it would be possible to combine most of the two classes together if you think that would be useful. Let me know if I can/should create a github PR and where to do that. -- components: Library (Lib) files: exit_stack.py messages: 285687 nosy: thehesiod priority: normal severity: normal status: open title: add contextlib.AsyncExitStack type: enhancement versions: Python 3.6, Python 3.7 Added file: http://bugs.python.org/file46322/exit_stack.py
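The class proposed in this issue landed as contextlib.AsyncExitStack in Python 3.7, and it can mix async and sync context managers on one stack. A small usage sketch (the Resource class is a hypothetical stand-in for e.g. an aiohttp session):

```python
import asyncio
import contextlib

class Resource:
    # Hypothetical async context manager standing in for a real resource.
    def __init__(self, name, log):
        self.name, self.log = name, log

    async def __aenter__(self):
        self.log.append('enter ' + self.name)
        return self

    async def __aexit__(self, *exc_info):
        self.log.append('exit ' + self.name)

async def main():
    log = []
    async with contextlib.AsyncExitStack() as stack:
        for name in ('a', 'b'):
            await stack.enter_async_context(Resource(name, log))
        # stack.enter_context() also works here for plain sync managers
    return log

log = asyncio.run(main())
print(log)  # ['enter a', 'enter b', 'exit b', 'exit a']
```

As with ExitStack, resources are unwound in LIFO order, so the last resource entered is the first one closed.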
[issue29302] add contextlib.AsyncExitStack
Alexander Mohr added the comment: Thanks for the feedback Nick! If I get a chance I'll see about refactoring my gist into a base class and two sub-classes, with the async one supporting non-async but not vice-versa; I think it will be cleaner. Sorry, I didn't spend too much effort on the existing gist, as I tried quickly layering on async support to move on to my primary task. Only after taking the time to deconstruct the impl to understand how it works did I notice that the code could use some refactoring.
[issue28342] OSX 10.12 crash in urllib.request getproxies_macosx_sysconf and proxy_bypass_macosx_sysconf
Alexander Mohr added the comment: ya, I did a monkey patch which resolved it:

    if sys.platform == 'darwin':
        import urllib.request
        import botocore.vendored.requests.utils
        botocore.vendored.requests.utils.proxy_bypass = urllib.request.proxy_bypass_environment
        botocore.vendored.requests.utils.getproxies = urllib.request.getproxies_environment
        urllib.request.proxy_bypass = urllib.request.proxy_bypass_environment
        urllib.request.getproxies = urllib.request.getproxies_environment

-- status: pending -> open
[issue28342] OSX 10.12 crash in urllib.request getproxies_macosx_sysconf and proxy_bypass_macosx_sysconf
Alexander Mohr added the comment: I'm sure it would work, I just wanted a solution that didn't require changes to our build infrastructure. btw, if we're marking this as a duplicate of the other bug, can we update the other bug to say it affects python 3.x as well? Thanks! -- status: pending -> open
[issue28342] OSX 10.12 crash in urllib.request getproxies_macosx_sysconf and proxy_bypass_macosx_sysconf
Alexander Mohr added the comment: interestingly I haven't been able to get this to crash in a separate test app. It must be either timing or some interaction with another module. Let me know how you guys would like to proceed; I can definitely reproduce it consistently in our application.
[issue28342] OSX 10.12 crash in urllib.request getproxies_macosx_sysconf and proxy_bypass_macosx_sysconf
New submission from Alexander Mohr: I have a unittest which spawns several processes repeatedly. One of these subprocesses uses botocore, which itself uses the above two methods through the calls proxy_bypass and getproxies. It seems that after re-spawning the processes a few times, the titled calls eventually and repeatedly cause python to crash on 10.12. I have a core file if that would help; zipped it's ~242MB. I've attached a file that shows the lldb callstack and python callstack. -- components: Library (Lib) files: python_urllib_crash.txt messages: 277917 nosy: thehesiod priority: normal severity: normal status: open title: OSX 10.12 crash in urllib.request getproxies_macosx_sysconf and proxy_bypass_macosx_sysconf versions: Python 3.5 Added file: http://bugs.python.org/file44940/python_urllib_crash.txt
[issue29302] add contextlib.AsyncExitStack
Alexander Mohr added the comment: ok, I've updated the gist with a base class and sync + async sub-classes. The way it worked out I think is nice, because we can have the same method names across both sync and async. Let me know what you guys think! btw, it seems the test_dont_reraise_RuntimeError test hangs even with the release version.
[issue29870] ssl socket leak
Alexander Mohr added the comment: adding valgrind log of 3.5.3 on Debian jessie -- Added file: http://bugs.python.org/file46750/valgrind.log.gz
[issue29870] ssl socket leak
Alexander Mohr added the comment: interestingly the valgrind run doesn't show a leak in the profile
[issue29870] ssl socket leak
New submission from Alexander Mohr: When upgrading to 3.5.3 we noticed that the requests module was leaking memory rather quickly. This led me to log the issue: https://github.com/kennethreitz/requests/issues/3933. After more investigation I've found that the leak is caused by the raw python SSL sockets. I've created a test file here: https://gist.github.com/thehesiod/ef79dd77e2df7a0a7893dfea6325d30a which allows you to reproduce the leak with a raw python ssl socket (CLIENT_TYPE = ClientType.RAW), aiohttp or requests. They all leak in a similar way due to their use of the python SSL socket objects. I tried tracing the memory usage with tracemalloc but nothing interesting popped up, so I believe this is a leak in the native code. A docker cloud image is available here: amohr/testing:stretch_request_leak, based on:

```
FROM debian:stretch

COPY request_https_leak.py /tmp/request_https_leak.py

RUN apt-get update && \
    apt-get install -y python3.5 python3-pip git

RUN python3 -m pip install requests git+git://github.com/thehesiod/pyca.git@fix-py3#egg=calib setproctitle requests psutil
```

I believe this issue was introduced in python 3.5.3, as we're not seeing the leak with 3.5.2. Also I haven't verified yet whether this happens on non-debian systems; I'll update if I have any more info. I believe 3.6 is similarly impacted but am not 100% certain yet. -- assignee: christian.heimes components: SSL messages: 289954 nosy: christian.heimes, thehesiod priority: normal severity: normal status: open title: ssl socket leak type: resource usage versions: Python 3.5
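A note on the tracemalloc result mentioned above: tracemalloc only sees memory obtained through Python's allocator, which is exactly why a leak inside native OpenSSL code shows "nothing interesting". A rough snapshot-diff probe of the kind described (the helper name is illustrative):

```python
import tracemalloc

def top_allocations(fn, repeat=100, limit=5):
    """Run fn() repeatedly and report the biggest allocation deltas.

    Only allocations made via Python's allocator are visible, so a leak
    in native code (e.g. OpenSSL structures) will not appear here.
    """
    tracemalloc.start()
    fn()  # warm-up so one-time caches don't dominate the diff
    before = tracemalloc.take_snapshot()
    for _ in range(repeat):
        fn()
    after = tracemalloc.take_snapshot()
    tracemalloc.stop()
    return after.compare_to(before, 'lineno')[:limit]

# Example: a deliberate Python-level "leak" that the probe does see.
leaky = []
stats = top_allocations(lambda: leaky.append(b'x' * 1024))
for stat in stats:
    print(stat)
```

When the probe stays flat while RSS keeps climbing (as in this issue), that is strong evidence the leak lives below the Python allocator, and tools like valgrind or massif are the next step.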
[issue29870] ssl socket leak
Alexander Mohr added the comment: validated that 3.6 on Fedora is affected as well, see the github bug for charts. So it seems all 3.5.3+ versions are affected. I'm guessing it was introduced in one of the SSL changes in 3.5.3: https://docs.python.org/3.5/whatsnew/changelog.html#python-3-5-3 -- versions: +Python 3.6
[issue29870] ssl socket leak
Alexander Mohr added the comment: yes, in the gist I created you can switch between the various clients; by default right now it uses raw sockets.
[issue29870] ssl socket leak
Alexander Mohr added the comment: @pitrou: sys.getallocatedblocks does not seem to increase
[issue29870] ssl socket leak
Alexander Mohr added the comment: ya, my sample script hits google.com, it's pretty fast. It just does a "HEAD".

On Apr 11, 2017, at 9:14 AM, Antoine Pitrou wrote:
Antoine Pitrou added the comment: Is there a fast enough remote server that shows the leak? I've tested with my own remote server (https://pitrou.net/), but it doesn't leak.
[issue29870] ssl socket leak
Alexander Mohr added the comment: the interesting part is it doesn't leak with a local https server; it appears it needs to be a remote server.
[issue29870] ssl socket leak
Alexander Mohr added the comment: see the graphs here: https://github.com/kennethreitz/requests/issues/3933; the x-axis is the number of requests, not what it says (seconds).
[issue29870] ssl socket leak
Alexander Mohr added the comment: awesome! Thanks for finding and proposing a fix, pitrou! btw I found an example of freeing this structure here: http://www.zedwood.com/article/c-openssl-parse-x509-certificate-pem
[issue31061] asyncio segfault when using threadpool and "_asyncio" native module
Alexander Mohr added the comment: another core had a different gc object: $1 = {ob_base = {ob_base = {_ob_next = 0x7f801eac3158, _ob_prev = 0x7f801eab95a0, ob_refcnt = 41, ob_type = 0x7f80238e76e0 }, ob_size = 0}, tp_name = 0x7f801e8967af "_asyncio.Task", tp_basicsize = 128, tp_itemsize = 0, tp_dealloc = 0x7f801e8926e5 , tp_print = 0x0, tp_getattr = 0x0, tp_setattr = 0x0, tp_as_async = 0x7f801ea99720 , tp_repr = 0x7f801e88fa9b , tp_as_number = 0x0, tp_as_sequence = 0x0, tp_as_mapping = 0x0, tp_hash = 0x7f802356b995 <_Py_HashPointer>, tp_call = 0x0, tp_str = 0x7f8023442d05 , tp_getattro = 0x7f802341dc8b , tp_setattro = 0x7f802341e0b5 , tp_as_buffer = 0x0, tp_flags = 807937, tp_doc = 0x7f801ea98bc0 <_asyncio_Task___initdoc__> "Task(coro, *, loop=None)\n--\n\nA coroutine wrapped in a Future.", tp_traverse = 0x7f801e891658 , tp_clear = 0x7f801e89150b , tp_richcompare = 0x7f8023442d42 , tp_weaklistoffset = 96, tp_iter = 0x7f801e890d4f , tp_iternext = 0x0, tp_methods = 0x7f801ea99b20 , tp_members = 0x0, tp_getset = 0x7f801ea99d40 , tp_base = 0x7f801ea9a3c0 , tp_dict = 0x7f801eac2238, tp_descr_get = 0x0, tp_descr_set = 0x0, tp_dictoffset = 88, tp_init = 0x7f801e88d84d <_asyncio_Task___init__>, tp_alloc = 0x7f802343a7f8 , tp_new = 0x7f802343a9c6 , tp_free = 0x7f80235a2d8b , tp_is_gc = 0x0, tp_bases = 0x7f801eab95a0, tp_mro = 0x7f801eabc508, tp_cache = 0x0, tp_subclasses = 0x0, tp_weaklist = 0x7f801eac3458, tp_del = 0x0, tp_version_tag = 4303, tp_finalize = 0x7f801e8922fd }
[issue31061] asyncio segfault when using threadpool and "_asyncio" native module
Alexander Mohr added the comment: I'm hoping this is the fix:

--- Modules/_asynciomodule.c.orig	2017-07-31 12:16:16.0 -0700
+++ Modules/_asynciomodule.c	2017-07-31 13:08:52.0 -0700
@@ -953,15 +953,18 @@
 FutureObj_dealloc(PyObject *self)
 {
     FutureObj *fut = (FutureObj *)self;
+    PyObject_GC_UnTrack(self);
 
     if (Future_CheckExact(fut)) {
         /* When fut is subclass of Future, finalizer is called from
          * subtype_dealloc.
          */
+        _PyObject_GC_TRACK(self);
         if (PyObject_CallFinalizerFromDealloc(self) < 0) {
             // resurrected.
             return;
         }
+        _PyObject_GC_UNTRACK(self);
     }
 
     if (fut->fut_weakreflist != NULL) {
@@ -1828,14 +1831,18 @@
 {
     TaskObj *task = (TaskObj *)self;
 
+    PyObject_GC_UnTrack(self);
+
     if (Task_CheckExact(self)) {
         /* When fut is subclass of Task, finalizer is called from
          * subtype_dealloc.
          */
+        _PyObject_GC_TRACK(self);
         if (PyObject_CallFinalizerFromDealloc(self) < 0) {
             // resurrected.
             return;
         }
+        _PyObject_GC_UNTRACK(self);
     }
 
     if (task->task_weakreflist != NULL) {
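For readers unfamiliar with the resurrection case this patch guards against: at the Python level, the "resurrected" branch corresponds to a finalizer that stores a fresh reference to the dying object, as in this toy example (class name made up for illustration):

```python
import gc

class Resurrector:
    survivor = None

    def __del__(self):
        # Storing self gives the object a new reference, so deallocation
        # must abort -- the C-level analogue is
        # PyObject_CallFinalizerFromDealloc() returning < 0.
        type(self).survivor = self

obj = Resurrector()
del obj  # refcount drops to zero; the finalizer runs and resurrects it

assert Resurrector.survivor is not None     # the object outlived its "death"
assert gc.is_tracked(Resurrector.survivor)  # and the gc must still see it
```

This is why the patch re-tracks the object around the finalizer call: if the finalizer resurrects it, the collector has to keep knowing about it, but during normal teardown it must stay untracked.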
[issue31061] asyncio segfault when using threadpool and "_asyncio" native module
Alexander Mohr added the comment: oh, so this is looking like an asyncio issue, the "gc" that is causing the crash is: (gdb) print *FROM_GC(gc)->ob_type $8 = {ob_base = {ob_base = {_ob_next = 0x7f80c8aafc88, _ob_prev = 0x7f80c8aafd00, ob_refcnt = 7, ob_type = 0x7f80cd8c86e0 }, ob_size = 0}, tp_name = 0x7f80c8aa5c38 "_GatheringFuture", tp_basicsize = 104, tp_itemsize = 0, tp_dealloc = 0x7f80cd41bee7 , tp_print = 0x0, tp_getattr = 0x0, tp_setattr = 0x0, tp_as_async = 0x556ba4342d58, tp_repr = 0x7f80c8870a9b , tp_as_number = 0x556ba4342d70, tp_as_sequence = 0x556ba4342ea8, tp_as_mapping = 0x556ba4342e90, tp_hash = 0x7f80cd54c995 <_Py_HashPointer>, tp_call = 0x0, tp_str = 0x7f80cd423d05 , tp_getattro = 0x7f80cd3fec8b , tp_setattro = 0x7f80cd3ff0b5 , tp_as_buffer = 0x556ba4342ef8, tp_flags = 808449, tp_doc = 0x7f80c8cd7380 "Helper for gather().\n\nThis overrides cancel() to cancel all the children and act more\nlike Task.cancel(), which doesn't immediately mark itself as\ncancelled.\n", tp_traverse = 0x7f80cd41baae , tp_clear = 0x7f80cd41bd5c , tp_richcompare = 0x7f80cd423d42 , tp_weaklistoffset = 96, tp_iter = 0x7f80c8871d4f , tp_iternext = 0x7f80cd3fe6d6 <_PyObject_NextNotImplemented>, tp_methods = 0x0, tp_members = 0x556ba4342f28, tp_getset = 0x0, tp_base = 0x7f80c8a7b3c0 , tp_dict = 0x7f80c8aafc88, tp_descr_get = 0x0, tp_descr_set = 0x0, tp_dictoffset = 88, tp_init = 0x7f80cd431000 , tp_alloc = 0x7f80cd41b7f8 , tp_new = 0x7f80cd41b9c6 , tp_free = 0x7f80cd583d8b , tp_is_gc = 0x0, tp_bases = 0x7f80c8ab20c0, tp_mro = 0x7f80c8aafd00, tp_cache = 0x0, tp_subclasses = 0x0, tp_weaklist = 0x7f80c8aae5d8, tp_del = 0x0, tp_version_tag = 791, tp_finalize = 0x7f80c8870ddb } note: it's a _GatheringFuture.
[issue26617] Assertion failed in gc with __del__ and weakref
Alexander Mohr added the comment: so I just discovered that the object that has the zero refcount has the same tp_dealloc: (gdb) print *FROM_GC(gc)->ob_type $8 = {ob_base = {ob_base = {_ob_next = 0x7f80c8aafc88, _ob_prev = 0x7f80c8aafd00, ob_refcnt = 7, ob_type = 0x7f80cd8c86e0 }, ob_size = 0}, tp_name = 0x7f80c8aa5c38 "_GatheringFuture", tp_basicsize = 104, tp_itemsize = 0, tp_dealloc = 0x7f80cd41bee7 , tp_print = 0x0, tp_getattr = 0x0, tp_setattr = 0x0, tp_as_async = 0x556ba4342d58, tp_repr = 0x7f80c8870a9b , tp_as_number = 0x556ba4342d70, tp_as_sequence = 0x556ba4342ea8, tp_as_mapping = 0x556ba4342e90, tp_hash = 0x7f80cd54c995 <_Py_HashPointer>, tp_call = 0x0, tp_str = 0x7f80cd423d05 , tp_getattro = 0x7f80cd3fec8b , tp_setattro = 0x7f80cd3ff0b5 , tp_as_buffer = 0x556ba4342ef8, tp_flags = 808449, tp_doc = 0x7f80c8cd7380 "Helper for gather().\n\nThis overrides cancel() to cancel all the children and act more\nlike Task.cancel(), which doesn't immediately mark itself as\ncancelled.\n", tp_traverse = 0x7f80cd41baae , tp_clear = 0x7f80cd41bd5c , tp_richcompare = 0x7f80cd423d42 , tp_weaklistoffset = 96, tp_iter = 0x7f80c8871d4f , tp_iternext = 0x7f80cd3fe6d6 <_PyObject_NextNotImplemented>, tp_methods = 0x0, tp_members = 0x556ba4342f28, tp_getset = 0x0, tp_base = 0x7f80c8a7b3c0 , tp_dict = 0x7f80c8aafc88, tp_descr_get = 0x0, tp_descr_set = 0x0, tp_dictoffset = 88, tp_init = 0x7f80cd431000 , tp_alloc = 0x7f80cd41b7f8 , tp_new = 0x7f80cd41b9c6 , tp_free = 0x7f80cd583d8b , tp_is_gc = 0x0, tp_bases = 0x7f80c8ab20c0, tp_mro = 0x7f80c8aafd00, tp_cache = 0x0, tp_subclasses = 0x0, tp_weaklist = 0x7f80c8aae5d8, tp_del = 0x0, tp_version_tag = 791, tp_finalize = 0x7f80c8870ddb } This is for a GatheringFuture, something tells me perhaps there is more to this function that needs to be resolved? 
[issue31095] Checking all tp_dealloc with Py_TPFLAGS_HAVE_GC
Alexander Mohr added the comment: omg I just realized I need the defaultdict one too, great investigation work!
[issue31061] asyncio segfault when using threadpool and "_asyncio" native module
New submission from Alexander Mohr: I have a project in a prod environment which heavily uses asyncio and a threadpool. It uses the threadpool to run CPU heavy tasks (in this case populating a defaultdict) to avoid blocking the main thread (no async code in thread). For some time now my service has been randomly crashing at the same place in the thread which does the dict updating. I've finally got both the python and native stack traces, and based on the information presented it looked very similar to the issue found by the devs at home-assistant (https://github.com/home-assistant/home-assistant/issues/7752#issuecomment-30519, which points to https://github.com/home-assistant/home-assistant/pull/7848). So I tried their fix of disabling the "_asyncio" module, and lo and behold python no longer segfaults. Per the stacktrace it's crashing in PyObject_GC_Del, and the only place this is used in the asyncio module seems to be here: https://github.com/python/cpython/blob/master/Modules/_asynciomodule.c#L996. Does anyone have any idea why it's crashing on this line? Are there thread protections missing in this file? I'm trying to reproduce this in a testcase but it's proving very difficult as I'm guessing it's timing related. -- components: asyncio files: native___python_crash_stacks.txt messages: 299346 nosy: thehesiod, yselivanov priority: normal severity: normal status: open title: asyncio segfault when using threadpool and "_asyncio" native module type: crash versions: Python 3.5, Python 3.6 Added file: http://bugs.python.org/file47043/native___python_crash_stacks.txt
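The prod code isn't something that can be shared here; a stripped-down sketch of the pattern described above (CPU-heavy defaultdict population handed off to a threadpool from an asyncio program, using the 3.7+ asyncio.run spelling for brevity) is:

```python
import asyncio
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def build_index(rows):
    # Pure-Python, CPU-heavy work; runs in a worker thread with no
    # async code, like the dict-updating thread described above.
    index = defaultdict(list)
    for key, value in rows:
        index[key].append(value)
    return index

async def main(rows):
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=4) as pool:
        # The event loop stays responsive while the index is built.
        return await loop.run_in_executor(pool, build_index, rows)

result = asyncio.run(main([("a", 1), ("a", 2), ("b", 3)]))
```

This sketch runs fine on a fixed interpreter; the crash required the timing-sensitive interaction between the worker thread's allocations and gc triggered from the event loop thread.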
[issue31061] asyncio segfault when using threadpool and "_asyncio" native module
Alexander Mohr added the comment: btw I've seen this issue in 3.5.2 + 3.6.2 on debian jessie + stretch
[issue31061] asyncio segfault when using threadpool and "_asyncio" native module
Alexander Mohr added the comment: bad news, I just got a crash in the same place (updating defaultdict) after running for a week with the fixes from this and inada naoki's patches. I think the threadpool may be leaking threads too, as I had > 40 threads after running for a week when I use no more than ~10. I'm going to switch to a debug build and will update when I get more details.
[issue26617] Assertion failed in gc with __del__ and weakref
Alexander Mohr added the comment: I'm tracking a very similar issue to this in bug: http://bugs.python.org/issue31061 Given its similarities, anyone have any ideas? Based on the second callstack I'm starting to think this is an issue with defaultdict -- nosy: +thehesiod
[issue31061] asyncio segfault when using threadpool and "_asyncio" native module
Alexander Mohr added the comment: btw got a slightly different stacktrace on the second core file -- Added file: http://bugs.python.org/file47051/python crash2.txt
[issue31061] asyncio segfault when using threadpool and "_asyncio" native module
Alexander Mohr added the comment: the problem with this crash is that it only happens periodically in our prod environment :( If I try running the exact same docker container with the same inputs locally it doesn't reproduce, so frustrating. I've created a whole workflow now for deploying with a debug python to get a core file with symbols. Hopefully have some more info w/in a day. Thanks for the tips!
[issue31061] asyncio segfault when using threadpool and "_asyncio" native module
Alexander Mohr added the comment: so looks like disabling the _asyncio module just caused the crash to happen less often, closing and will continue investigating after I get a core file -- stage: -> resolved status: open -> closed
[issue31061] asyncio segfault when using threadpool and "_asyncio" native module
Alexander Mohr added the comment: ok got a full debug core file, let me know what other information I can provide. -- status: closed -> open Added file: http://bugs.python.org/file47049/python crash.txt
[issue31061] asyncio segfault when using threadpool and "_asyncio" native module
Alexander Mohr added the comment: this is the comment on the assert:

/* Python's cyclic gc should never see an incoming refcount
 * of 0: if something decref'ed to 0, it should have been
 * deallocated immediately at that time.
 * Possible cause (if the assert triggers): a tp_dealloc
 * routine left a gc-aware object tracked during its teardown
 * phase, and did something--or allowed something to happen--
 * that called back into Python. gc can trigger then, and may
 * see the still-tracked dying object. Before this assert
 * was added, such mistakes went on to allow gc to try to
 * delete the object again. In a debug build, that caused
 * a mysterious segfault, when _Py_ForgetReference tried
 * to remove the object from the doubly-linked list of all
 * objects a second time. In a release build, an actual
 * double deallocation occurred, which leads to corruption
 * of the allocator's internal bookkeeping pointers. That's
 * so serious that maybe this should be a release-build
 * check instead of an assert?
 */

I've also attached a file that's similar to the code we run in production, however I couldn't get it to reproduce the crash. In the datafile it uses it has some tuples like the following:

StationTuple = namedtuple('StationTuple', ['stationname', 'stationsubtype', 's2id'])

-- Added file: http://bugs.python.org/file47050/python_crash.py
[issue31061] asyncio segfault when using threadpool and "_asyncio" native module
Alexander Mohr added the comment: hmm, how would I do that? btw I'm not 100% sure this is due to asyncio.
[issue31061] asyncio segfault when using threadpool and "_asyncio" native module
Alexander Mohr added the comment: ok, created: https://github.com/python/cpython/pull/2966. There are some other deallocs in there, mind verifying the rest? -- pull_requests: +3014
[issue31095] Checking all tp_dealloc with Py_TPFLAGS_HAVE_GC
Alexander Mohr added the comment: I suggest any places that don't need the calls should have comments so that future reviewers know why.
[issue31095] Checking all tp_dealloc with Py_TPFLAGS_HAVE_GC
Alexander Mohr added the comment: actually another idea: could the PR for this also update https://docs.python.org/2/c-api/typeobj.html#c.PyTypeObject.tp_dealloc to mention these macros and when they should be used? That, along with all the other locations correctly calling these macros, and having comments where they're not needed, hopefully should prevent this from happening again.
[issue31095] Checking all tp_dealloc with Py_TPFLAGS_HAVE_GC
Alexander Mohr added the comment: should the base method which calls tp_dealloc do this? Maybe we can kill all birds with one stone. -- nosy: +thehesiod
[issue31061] asyncio segfault when using threadpool and "_asyncio" native module
Alexander Mohr added the comment: I've verified that this, along with the changes in 31095, resolves the crashes I've been seeing in our production environment
[issue31061] asyncio segfault when using threadpool and "_asyncio" native module
Alexander Mohr added the comment: hmm, may be my fault due to a docker image tagging issue. Will redeploy and update if the issue persists. If I don't reply again, sorry for the noise.
[issue30698] asyncio sslproto do not shutdown ssl layer cleanly
Changes by Alexander Mohr <thehes...@gmail.com>: -- nosy: +thehesiod
[issue1025395] email.Utils.parseaddr fails to parse valid addresses
Alexander Mohr added the comment: looks like these were meant to be internal methods, retracting new issues
[issue1025395] email.Utils.parseaddr fails to parse valid addresses
Changes by Alexander Mohr <thehes...@gmail.com>: -- nosy: +thehesiod versions: +Python 3.6, Python 3.7
[issue1025395] email.Utils.parseaddr fails to parse valid addresses
Alexander Mohr added the comment: from 3.6:

>>> AddrlistClass('John Smith <john.smith(comment)@example.org>').getcomment()
''
>>> AddrlistClass('John Smith <john.smith(comment)@example.org>').getdomain()
'JohnSmith'

totally messed up :)
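As a self-contained version of that session (the REPL snippet above assumes AddrlistClass was already imported; it lives in the internal email._parseaddr module):

```python
from email._parseaddr import AddrlistClass

addr = 'John Smith <john.smith(comment)@example.org>'

# getcomment() returns '' because parsing starts at position 0,
# which is 'J', not '(' -- the comment is never reached.
comment = AddrlistClass(addr).getcomment()

# getdomain() consumes the display-name atoms and stops at '<',
# gluing them together instead of finding 'example.org'.
domain = AddrlistClass(addr).getdomain()

print(repr(comment), repr(domain))  # '' 'JohnSmith'
```

Both methods assume the parser is already positioned at the relevant token, which is why calling them on a fresh instance produces these surprising results.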
[issue31095] Checking all tp_dealloc with Py_TPFLAGS_HAVE_GC
Alexander Mohr added the comment: my vote is yes due to the defaultdict issue. We were hitting this in our prod env
[issue29302] add contextlib.AsyncExitStack
Alexander Mohr added the comment: let me know if I need to do anything
[issue33565] strange tracemalloc results
Change by Alexander Mohr <thehes...@gmail.com>: -- type: -> behavior
[issue33565] strange tracemalloc results
New submission from Alexander Mohr <thehes...@gmail.com>: while investigating https://github.com/boto/botocore/issues/1464 I used tracemalloc (like I've done before in 3.5.2) to try to figure out where the leak was. To my surprise tracemalloc listed stacks that didn't make any sense. The strangest example is the top result when running the attached script against python 3.6.5 in the following manner:

PYTHONMALLOC=malloc /valgrind/bin/python3 /tmp/test.py head_object

The top hit is listed as:

21 memory blocks: 4.7 KiB
  File "/tmp/test.py", line 28
    raise
  File "/tmp/test.py", line 47
    test(s3_client)
  File "/tmp/test.py", line 65
    main()

how is it that the "raise" is a leak? It doesn't make any sense to me, especially given that no contexts are used in that call. Further, that line is never hit because the exception is not re-thrown. Further, a bunch of regular expression allocs don't make any sense either, given that I've cleared the cache before doing snapshots. if someone could shed some light on why this is/isn't a bug that would be great. It seems to me that the callstacks are not related at all to the leak. -- components: Library (Lib) files: tracemalloc_test.py messages: 317002 nosy: thehesiod priority: normal severity: normal status: open title: strange tracemalloc results versions: Python 3.6 Added file: https://bugs.python.org/file47600/tracemalloc_test.py
[issue33565] strange tracemalloc results
Alexander Mohr <thehes...@gmail.com> added the comment: I realize it doesn't track leaks, it's a tool to help find leaks when used correctly :) This example should be similar to using the compare-snapshots mechanism, as I start tracking from a stable point (several iterations in, after a gc), and then compare to another stable point several iterations later. I have a much more complicated set-up at our company but wanted to keep the example short, as people complain here about large examples. Further, I realize how tracemalloc works; I have a lot of experience in leak hunting from my c++ days, I've even written my own native version of tracemalloc before (it's not hard). The top stat is what bubbles up as the largest leak after a number of runs, that's why the results are so peculiar. I've used tracemalloc before to find https://bugs.python.org/issue29870 in 3.5.2 and there the results made sense; here it makes no sense. To my understanding there should not be any interned strings or other items that would cause this particular callstack to be the top hit of unreleased blocks of memory (leaks). I still don't see any credible reason why that callstack would be returned. I still believe there's a real bug here; perhaps there's a leak inside the python interpreter implementation it's trying to point out? I think it's worth investigating.
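For reference, the two-stable-points methodology described here, reduced to a minimal self-contained form with an artificial leak standing in for the client code:

```python
import tracemalloc

tracemalloc.start(25)  # record up to 25 frames per allocation

leaked = []

def iteration():
    # Stand-in for one request cycle; each call strands one 4 KiB block.
    leaked.append(bytearray(4096))

for _ in range(10):  # several iterations in: let caches settle first
    iteration()
baseline = tracemalloc.take_snapshot()

for _ in range(100):  # ...then many more iterations
    iteration()
current = tracemalloc.take_snapshot()

# The diff between the two stable points should point straight at the
# bytearray allocation, with roughly 100 new blocks.
top = current.compare_to(baseline, "lineno")[0]
print(top)
```

When the diff instead points at frames like a bare `raise`, that is the surprising behavior this issue is about: tracemalloc reports the allocation site of whatever happened to allocate the block, which with caches and object reuse can be far from the code that retains it.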
[issue33565] strange tracemalloc results
Alexander Mohr <thehes...@gmail.com> added the comment: here's another problem, if I change that function to:

def test(s3_client):
    try:
        method = getattr(s3_client, sys.argv[1])
        method(Bucket='archpi.dabase.com', Key='doesnotexist')
    except botocore.exceptions.ClientError as e:
        if e.response['ResponseMetadata']['HTTPStatusCode'] != 404:
            raise
    print('.', end='', flush=True)

the print happens every time and the raise does not ever get hit, yet the raise line is still marked as the top "leak" (unfreed blocks). It's really strange. Furthermore, the second top "leak" is this stack:

File "/tmp/Python-3.6.5/Lib/sre_compile.py", line 439
    return prefix, prefix_skip, False
File "/tmp/Python-3.6.5/Lib/sre_compile.py", line 498
    prefix, prefix_skip, got_all = _get_literal_prefix(pattern)
File "/tmp/Python-3.6.5/Lib/sre_compile.py", line 548
    _compile_info(code, p, flags)

which doesn't make sense either; why would the return line be marked as having allocated anything? and btw I did change the implementation to start early, and then compare differences between a snapshot at iter 20 and iter 100, and the same holds true as I stated. There should logically be no difference between starting it later, or starting at the beginning and comparing two snapshots, as all you're doing in either case is comparing two snapshots.
[issue33565] strange tracemalloc results
Alexander Mohr <thehes...@gmail.com> added the comment: so based on my last comment I just realized we could easily see if something was amiss by comparing results from python 3.5.2 to 3.6.5, and lo and behold the callstack in question does not appear in the tracemalloc results from 3.5.2.
[issue33565] strange tracemalloc results
Alexander Mohr <thehes...@gmail.com> added the comment: actually it does show in 3.5.2; it doesn't show when using get_object, but it does when using head_object, and they both throw the same exception, so weird.
[issue33526] hashlib leak on import
New submission from Alexander Mohr <thehes...@gmail.com>: I'm seeing a lot of leaks via valgrind against the hashlib module. It appears that it's calling OpenSSL_add_all_digests() on init, and never calling the corresponding EVP_cleanup on free: https://www.openssl.org/docs/man1.1.0/crypto/OpenSSL_add_all_digests.html. I see a ton of leaks like the following:

==27765== 24 bytes in 1 blocks are still reachable in loss record 13 of 10,294
==27765==    at 0x4C28C7B: malloc (vg_replace_malloc.c:299)
==27765==    by 0xA92E337: CRYPTO_malloc (in /usr/lib64/libcrypto.so.1.0.2k)
==27765==    by 0xA9E325A: lh_insert (in /usr/lib64/libcrypto.so.1.0.2k)
==27765==    by 0xA93103E: OBJ_NAME_add (in /usr/lib64/libcrypto.so.1.0.2k)
==27765==    by 0xA9F3559: OpenSSL_add_all_digests (in /usr/lib64/libcrypto.so.1.0.2k)
==27765==    by 0xA44CF02: PyInit__hashlib (_hashopenssl.c:998)
==27765==    by 0x506E627: _PyImport_LoadDynamicModuleWithSpec (importdl.c:154)
==27765==    by 0x506DBA7: _imp_create_dynamic_impl (import.c:2008)
==27765==    by 0x5067A2A: _imp_create_dynamic (import.c.h:289)
==27765==    by 0x4F3061A: PyCFunction_Call (methodobject.c:114)
==27765==    by 0x503E10C: do_call_core (ceval.c:5074)
==27765==    by 0x5035F30: _PyEval_EvalFrameDefault (ceval.c:3377)
==27765==    by 0x502280F: PyEval_EvalFrameEx (ceval.c:718)
==27765==    by 0x503A944: _PyEval_EvalCodeWithName (ceval.c:4139)
==27765==    by 0x503DA4D: fast_function (ceval.c:4950)
==27765==    by 0x503D3FC: call_function (ceval.c:4830)
==27765==    by 0x5035563: _PyEval_EvalFrameDefault (ceval.c:3295)
==27765==    by 0x502280F: PyEval_EvalFrameEx (ceval.c:718)
==27765==    by 0x503D70D: _PyFunction_FastCall (ceval.c:4891)
==27765==    by 0x503D922: fast_function (ceval.c:4926)
==27765==    by 0x503D3FC: call_function (ceval.c:4830)
==27765==    by 0x5035563: _PyEval_EvalFrameDefault (ceval.c:3295)
==27765==    by 0x502280F: PyEval_EvalFrameEx (ceval.c:718)
==27765==    by 0x503D70D: _PyFunction_FastCall (ceval.c:4891)
==27765==    by 0x503D922: fast_function (ceval.c:4926)
==27765==    by 0x503D3FC: call_function (ceval.c:4830)
==27765==    by 0x5035563: _PyEval_EvalFrameDefault (ceval.c:3295)
==27765==    by 0x502280F: PyEval_EvalFrameEx (ceval.c:718)
==27765==    by 0x503D70D: _PyFunction_FastCall (ceval.c:4891)
==27765==    by 0x503D922: fast_function (ceval.c:4926)
==27765==    by 0x503D3FC: call_function (ceval.c:4830)
==27765==    by 0x5035563: _PyEval_EvalFrameDefault (ceval.c:3295)
==27765==    by 0x502280F: PyEval_EvalFrameEx (ceval.c:718)
==27765==    by 0x503D70D: _PyFunction_FastCall (ceval.c:4891)
==27765==    by 0x503D922: fast_function (ceval.c:4926)

I'm not exactly sure how this is happening yet (I know the code I use does a __import__ and uses multiple threads). It sounds like this call should be ref-counted or perhaps only done once for the life of the application. -- components: Extension Modules messages: 316723 nosy: thehesiod priority: normal severity: normal status: open title: hashlib leak on import type: resource usage versions: Python 3.6
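As a point of comparison: module initialization (and therefore OpenSSL's digest registration) runs a single time per process even when __import__ is called from several threads, so the "still reachable" records above are a one-time startup cost rather than a per-call leak. A quick sanity check:

```python
import threading

results = {}

def worker(i):
    # __import__ from a worker thread, as the reporting code does;
    # after the first import this is just a lookup in sys.modules.
    hashlib = __import__("hashlib")
    results[i] = hashlib.md5(b"").hexdigest()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All threads see the same, already-initialized module.
print(results[0])  # d41d8cd98f00b204e9800998ecf8427e
```

If the digest table were being re-registered per import or per thread, valgrind's block counts would scale with the number of imports instead of staying constant.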
[issue33565] strange tracemalloc results
Alexander Mohr <thehes...@gmail.com> added the comment: that's not going to affect http://pytracemalloc.readthedocs.io/api.html#get_traced_memory. There is no filter for that :) as to your sum, that's exactly what my original callstack lists: "21 memory blocks: 4.7 KiB". This means 21 blocks were not released, and in this case leaked, because nothing should be held onto after the first iteration (creating the initial connector in the connection pool). In the head_object case there's going to be a new connector per iteration; however, the old one should go away.
[issue33565] strange tracemalloc results
Alexander Mohr <thehes...@gmail.com> added the comment: I believe your method is flawed: when tracemalloc is enabled, it uses memory as well, to store the traces. I still believe you need to use the method I mentioned; further, even if we don't take the total memory into account, the traces I mentioned need to be explained.
[issue33565] strange tracemalloc results
Alexander Mohr <thehes...@gmail.com> added the comment: INADA Naoki: Unfortunately you'll need to use credentials from a free AWS account: https://aws.amazon.com/free/. Then create a credentials file in ~/.aws/credentials: https://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html
[issue33565] strange tracemalloc results
Alexander Mohr <thehes...@gmail.com> added the comment:

OK, fair enough. One caveat is that it's not necessarily the end of the function: I was able to extend the function by a few more lines and the callstack stayed the same, so I'm guessing the interpreter was smart enough to realize the calls below the try/except held no references. It would be nice if there were a virtual `socket.__del__` or something at the end of the stack, basically a way to plug into the extension callbacks; that way we would have a little more visibility.

Closing, thanks guys. This fixes the issue in botocore. On to the next related leak, found via aiobotocore in aiohttp, where there are now no tracemalloc entries at all, so I'm guessing a leak through the ssl module into OpenSSL :( Thanks again for the help, I really appreciate it. I hope that in the future, by some mechanism, scenarios like these will be a lot easier to decipher.

--
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed
[issue33565] strange tracemalloc results
Alexander Mohr <thehes...@gmail.com> added the comment:

This is how my friends talk here; see https://english.stackexchange.com/questions/11816/is-guy-gender-neutral
[issue33565] strange tracemalloc results
Alexander Mohr <thehes...@gmail.com> added the comment:

Of course, sorry; I meant it in a gender-neutral way.
[issue33565] strange tracemalloc results
Alexander Mohr <thehes...@gmail.com> added the comment:

Yes, memory does go up. If you click the botocore bug link you'll see a graph of memory usage over time.
[issue33565] strange tracemalloc results
Alexander Mohr <thehes...@gmail.com> added the comment:

Here's a version that tries to do something similar but does not reproduce the issue.

--
Added file: https://bugs.python.org/file47602/tracemalloc_test2.py
[issue33526] hashlib leak on import
Alexander Mohr <thehes...@gmail.com> added the comment:

Closing, as I'm not quite sure this is right.

--
stage:  -> resolved
status: open -> closed
[issue27535] Ignored ResourceWarning warnings leak memory in warnings registries
Alexander Mohr <thehes...@gmail.com> added the comment:

Not fixing this means that 3.6 slowly leaks in production for many people. It's not always possible to fix every warning in a large dynamic application, so I highly suggest finding a way to get this into 3.6. I bet there are a lot of frustrated people out there who don't know why their applications slowly leak.

--
nosy: +thehesiod
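The registry growth is easy to demonstrate in a sketch (illustrative code, not from the issue). The pre-fix behavior registered even ignored warnings; on current Pythons the "default" action shows the same mechanism, where each distinct warning message adds a key to the emitting module's __warningregistry__, which is exactly what grows without bound when messages contain variable data:

```python
import warnings

def emit(i):
    # a message with variable data creates a new registry key each time
    warnings.warn(f"unclosed resource #{i}", ResourceWarning)

with warnings.catch_warnings(record=True):
    warnings.simplefilter("default", ResourceWarning)
    for i in range(50):
        emit(i)

# warn() records each seen (message, category, lineno) key in the registry
# of the module that emitted the warning
reg = globals().get("__warningregistry__", {})
print(len(reg) >= 50)  # → True
```

In a long-running service that warns with per-request data in the message, this dict never stops growing.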
[issue33565] strange tracemalloc results
Alexander Mohr <thehes...@gmail.com> added the comment:

I'll try with that fix and see if the leak goes away. Is the reasoning that the warning happens after the try/except scope, and that's why the callstack includes it?
[issue33565] strange tracemalloc results
Alexander Mohr <thehes...@gmail.com> added the comment:

OK, I've verified that the patch does indeed fix the detected leak. Thank you very much, INADA, for knowing that there was a leak in the warnings module; I would never have guessed it, especially given the tracemalloc stack. Had it shown a callstack where the warning was created, it would have made a lot more sense.

I agree this can be closed, but can the leak fix PLEASE be put into 3.6 (and any other version that needs it)? Who cares if warnings are 1.4x slower with the fix? Are you seriously saying that keeping warnings fast is more important than fixing leaks? In most applications there should be no warnings, so it doesn't really matter. This particular leak was causing our application to fail after running for a few days, which makes it unusable in production, and it has cost me many days of investigation. Leaks should be the highest priority, then speed. No rational developer would complain that warnings got slower; a slowdown is a reason to optimize warnings later, not to leave a leak in place! :)
[issue33565] strange tracemalloc results
Alexander Mohr <thehes...@gmail.com> added the comment:

Actually, thinking about this more on my way to work, this should NOT be closed. The callstack I initially mentioned still has no explanation, and we now know it is not correct: it should either have listed something related to warnings, or nothing at all. There is no documentation describing this behavior; think about it, it would have to read something like "tracemalloc may give completely irrelevant callstacks". So I think this callstack still needs to be explained, and either:

1) the module should be fixed to give something more relevant (to give developers some foothold to realize this was related to warnings), or
2) for this scenario the callstack should be removed, to inform the developer that they should manually track the allocations in gdb or with some other mechanism.

It would be really nice to know which C callstacks (with parameters) trigger this tracemalloc stack.
[issue32448] subscriptable
Alexander Mohr <thehes...@gmail.com> added the comment:

Oh, for the second example I meant something like this:

>>> class Foo: pass
>>> Foo.organizer = None
>>> Foo.blah = Foo
>>> Foo.blah.organizer = None
>>> Foo.blah.organizer[0]

Yeah, this is just a pie-in-the-sky request :)
[issue32448] subscriptable
New submission from Alexander Mohr <thehes...@gmail.com>:

Currently, subscripting an attribute that's None reports the following:

>>> class Foo: pass
>>> Foo.organizer = None
>>> Foo.organizer['start']
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object is not subscriptable

What would be nice is if it reported the name of the attribute that is not subscriptable, as that would greatly help in the logs, something like:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object of attribute 'organizer' is not subscriptable

Just a thought. Otherwise one needs to sprinkle the code with asserts, especially for a compound expression like Foo.organizer.blah[0], where you wouldn't know which attribute was None.

--
components: Interpreter Core
messages: 309187
nosy: thehesiod
priority: normal
severity: normal
status: open
title: subscriptable
type: enhancement
versions: Python 3.7, Python 3.8
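As a workaround today, a small sketch of a helper that produces the requested message (the helper name and message format are made up for illustration, not part of any API):

```python
def subscript(obj, attr, key):
    # hypothetical helper: subscript getattr(obj, attr), naming the
    # attribute in the error instead of just the value's type
    value = getattr(obj, attr)
    try:
        return value[key]
    except TypeError:
        raise TypeError(
            f"{type(value).__name__!r} object of attribute {attr!r}"
            " is not subscriptable"
        ) from None

class Foo:
    pass

Foo.organizer = None

try:
    subscript(Foo, "organizer", "start")
except TypeError as e:
    msg = str(e)

print(msg)  # → 'NoneType' object of attribute 'organizer' is not subscriptable
```

For compound expressions this would still need one call per attribute hop, which is the clutter the feature request is trying to avoid.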
[issue30698] asyncio sslproto do not shutdown ssl layer cleanly
Alexander Mohr <thehes...@gmail.com> added the comment:

@grzgrzgrz3: does this resolve the issue in https://bugs.python.org/issue29406? I'm guessing you based this PR on that issue. If so, I'd like it merged ASAP, as otherwise our production services will be incompatible with all future Python releases, given that the original "fix" was reverted. Thanks!
[issue29406] asyncio SSL contexts leak sockets after calling close with certain Apache servers
Alexander Mohr <thehes...@gmail.com> added the comment:

My understanding is that the PR in https://bugs.python.org/issue30698 fixes this issue, no? If so, can we get it merged?
[issue32754] feature request: asyncio.gather/wait cancel children on first exception
New submission from Alexander Mohr <thehes...@gmail.com>:

Currently gather/wait allow you to return on the first exception while leaving the children executing. A very common use case I have is launching multiple tasks where, if any of them fail, all should fail; otherwise the other tasks would continue running with no one listening for their results. To accomplish this I wrote a method like the following: https://gist.github.com/thehesiod/524a1f005d0f3fb61a8952f272d8709e. I think it would be useful to many others, perhaps as an optional parameter to each of these methods. What do you guys think?

--
components: asyncio
messages: 311527
nosy: asvetlov, thehesiod, yselivanov
priority: normal
severity: normal
status: open
title: feature request: asyncio.gather/wait cancel children on first exception
versions: Python 3.8
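A rough sketch of the idea (illustrative, not the gist's exact code; the helper name is made up, and on Python 3.11+ asyncio.TaskGroup provides this cancel-siblings-on-failure behavior out of the box):

```python
import asyncio

async def gather_cancel_on_error(*coros):
    # run the coroutines; if any raises, cancel the rest before re-raising,
    # instead of leaving them running unobserved as gather() does
    tasks = [asyncio.ensure_future(c) for c in coros]
    try:
        return await asyncio.gather(*tasks)
    except BaseException:
        for t in tasks:
            t.cancel()
        # let the cancellations settle before propagating the failure
        await asyncio.gather(*tasks, return_exceptions=True)
        raise

async def slow():
    await asyncio.sleep(60)  # would run for a minute if not cancelled

async def failing():
    raise RuntimeError("boom")

async def main():
    try:
        await gather_cancel_on_error(slow(), failing())
    except RuntimeError as e:
        return str(e)

print(asyncio.run(main()))  # → boom
```

This returns almost immediately because slow() is cancelled as soon as failing() raises.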
[issue29317] test_copyxattr_symlinks fails
Change by Alexander Mohr <thehes...@gmail.com>:

--
nosy: +thehesiod
versions: +Python 3.5
[issue32811] test_os.py fails when run in docker container on OSX host
Alexander Mohr <thehes...@gmail.com> added the comment:

Sorry if my report is confusing; the issue occurs when run in a debian:stretch Docker container on an OSX host, i.e. running:

docker run --rm -ti debian:stretch

on OSX. So if you have access to an OSX machine with Docker running (18.02.0-ce-mac53), it should reproduce there.
[issue32811] test_os.py fails when run in docker container on OSX host
Alexander Mohr <thehes...@gmail.com> added the comment:

Here's how to reproduce. Download a fresh debian:stretch image, then install the requirements for building Python:

apt-get update && apt-get install curl build-essential libssl-dev libffi-dev libmemcached-dev zlib1g-dev

Install pyenv-installer:

curl -L https://raw.githubusercontent.com/pyenv/pyenv-installer/master/bin/pyenv-installer | bash

Install and activate Python 3.6.3:

pyenv install 3.6.3 && pyenv global 3.6.3

Then run the attached test script, which I generated from the unittest.

--
Added file: https://bugs.python.org/file47436/test.py
[issue32811] test_os.py fails when run in docker container on OSX host
Alexander Mohr <thehes...@gmail.com> added the comment:

By the way, some other tests still fail after removing that test, e.g.:

test test_tokenize failed -- Traceback (most recent call last):
  File "/build/Python-3.6.3/Lib/test/test_tokenize.py", line 1557, in test_random_files
    testfiles.remove(os.path.join(tempdir, "test_%s.py") % f)
ValueError: list.remove(x): x not in list