Alexander Mohr added the comment:
Thanks so much for the patch!
You may want to change the spelling of what was supposed to be "shutdown" =) Also, I think it's worth a comment stating why it's needed? Like certain Apache servers were noticed to not complete the
Alexander Mohr added the comment:
I believe this is now worse due to https://github.com/python/asyncio/pull/452. Before, I was able to simply create a new event loop from sub-processes; however, you will now get the error "Cannot run the event loop while another loop is running". The st
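A minimal sketch of the pattern described above: creating and installing a fresh event loop inside a spawned subprocess instead of reusing the parent's (possibly running) loop. The `child` helper name is mine, not from the original code:

```python
import asyncio
import multiprocessing

def child():
    # Install a fresh event loop in the subprocess rather than inheriting
    # the parent's (possibly already-running) loop.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        result = loop.run_until_complete(asyncio.sleep(0, result='ok'))
        print(result)
    finally:
        loop.close()

if __name__ == '__main__':
    p = multiprocessing.Process(target=child)
    p.start()
    p.join()
```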
Alexander Mohr added the comment:
Thanks for the feedback, Nick! If I get a chance I'll see about refactoring my gist into a base class and two sub-classes, with the async one supporting the non-async one but not vice versa. I think it will be cleaner. Sorry I didn't spend too much effort on th
Alexander Mohr added the comment:
Adding support for an internal queue size limit is critical to avoid chewing through all your memory when you have a LOT of tasks; I just hit this issue myself. If we could have a simple parameter to set the max queue size, it would help tremendously
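As a sketch of what such a parameter buys you: a bounded `asyncio.Queue` makes the producer block once the cap is reached, so pending items never pile up in memory. The names and sizes here are illustrative, not from the original code:

```python
import asyncio

async def producer(queue):
    # put() blocks once maxsize items are pending, keeping memory bounded
    for i in range(1000):
        await queue.put(i)

async def consumer(queue):
    total = 0
    for _ in range(1000):
        total += await queue.get()
    return total

async def main():
    queue = asyncio.Queue(maxsize=100)  # the cap this comment asks for
    _, total = await asyncio.gather(producer(queue), consumer(queue))
    print(total)  # 499500 == sum(range(1000))
    return total

asyncio.run(main())
```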
New submission from Alexander Mohr:
asyncio.selector_events.BaseSelectorEventLoop._sock_connect_cb is a callback based on the selector for a socket. There are certain situations where the selector triggers twice, calling this callback twice and resulting in an InvalidStateError when it sets the
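The failure mode can be reproduced directly: calling `set_result()` on a future that is already done raises `InvalidStateError`, which is what happens when the selector fires the callback twice. A minimal sketch, not the tracker's actual test-case:

```python
import asyncio

loop = asyncio.new_event_loop()
fut = loop.create_future()
fut.set_result('connected')        # first callback invocation
try:
    fut.set_result('connected')    # second invocation on the same future
except asyncio.InvalidStateError as e:
    print(type(e).__name__)        # InvalidStateError
loop.close()
```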
Alexander Mohr added the comment:
Sorry for being obscure before; it was hard to pinpoint. I think I just figured it out! I had code like this in a subprocess:

def worker():
    while True:
        obj = self.queue.get()
        # do work with obj using the asyncio http module

def producer
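One way to multiplex a blocking `multiprocessing.Queue` into asyncio without blocking the event loop is to read it in an executor thread and forward items with `run_coroutine_threadsafe`. This is a sketch under my own names (`bridge`, the `None` sentinel), not the original gist:

```python
import asyncio
import multiprocessing

def bridge(mp_queue, aio_queue, loop):
    # Runs in an executor thread: block on the multiprocessing queue and
    # forward each item onto the asyncio queue in a thread-safe way.
    while True:
        obj = mp_queue.get()
        if obj is None:          # sentinel: stop bridging
            break
        asyncio.run_coroutine_threadsafe(aio_queue.put(obj), loop)

async def main():
    mp_queue = multiprocessing.Queue()
    aio_queue = asyncio.Queue()
    loop = asyncio.get_running_loop()
    task = loop.run_in_executor(None, bridge, mp_queue, aio_queue, loop)
    mp_queue.put('work-item')
    mp_queue.put(None)
    item = await aio_queue.get()  # 'work-item'
    await task
    return item

print(asyncio.run(main()))
```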
Alexander Mohr added the comment:
I'm going to close this as I've found a work-around; if I find a better test-case I'll open a new bug.
--
resolution: -> later
status: open -> closed
___
Python tracker
<http://bu
Alexander Mohr added the comment:
Actually, I just realized I had fixed it locally by changing the callback to the following:

def _sock_connect_cb(self, fut, sock, address):
    if fut.cancelled() or fut.done():
        return

So a fix is still needed, and I also
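The same guard generalizes to any callback that may fire more than once: check the future before completing it. A minimal sketch (the helper name `safe_set_result` is mine):

```python
import asyncio

def safe_set_result(fut, value):
    # Idempotent completion: ignore late or duplicate invocations
    # instead of letting set_result() raise InvalidStateError.
    if fut.cancelled() or fut.done():
        return
    fut.set_result(value)

loop = asyncio.new_event_loop()
fut = loop.create_future()
safe_set_result(fut, 'first')
safe_set_result(fut, 'second')   # silently ignored instead of raising
print(fut.result())              # first
loop.close()
```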
Alexander Mohr added the comment:
Clarification: adding the fut.done() check, or monkey patching:

orig_sock_connect_cb = asyncio.selector_events.BaseSelectorEventLoop._sock_connect_cb

def _sock_connect_cb(self, fut, sock, address):
    if fut.done():
        return
    return orig_sock_connect_cb(self
Alexander Mohr added the comment:
self.queue is not an async queue; as I stated above, it's a multiprocessing queue. This code is to multiplex a multiprocessing queue into an asyncio queue.
Alexander Mohr added the comment:
Perhaps I'm doing something really stupid, but I was able to reproduce the two issues I'm having with the following sample script. If you leave the monkey patch disabled, you get the InvalidStateError; if you enable it, you get the ServerDisconn
Alexander Mohr added the comment:
Attaching my simplified test-case; I've also logged an aiohttp bug:
https://github.com/KeepSafe/aiohttp/issues/633
--
Added file: http://bugs.python.org/file41018/test_app.py
Alexander Mohr added the comment:
BTW, I want to thank you guys for actively looking into this; I'm very grateful!
Alexander Mohr added the comment:
I'm not sure if you guys are still listening on this closed bug, but I think I've found another issue ;) I'm using Python 3.5.1 + asyncio 3.4.3 with the latest aiobotocore (which uses aiohttp 0.21.0) and had two sessions (two TCPConnector
Alexander Mohr added the comment:
Update: it's unrelated to the number of sessions or SSL, but instead to the number of concurrent aiohttp requests. When it's set to 500 I get the error; when set to 100 I do not.
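If the root cause is too many simultaneous connections, capping in-flight requests with a semaphore is the usual pattern. A sketch with `asyncio.sleep` standing in for the real aiohttp call (names and counts are illustrative):

```python
import asyncio

async def fetch(sem, i):
    # At most the semaphore's initial value of requests run concurrently.
    async with sem:
        await asyncio.sleep(0)  # stand-in for the real aiohttp request
        return i

async def main():
    sem = asyncio.Semaphore(100)  # cap concurrency at 100 instead of 500
    results = await asyncio.gather(*(fetch(sem, i) for i in range(500)))
    print(len(results))  # 500
    return len(results)

asyncio.run(main())
```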
Alexander Mohr added the comment:
Sorry for the disruption! It turns out our router seems to be doing some kind of QoS limit on the # of connections :(
Alexander Mohr added the comment:
Any chance of this getting into 3.5.2? I have some gross code to get around it (setting global properties).
--
nosy: +thehesiod
Alexander Mohr added the comment:
Any updates on this? I think this would be perfect for
https://github.com/aio-libs/aiobotocore/issues/31
--
nosy: +thehesiod
New submission from Alexander Mohr:
I have a unittest which spawns several processes repeatedly. One of these subprocesses uses botocore, which itself uses the above two methods through the calls proxy_bypass and getproxies. It seems that after re-spawning the processes a few times the titled
Alexander Mohr added the comment:
Interestingly, I haven't been able to get this to crash in a separate test app; it must be either timing-related or some interaction with another module. Let me know how you guys would like to proceed. I can definitely reproduce it consistently i
Alexander Mohr added the comment:
Yes, I did a monkey patch which resolved it:

if sys.platform == 'darwin':
    import botocore.vendored.requests.utils, urllib.request
    botocore.vendored.requests.utils.proxy_bypass = urllib.request.proxy_bypass_e
Alexander Mohr added the comment:
I'm sure it would work; I just wanted a solution that didn't require changes to our build infrastructure. BTW, if we're marking this as a duplicate of the other bug, can we update the other bug to say it affects Python 3.x as well? Thanks!