New submission from Alexander Mohr:
I have a unittest which spawns several processes repeatedly. One of these
subprocesses uses botocore, which itself uses the above two methods through
its calls to proxy_bypass and getproxies. It seems that after re-spawning a
few times, the titled
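For context, the two urllib helpers named above can be exercised directly; a minimal sketch (the host passed to proxy_bypass is illustrative):

```python
import urllib.request

# getproxies() reads proxy settings from environment variables
# (and, on macOS/Windows, from the system configuration).
proxies = urllib.request.getproxies()

# proxy_bypass() reports whether a host should skip the proxy,
# consulting no_proxy / the system bypass list.
bypass = urllib.request.proxy_bypass("localhost")
```

botocore reaches these indirectly via its proxy handling, so any crash in them surfaces deep inside a request.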
Alexander Mohr added the comment:
any updates on this? I think this would be perfect for
https://github.com/aio-libs/aiobotocore/issues/31
--
nosy: +thehesiod
___
Python tracker
<http://bugs.python.org/issue23
Alexander Mohr added the comment:
any chance of this getting into 3.5.2? I have some gross code to work around it
(setting global properties)
Alexander Mohr added the comment:
sorry for the disruption! Turns out our router seems to be doing some kind of
QoS limiting on the # of connections :(
Alexander Mohr added the comment:
update: it's unrelated to the number of sessions or to SSL; it's tied to the
number of concurrent aiohttp requests. When set to 500 I get the error; when
set to 100 I do not.
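A common way to cap in-flight requests is a semaphore; a sketch with a placeholder coroutine standing in for the aiohttp call (the URLs and the limit of 100, mirroring the value that worked above, are illustrative):

```python
import asyncio

async def fetch(url, sem):
    # The semaphore caps how many fetches run at once; a real
    # version would perform an aiohttp GET inside this block.
    async with sem:
        await asyncio.sleep(0)  # stand-in for the network I/O
        return url

async def main():
    sem = asyncio.Semaphore(100)  # cap concurrency well below the failing 500
    urls = ["http://example.com/%d" % i for i in range(500)]
    return await asyncio.gather(*(fetch(u, sem) for u in urls))

results = asyncio.run(main())
```

All 500 coroutines are scheduled, but at most 100 are past the `async with` at any moment, which keeps connection counts bounded.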
Alexander Mohr added the comment:
I'm not sure if you guys are still listening on this closed bug, but I think
I've found another issue ;) I'm using Python 3.5.1 + asyncio 3.4.3 with the
latest aiobotocore (which uses aiohttp 0.21.0) and had two sessions (two
TCPConnector
Alexander Mohr added the comment:
btw, I want to thank you guys for actively looking into this; I'm very grateful!
Alexander Mohr added the comment:
attaching my simplified testcase and logged an aiohttp bug:
https://github.com/KeepSafe/aiohttp/issues/633
--
Added file: http://bugs.python.org/file41018/test_app.py
Alexander Mohr added the comment:
Perhaps I'm doing something really stupid, but I was able to reproduce the two
issues I'm having with the following sample script. If you leave the monkey
patch disabled you get the InvalidStateError; if you enable it, you get the
ServerDisconnectedError
Alexander Mohr added the comment:
self.queue is not an async queue; as I stated above, it's a multiprocessing
queue. This code multiplexes a multiprocessing queue onto an async queue.
Alexander Mohr added the comment:
clarification: adding the fut.done() check, or monkey patching:

orig_sock_connect_cb = asyncio.selector_events.BaseSelectorEventLoop._sock_connect_cb

def _sock_connect_cb(self, fut, sock, address):
    if fut.done():
        return
    return orig_sock_connect_cb(self, fut, sock, address)
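The guard can be seen in isolation: without the fut.done() check, a second invocation of a result-setting callback raises InvalidStateError. A minimal sketch (the callback below is simplified, not the real asyncio internals):

```python
import asyncio

def sock_connect_cb(fut):
    # Guard: the selector may fire the callback twice, and a done
    # future must not have its result set a second time.
    if fut.done():
        return
    fut.set_result(None)

async def main():
    fut = asyncio.get_running_loop().create_future()
    sock_connect_cb(fut)
    sock_connect_cb(fut)  # second call is a no-op thanks to the guard
    return fut.result()

result = asyncio.run(main())  # completes without InvalidStateError
```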
Alexander Mohr added the comment:
Actually, I just realized I had fixed it locally by changing the callback to
the following:
def _sock_connect_cb(self, fut, sock, address):
    if fut.cancelled() or fut.done():
        return

so a fix is still needed, and I also
Alexander Mohr added the comment:
I'm going to close this as I've found a work-around; if I find a better
test-case I'll open a new bug.
--
resolution: -> later
status: open -> closed
Alexander Mohr added the comment:
Sorry for being obscure before, it was hard to pinpoint. I think I just
figured it out! I had code like this in a subprocess:
def worker():
    while True:
        obj = self.queue.get()
        # do work with obj using the asyncio http module

def producer
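One way to multiplex a blocking multiprocessing queue into an asyncio task without stalling the event loop is to run the blocking get() in an executor. A sketch under that assumption (the sentinel protocol and names are illustrative):

```python
import asyncio
import multiprocessing

async def worker(mp_queue):
    loop = asyncio.get_running_loop()
    results = []
    while True:
        # Run the blocking get() in a thread so the event loop stays free.
        obj = await loop.run_in_executor(None, mp_queue.get)
        if obj is None:  # sentinel: shut the worker down
            break
        results.append(obj)  # stand-in for the async work on obj
    return results

mp_queue = multiprocessing.Queue()
mp_queue.put("job-1")
mp_queue.put(None)
items = asyncio.run(worker(mp_queue))
```

Calling mp_queue.get() directly inside the coroutine would block the whole loop, which is the trap the original worker() above falls into.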
New submission from Alexander Mohr:
asyncio.selector_events.BaseSelectorEventLoop._sock_connect_cb is a callback
based on the selector for a socket. There are certain situations where the
selector triggers twice, calling this callback twice and resulting in an
InvalidStateError when it sets the
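The failure mode is easy to reproduce directly: setting a result on an already-done future raises InvalidStateError, which models the callback firing twice:

```python
import asyncio

async def main():
    fut = asyncio.get_running_loop().create_future()
    fut.set_result(1)      # first callback invocation completes the future
    try:
        fut.set_result(2)  # second invocation: the future is already done
    except asyncio.InvalidStateError:
        return "raised"

outcome = asyncio.run(main())
```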
Alexander Mohr added the comment:
adding support for an internal queue size limit is critical to avoid chewing
through all your memory when you have a LOT of tasks. I just hit this issue
myself. If we could have a simple parameter to set the max queue size, it
would help tremendously
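With a bounded asyncio.Queue, put() suspends the producer once maxsize items are pending, which gives the backpressure requested here. A sketch (the sizes are illustrative):

```python
import asyncio

async def bounded_demo():
    queue = asyncio.Queue(maxsize=10)  # producer blocks at 10 pending items
    seen = []

    async def producer():
        for i in range(100):
            await queue.put(i)  # awaits when the queue is full: backpressure
        await queue.put(None)   # sentinel to stop the consumer

    async def consumer():
        while True:
            item = await queue.get()
            if item is None:
                break
            seen.append(item)

    await asyncio.gather(producer(), consumer())
    return seen

items = asyncio.run(bounded_demo())
```

Memory use stays proportional to maxsize rather than to the total number of tasks.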
Alexander Mohr added the comment:
btw, I believe the solution is as simple as stated; that's what I'm doing
locally and it's behaving exactly as intended.
Alexander Mohr added the comment:
Yeah. I think the original request is OK, because allowing that flag will
replace both files and dirs.
On Mar 14, 2014 7:15 PM, "R. David Murray" wrote:
>
> R. David Murray added the comment:
>
> I don't know what "the method a
Alexander Mohr added the comment:
I personally don't think this is worth investing the time for a discussion.
If the maintainers don't want to accept this or a minor variation without a
discussion, I'll just keep my local monkeypatch :) thanks again for the
quick patch, Elias!
On Mar 8, 2014 4:03 PM
Alexander Mohr added the comment:
how about we instead rename the new parameter to dirs_exists_ok or something
like that, since the method already allows for existing files.
Alexander Mohr added the comment:
awesome, thanks so much!!
--
___
Python tracker
<http://bugs.python.org/issue20849>
New submission from Alexander Mohr:
it would be REALLY nice (and REALLY easy) to add a parameter exist_ok and pass
it through to os.makedirs under the same name, so that copytree can append a
src dir to an existing dst dir.
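This request eventually landed in Python 3.8 as shutil.copytree's dirs_exist_ok parameter. A quick sketch of copying a source tree onto an already-existing destination (file names are illustrative):

```python
import shutil
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp, "src")
    dst = Path(tmp, "dst")
    src.mkdir()
    dst.mkdir()  # destination already exists
    (src / "a.txt").write_text("hello")

    # Python 3.8+: merge src into the existing dst instead of raising.
    shutil.copytree(src, dst, dirs_exist_ok=True)
    merged = (dst / "a.txt").read_text()
```

Without dirs_exist_ok=True, copytree raises FileExistsError when dst is already present.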
--
components: Library (Lib)
messages: 212691
nosy