wabu added the comment:
Thanks for the fixes and integration!
----------
_______________________________________
Python tracker <http://bugs.python.org/issue22685>
_______________________________________
wabu added the comment:
On 21.10.2014 22:41, Guido van Rossum wrote:
> Guido van Rossum added the comment:
>
> Victor, do you think this needs a unittest? It seems kind of difficult to
> test for whether memory fills up (the machine may get wedged if it does :-).
You could check that the reader actually gets a transport, so that it is able to pause reading once its buffer fills up.
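A behavioural test along those lines could look roughly like the sketch below, written in the same 3.4-era @coroutine / yield from style as the rest of this report. The 'yes' child process, the one-second pause, and the bound on the buffered data are illustrative assumptions, not the test that was actually committed, and _buffer is an internal attribute of StreamReader used here only for observation:

import asyncio
import asyncio.subprocess

LIMIT = 16000

@asyncio.coroutine
def check_flow_control():
    # Spawn a child that floods its stdout, with a small read limit.
    proc = yield from asyncio.create_subprocess_exec(
        'yes', stdout=asyncio.subprocess.PIPE, limit=LIMIT)
    try:
        # Deliberately do not read for a while; with working flow control
        # the pipe transport gets paused and the buffer stays small.
        yield from asyncio.sleep(1)
        buffered = len(proc.stdout._buffer)  # internal, observation only
        print('buffered bytes:', buffered)
        # Without flow control, 'yes' piles up many megabytes here.
        assert buffered < 8 * 1024 * 1024, 'reading was never paused'
    finally:
        proc.kill()
        yield from proc.wait()

loop = asyncio.get_event_loop()
loop.run_until_complete(check_flow_control())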
wabu added the comment:
Thanks a lot, the fix works!
On 21.10.2014 22:16, Guido van Rossum wrote:
>
> Guido van Rossum added the comment:
>
> Can you confirm that this patch fixes the problem (without you needing the
> workaround in your own code)?
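For context, the patch presumably wires the subprocess pipe transports to the stream readers when the connection is made. A rough sketch of that idea (not the exact committed code, and only meaningful on an unpatched asyncio where the readers have no transport yet) could be expressed as a subclass of SubprocessStreamProtocol:

import asyncio
from asyncio import subprocess

class PatchedSubprocessStreamProtocol(subprocess.SubprocessStreamProtocol):
    # Sketch of the idea behind the fix: after the base class has created
    # the stdout/stderr StreamReaders, hand them their pipe transports so
    # feed_data() can pause reading when the buffer exceeds the limit.
    def connection_made(self, transport):
        super().connection_made(transport)
        stdout_transport = transport.get_pipe_transport(1)
        if self.stdout is not None and stdout_transport is not None:
            self.stdout.set_transport(stdout_transport)
        stderr_transport = transport.get_pipe_transport(2)
        if self.stderr is not None and stderr_transport is not None:
            self.stderr.set_transport(stderr_transport)

Using such a protocol directly would mean going through loop.subprocess_exec() with it as the protocol factory rather than the create_subprocess_exec() helper.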
wabu added the comment:
Here's a more complete example:

@coroutine
def put_data(filename, queue, chunksize=16000):
    pbzip2 = yield from asyncio.create_subprocess_exec(
        'pbzip2', '-cd', filename,
        stdout=asyncio.subprocess.PIPE, limit=sel
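The message is cut off above. For reference, a self-contained sketch of how such a producer/consumer pair might look; the filename, the queue handling, and the sleep in the consumer are illustrative assumptions, not wabu's original code:

import asyncio
import asyncio.subprocess

@asyncio.coroutine
def put_data(filename, queue, chunksize=16000):
    # Decompress the file with pbzip2 and push the chunks onto the queue.
    pbzip2 = yield from asyncio.create_subprocess_exec(
        'pbzip2', '-cd', filename,
        stdout=asyncio.subprocess.PIPE, limit=chunksize)
    while True:
        chunk = yield from pbzip2.stdout.read(chunksize)
        if not chunk:                    # EOF: pbzip2 closed its stdout
            break
        yield from queue.put(chunk)
    yield from queue.put(None)           # sentinel for the consumer
    yield from pbzip2.wait()

@asyncio.coroutine
def slow_consumer(queue):
    # Deliberately slow consumer: without working flow control the
    # StreamReader buffers far more than `limit` while this lags behind.
    while True:
        chunk = yield from queue.get()
        if chunk is None:
            break
        yield from asyncio.sleep(0.1)    # pretend processing takes a while

loop = asyncio.get_event_loop()
queue = asyncio.Queue(maxsize=4)
loop.run_until_complete(asyncio.gather(
    put_data('data.bz2', queue),
    slow_consumer(queue)))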
wabu added the comment:
Sorry for the confusion, yes, I do use the yield from. The stdout stream for the
process is actually producing data as it should. The subprocess (pbzip2) produces
a large amount of data, but its output is only consumed slowly.
Normally, when the buffer limit is reached for a stream, reading from the
transport is paused until the consumer catches up, but here the reader has no
transport to pause.
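For context, the flow control that should kick in works roughly like this. The class below is a simplified model of how a stream reader applies backpressure, not asyncio's actual StreamReader code:

class DummyTransport:
    # Stand-in transport that just records pause/resume calls.
    def __init__(self):
        self.paused = False
    def pause_reading(self):
        self.paused = True
    def resume_reading(self):
        self.paused = False

class FlowControlledReader:
    # Simplified model of how a stream reader applies backpressure.
    def __init__(self, limit, transport=None):
        self._limit = limit
        self._transport = transport   # None is exactly the reported situation
        self._paused = False
        self._buffer = bytearray()

    def set_transport(self, transport):
        self._transport = transport

    def feed_data(self, data):
        self._buffer.extend(data)
        # Without a transport there is nothing to pause, so the buffer just
        # keeps growing while the consumer lags behind.
        if (self._transport is not None and not self._paused
                and len(self._buffer) > 2 * self._limit):
            self._transport.pause_reading()
            self._paused = True

    def read(self, n):
        data = bytes(self._buffer[:n])
        del self._buffer[:n]
        if (self._paused and self._transport is not None
                and len(self._buffer) <= self._limit):
            self._transport.resume_reading()
            self._paused = False
        return data

# With a transport wired up, feeding past 2 * limit pauses the producer;
# with transport=None (the reported situation) nothing ever gets paused.
reader = FlowControlledReader(limit=16000, transport=DummyTransport())
reader.feed_data(b'x' * 40000)
print(reader._transport.paused)   # True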
New submission from wabu:
Using `p = create_subprocess_exec(..., stdout=subprocess.PIPE, limit=...)`,
p.stdout has no transport set, so the underlying protocol is unable to pause
reading on the transport, resulting in high memory usage when input from
p.stdout is consumed slowly, even if a limit is set.
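The thread above mentions a workaround in the user's own code; presumably it amounts to wiring the stdout pipe transport to the reader by hand, along the lines of the sketch below. The helper name is made up, and the code pokes at internal attributes of asyncio.subprocess.Process and StreamReader, so it is only a stopgap for unpatched versions:

import asyncio
import asyncio.subprocess

@asyncio.coroutine
def subprocess_exec_with_flow_control(*args, limit=16000):
    # Hypothetical helper wrapping create_subprocess_exec() with the
    # workaround applied.
    p = yield from asyncio.create_subprocess_exec(
        *args, stdout=asyncio.subprocess.PIPE, limit=limit)
    # Hand the stdout pipe transport to the StreamReader so feed_data()
    # can pause it once the buffer exceeds the limit. On a patched asyncio
    # the reader already has a transport and nothing needs to be done.
    stdout_transport = p._transport.get_pipe_transport(1)
    if p.stdout._transport is None and stdout_transport is not None:
        p.stdout.set_transport(stdout_transport)
    return p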