On Fri, Jan 31, 2014 at 5:02 PM, Victor Stinner <[email protected]> wrote:
> I fixed the remaining FIXME in proactor. There are no more known issues
> in my subprocess_stream branch. New change set:
> https://codereview.appspot.com/52320045/#ps80001
>
> In fact, the changes to proactor_events.py can be applied in a separate
> commit (before adding subprocess.py). It fixes the pause/resume
> feature of the protocol. The protocol is now paused if more than 64 KB
> "will be written", where 64 KB is the total of the buffer *and* the
> size of the current pending write.

Sounds good -- merge that into the tulip default branch and commit it to
the cpython repo (as well as the few other tweaks I see you have).

> I tested pass_fds. It looks possible to connect more pipes (read or
> write, other than stdin, stdout and stderr) after the creation of the
> process.

This is great -- can you add it to the docs too?

> Example connecting the write end of a pipe:
> ---
> import asyncio
> import os, sys
> from asyncio.subprocess import *
>
> code = """
> import os, sys
> fd = int(sys.argv[1])
> data = os.read(fd, 1024)
> sys.stdout.buffer.write(data)
> """
>
> loop = asyncio.get_event_loop()
>
> @asyncio.coroutine
> def task():
>     rfd, wfd = os.pipe()
>     args = [sys.executable, '-c', code, str(rfd)]
>     proc = yield from create_subprocess_exec(*args, pass_fds={rfd},
>                                              stdout=PIPE)
>
>     pipe = open(wfd, 'wb', 0)
>     transport, protocol = yield from loop.connect_write_pipe(
>         asyncio.Protocol, pipe)
>     transport.write(b'data')
>
>     stdout, stderr = yield from proc.communicate()
>     print("stdout = %r" % stdout.decode())
>
> loop.run_until_complete(task())
> ---
>
> Example connecting the read end of a pipe:
> ---
> import asyncio
> import os, sys
> from asyncio.subprocess import *
>
> code = """
> import os, sys
> fd = int(sys.argv[1])
> os.write(fd, b'data')
> os.close(fd)
> """
>
> loop = asyncio.get_event_loop()
>
> @asyncio.coroutine
> def task():
>     rfd, wfd = os.pipe()
>     args = [sys.executable, '-c', code, str(wfd)]
>
>     pipe = open(rfd, 'rb', 0)
>     reader = asyncio.StreamReader(loop=loop)
>     protocol = asyncio.StreamReaderProtocol(reader, loop=loop)
>     transport, _ = yield from loop.connect_read_pipe(lambda: protocol,
>                                                      pipe)
>
>     proc = yield from create_subprocess_exec(*args, pass_fds={wfd})
>     yield from proc.wait()
>
>     os.close(wfd)
>     data = yield from reader.read()
>     print("read = %r" % data.decode())
>
> loop.run_until_complete(task())
> ---
>
> If the "os.close(wfd)" line is removed, reader.read() blocks forever.
> Is it a bug?

No, that's expected -- the read end of a pipe won't return EOF until the
last fd referencing the write end *anywhere* is closed. That includes
the parent process. Might be worth mentioning in the docs, though. :-)

> "from asyncio.subprocess import *" is "safe": it imports the
> create_subprocess_exec/shell functions and the PIPE, STDOUT and
> DEVNULL constants.

I left some comments related to this in the tulip tracker, issue 115.

-- 
--Guido van Rossum (python.org/~guido)
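[Editor's note: not part of the thread. The EOF rule Guido describes -- the read end of a pipe only returns EOF once *every* fd referring to the write end, in any process, is closed -- can be demonstrated without asyncio at all, using plain `os.pipe()` in a single process:]

```python
import os

# Create a pipe entirely within this process.
rfd, wfd = os.pipe()
os.write(wfd, b'data')

# Buffered bytes are readable while the write end is still open...
print(os.read(rfd, 1024))

# ...but a further read here would block forever, because the kernel
# still sees a live writer (our own wfd).  Only after every fd that
# refers to the write end is closed does the reader see EOF:
os.close(wfd)
print(os.read(rfd, 1024))  # an empty bytes object means EOF
os.close(rfd)
```

This is exactly why removing `os.close(wfd)` in the second example above makes `reader.read()` hang: the parent keeps a write-end fd alive even though only the child writes.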

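[Editor's note: not part of the thread. The pause/resume feature Victor mentions can be observed from protocol code via the `pause_writing()`/`resume_writing()` callbacks. The following is a minimal standalone sketch for the Unix event loop, using `set_write_buffer_limits()` with an artificially tiny threshold so a single oversized write triggers a pause; the 64 KB "buffer plus pending write" accounting is specific to Victor's proactor change and is not reproduced here. Written with modern `async`/`await` rather than the `@asyncio.coroutine`/`yield from` style of the era.]

```python
import asyncio
import os

class Tracker(asyncio.Protocol):
    """Records the transport's pause_writing()/resume_writing() calls."""

    def __init__(self):
        self.events = []

    def pause_writing(self):
        self.events.append('pause')

    def resume_writing(self):
        self.events.append('resume')

async def main():
    loop = asyncio.get_running_loop()
    rfd, wfd = os.pipe()
    transport, proto = await loop.connect_write_pipe(
        Tracker, open(wfd, 'wb', 0))

    # Tiny threshold: any data left in the transport's buffer pauses
    # the protocol.
    transport.set_write_buffer_limits(high=0, low=0)

    # Write more than the OS pipe can absorb at once: the excess stays
    # buffered in the transport, which then calls pause_writing().
    transport.write(b'x' * 200000)
    assert proto.events == ['pause']

    # Drain the read end; once the transport has flushed the rest of
    # its buffer it calls resume_writing().
    while transport.get_write_buffer_size() > 0:
        os.read(rfd, 65536)
        await asyncio.sleep(0.05)

    transport.close()
    os.close(rfd)
    return proto.events

events = asyncio.run(main())
print("flow control events:", events)
```

A protocol that buffers outgoing data itself (like the stream writer) would stop feeding the transport on `pause_writing()` and continue on `resume_writing()`.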