On 23 January 2014 17:12, Guido van Rossum <[email protected]> wrote:
> loop.subprocess_xxx() will give your protocol a callback when the
> process exits (which may be earlier or later than when the pipes are
> closed).
Sure. But my feeling is that waiting for the pipe to be closed is good
enough for most applications. You care more about stdin/stdout than you do
about the actual process, unless you are writing a daemon process
supervisor kind of thing (upstart/systemd).
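For reference, the two events map to distinct SubprocessProtocol callbacks;
a minimal sketch, with the prints standing in for real handling:

import asyncio

class WatcherProtocol(asyncio.SubprocessProtocol):
    def pipe_connection_lost(self, fd, exc):
        # called when one pipe closes (fd 0=stdin, 1=stdout, 2=stderr)
        print('pipe %d closed' % fd)

    def process_exited(self):
        # called when the child itself terminates, which may happen
        # before or after the pipes are closed
        print('child exited')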
> It also manages connecting the pipes for you.
Well, it gives you one thing but expects another in return. It connects
the pipes, but expects a protocol factory. I don't want to write a
protocol factory just to be able to run a subprocess and capture its
output, thank you very much.
Also, this confuses me:
    def subprocess_exec(self, protocol_factory, *args,
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        **kwargs):
So you have a single protocol_factory, but potentially 3 pipes. Not only are
you forced to implement your own protocol, you also have to use the same
class to handle stdin, stdout, and stderr. I would expect to have 3
different protocols, one for each pipe.
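To be concrete, the single protocol sees all three pipes through one
callback and must dispatch on the fd argument itself; a minimal sketch
(handle_stdout and handle_stderr are hypothetical):

import asyncio

class AllPipesProtocol(asyncio.SubprocessProtocol):
    def pipe_data_received(self, fd, data):
        # one callback for every pipe; fd says which one:
        # 1 is the child's stdout, 2 is its stderr
        if fd == 1:
            handle_stdout(data)
        elif fd == 2:
            handle_stderr(data)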
> But you don't *have* to use it.
>
Sure. I think it's easier for me to write my own asyncio-friendly
subprocess.Popen wrapper instead.
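Something like this minimal, untested sketch, say (the names run_and_read,
cmd and do_something_with are placeholders):

import subprocess
from asyncio import coroutine, get_event_loop
from asyncio.streams import StreamReader, StreamReaderProtocol

@coroutine
def run_and_read(cmd):
    loop = get_event_loop()
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    reader = StreamReader(loop=loop)
    # hook the child's stdout pipe up to an asyncio stream
    yield from loop.connect_read_pipe(
        lambda: StreamReaderProtocol(reader, loop=loop), proc.stdout)
    while True:
        line = yield from reader.readline()
        if not line:
            break
        do_something_with(line)
    # stdout is at EOF, so the child has (almost certainly) exited
    proc.wait()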
Sorry, I didn't mean to criticize these APIs this late in the release
process, but I hadn't noticed these methods, and none of the Tulip examples
use them, so they had stayed under my radar.
> On Thu, Jan 23, 2014 at 8:40 AM, Gustavo Carneiro <[email protected]>
> wrote:
> > It's funny that I've been using subprocesses without even being aware of
> > loop.subprocess_exec(). I just followed the child_process.py example in
> > Tulip, and it works fine. Why should we use loop.subprocess_xxx instead of
> > plain old subprocess.Popen followed by connecting the pipes to asyncio
> > streams?
> >
> >
> > On 23 January 2014 16:17, Phil Schaf <[email protected]> wrote:
> >>
> >> On Thursday, 23 January 2014 16:48:46 UTC+1, Guido van Rossum wrote:
> >>>
> >>> Read the source code of asyncio/streams.py. There are helper classes
> >>> that should let you do it. Please post the solution here.
> >>> --
> >>> --Guido van Rossum (python.org/~guido)
> >>
> >> I've been deep inside that source for some hours now, but since I've
> >> never done multiple inheritance, only your comment convinced me that I
> >> can indeed marry SubprocessProtocol and a StreamReaderProtocol.
> >>
> >> import sys
> >> from functools import partial
> >> from asyncio import coroutine, get_event_loop
> >> from asyncio.protocols import SubprocessProtocol
> >> from asyncio.streams import StreamReader, StreamReaderProtocol
> >>
> >> cmd = […]
> >>
> >> @coroutine
> >> def do_task(msg):
> >>     loop = get_event_loop()
> >>     reader = StreamReader(float('inf'), loop)
> >>
> >>     transport, proto = yield from loop.subprocess_exec(
> >>         partial(StdOutReaderProtocol, reader, loop=loop), *cmd)
> >>
> >>     stdin = transport.get_pipe_transport(0)
> >>     stdin.write(msg)
> >>     stdin.write_eof()  # which of those is actually necessary? only eof? only close?
> >>     stdin.close()
> >>
> >>     # would be nice to do "for line in iter(reader.readline, b'')",
> >>     # but that is not possible with coroutines
> >>     while True:
> >>         line = yield from reader.readline()
> >>         if not line:
> >>             break
> >>         do_something_with(line)
> >>
> >> class StdOutReaderProtocol(StreamReaderProtocol, SubprocessProtocol):
> >>     def pipe_data_received(self, fd, data):
> >>         if fd == 1:
> >>             self.data_received(data)
> >>         else:
> >>             print('stderr from subprocess:', data.decode(),
> >>                   file=sys.stderr, end='')
> >>
> >> That was completely strange, though. IMHO there should be an easier way
> >> to do it than figuring this one out.
> >>
> >> Thanks for your encouragement!
> >>
> >> – Phil
> >
> >
> >
> >
> > --
> > Gustavo J. A. M. Carneiro
> > Gambit Research LLC
> > "The universe is always one step beyond logic." -- Frank Herbert
>
>
>
> --
> --Guido van Rossum (python.org/~guido)
>
--
Gustavo J. A. M. Carneiro
Gambit Research LLC
"The universe is always one step beyond logic." -- Frank Herbert