The reason the reader and writer don't expose their FDs is that there's already
a handler registered for them, so registering your own would have
disappointing effects.
TBH I don't see anything wrong with your first version -- if you don't need
the full treatment you shouldn't have to pay for it. :-)
But if you want to use asyncio's create_subprocess_shell() you can just
write a coroutine that reads from the reader and prints its output, and run
that coroutine:
@asyncio.coroutine
def tail(reader):
    while True:
        data = yield from reader.read(8192)
        if not data:
            break
        # here you print data (note it's a bytes object)
@asyncio.coroutine
def setup():
    proc = yield from asyncio.create_subprocess_shell(
        ......, stdout=asyncio.subprocess.PIPE)
    yield from tail(proc.stdout)

asyncio.get_event_loop().run_until_complete(setup())

(Note that create_subprocess_shell() returns a Process object; when you pass
stdout=PIPE, its .stdout attribute is the StreamReader to tail.)
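For reference, here's that streams pattern as a self-contained script on a current Python, where async def/await replaced the old @asyncio.coroutine/yield from style and asyncio.run() replaces the explicit loop calls. The shell command is just an illustrative stand-in:

```python
import asyncio


async def tail(reader):
    # Read chunks until EOF; each chunk is a bytes object.
    while True:
        data = await reader.read(8192)
        if not data:
            break
        print(data.decode(), end="")


async def main():
    # stdout=PIPE is what makes proc.stdout a StreamReader.
    proc = await asyncio.create_subprocess_shell(
        "echo one; echo two",
        stdout=asyncio.subprocess.PIPE)
    await tail(proc.stdout)
    await proc.wait()


asyncio.run(main())
```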
There's also a lower-level subprocess API on the event loop; it lets you
write a SubprocessProtocol subclass with a pipe_data_received()
method that prints your data:
class TailProtocol(asyncio.SubprocessProtocol):
    def pipe_data_received(self, fd, data):
        # here you print data (note it's a bytes object)
@asyncio.coroutine
def setup():
    loop = asyncio.get_event_loop()
    transport, protocol = yield from loop.subprocess_shell(TailProtocol, cmd)
The termination condition in this case is a little more complex; you probably
want to create a Future in TailProtocol.__init__() that is completed by
the connection_lost() callback:
class TailProtocol(asyncio.SubprocessProtocol):
    def __init__(self):
        self.complete = asyncio.Future()

    def pipe_data_received(self, fd, data):
        # etc.

    def connection_lost(self, exc):
        self.complete.set_result(exc)

@asyncio.coroutine
def setup():
    loop = asyncio.get_event_loop()
    transport, protocol = yield from loop.subprocess_shell(TailProtocol, cmd)
    yield from protocol.complete

asyncio.get_event_loop().run_until_complete(setup())
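As a sketch of how that termination dance looks end to end on a current Python: asyncio's SubprocessProtocol delivers output through pipe_data_received(fd, data) and signals completion through pipe_connection_lost() and process_exited(), and those two callbacks can arrive in either order, so the Future below is resolved only when both stdout EOF and process exit have been seen. The echo command is just an illustrative stand-in:

```python
import asyncio


class TailProtocol(asyncio.SubprocessProtocol):
    def __init__(self, done):
        self.done = done          # Future resolved when the child is finished
        self._stdout_closed = False
        self._exited = False

    def pipe_data_received(self, fd, data):
        # fd is 1 for the child's stdout, 2 for its stderr; data is bytes.
        if fd == 1:
            print(data.decode(), end="")

    def pipe_connection_lost(self, fd, exc):
        # EOF on stdout: the child has finished writing.
        if fd == 1:
            self._stdout_closed = True
            self._maybe_finish()

    def process_exited(self):
        self._exited = True
        self._maybe_finish()

    def _maybe_finish(self):
        # Only finish once both conditions hold; the callbacks race.
        if self._stdout_closed and self._exited and not self.done.done():
            self.done.set_result(None)


async def main():
    loop = asyncio.get_running_loop()
    done = loop.create_future()
    transport, protocol = await loop.subprocess_shell(
        lambda: TailProtocol(done), "echo hello")
    await done
    transport.close()


asyncio.run(main())
```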
Hope this isn't too overwhelming for you. :-)
On Sat, May 17, 2014 at 2:06 PM, Dan McDougall <[email protected]> wrote:
> So I'm trying to monitor a shell process (think, 'tail -f some.log')
> inside a Python program with a running asyncio event loop (all defaults).
> If I write my own (complicated) "spawn_process()" method that uses
> pty.fork() I can do this:
>
> loop = asyncio.get_event_loop()
> fd = spawn_process(['/bin/sh', '-c', '/path/to/some/program'])
> loop.add_reader(fd, print_output, fd) # print_output() does exactly that
> loop.run_forever()
>
> ...and it works OK but it feels wrong since folks went to the trouble of
> creating asyncio.create_subprocess_shell(). Assuming I write a function
> that just spawns a subprocess and returns the resulting object:
>
> @asyncio.coroutine
> def subprocess_shell(cmd, **kwds):
> proc = yield from asyncio.create_subprocess_shell(cmd, **kwds)
> return proc
>
> How do I get the fileno() of 'proc.stdout' so I can pass it to
> loop.add_reader()? I can't seem to find it anywhere in the object. Maybe
> I'm just not looking in the right place? I just want it to call a callback
> when the stdout/stderr of the process has output. You can't use
> loop.add_reader() without a file descriptor... Or perhaps I'm missing
> something. Is there a "better way" to handle this kind of thing?
>
> If I did the same thing with subprocess.Popen() the resulting proc.stdout
> and proc.stderr will have fileno() methods that can be passed to
> loop.add_reader(). I assume there was a good reason why the same
> functionality was not added to asyncio.create_subprocess_shell()?
>
--
--Guido van Rossum (python.org/~guido)