So I'm trying to monitor a shell process (think, 'tail -f some.log') inside 
a Python program with a running asyncio event loop (all defaults).  If I 
write my own (complicated) "spawn_process()" method that uses pty.fork() I 
can do this:

import asyncio

loop = asyncio.get_event_loop()
fd = spawn_process(['/bin/sh', '-c', '/path/to/some/program'])
loop.add_reader(fd, print_output, fd)  # print_output() does exactly that
loop.run_forever()
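For reference, spawn_process() boils down to something like the following simplified sketch (the real one is messier); the blocking read loop at the bottom is just a stand-alone demo that the returned fd works, and '/bin/echo hello' is a stand-in command:

```python
import os
import pty

def spawn_process(argv):
    # Fork with a pseudo-terminal and exec the command in the child;
    # the parent gets back the pty master fd, ready for add_reader().
    pid, fd = pty.fork()
    if pid == 0:                      # child: become the program
        os.execvp(argv[0], argv)
    return fd                         # parent: readable master fd

# Demo without an event loop: read until the child closes the pty.
fd = spawn_process(['/bin/echo', 'hello'])
chunks = []
while True:
    try:
        data = os.read(fd, 1024)
    except OSError:                   # Linux raises EIO at pty EOF
        break
    if not data:
        break
    chunks.append(data)
os.close(fd)
output = b''.join(chunks).decode()    # pty output uses \r\n line ends
```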

...and it works OK but it feels wrong since folks went to the trouble of 
creating asyncio.create_subprocess_shell().  Assuming I write a function 
that just spawns a subprocess and returns the resulting object:

import asyncio

@asyncio.coroutine
def subprocess_shell(cmd, **kwds):
    proc = yield from asyncio.create_subprocess_shell(cmd, **kwds)
    return proc

How do I get the fileno() of 'proc.stdout' so I can pass it to 
loop.add_reader()?  I can't seem to find it anywhere in the object.  Maybe 
I'm just not looking in the right place?  I just want it to call a callback 
when the stdout/stderr of the process has output.  You can't use 
loop.add_reader() without a file descriptor...  Or perhaps I'm missing 
something.  Is there a "better way" to handle this kind of thing?
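As far as I can tell, proc.stdout here is an asyncio StreamReader rather than a plain file, so I gather the intended pattern is to read it with a coroutine instead of registering an fd callback.  An untested sketch of what I mean (written with the newer async/await spelling; 'echo hello' is just a stand-in command):

```python
import asyncio

async def watch(cmd):
    # proc.stdout is a StreamReader, not a file with a fileno():
    # read it with coroutines instead of add_reader() on an fd.
    proc = await asyncio.create_subprocess_shell(
        cmd, stdout=asyncio.subprocess.PIPE)
    lines = []
    while True:
        line = await proc.stdout.readline()
        if not line:                  # b'' means the pipe hit EOF
            break
        lines.append(line.decode().rstrip())
    await proc.wait()
    return lines

# 'echo hello' stands in for '/path/to/some/program'
lines = asyncio.run(watch('echo hello'))
```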

If I did the same thing with subprocess.Popen(), the resulting proc.stdout 
and proc.stderr would have fileno() methods that could be passed to 
loop.add_reader().  I assume there was a good reason why the same 
functionality was not added to asyncio.create_subprocess_shell()?
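For comparison, here's the Popen arrangement I mean, as a finite sketch: the command just echoes one line and exits, and the callback stops the loop at EOF so the demo terminates:

```python
import asyncio
import subprocess

# Popen's stdout is a real pipe with a fileno(), so it plugs
# straight into loop.add_reader().
proc = subprocess.Popen(['/bin/sh', '-c', 'echo hello'],
                        stdout=subprocess.PIPE)
loop = asyncio.new_event_loop()
received = []

def print_output(f):
    data = f.readline()
    if data:
        received.append(data.decode().rstrip())
    else:                             # EOF: unregister and stop
        loop.remove_reader(f.fileno())
        loop.stop()

loop.add_reader(proc.stdout.fileno(), print_output, proc.stdout)
loop.run_forever()
loop.close()
proc.wait()
```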
