2014-01-29 Guido van Rossum <[email protected]>:
> - Ideally the APIs would take the same arguments as subprocess_shell/exec,
> except for the protocol factory. The return value may be different.

The current API in BaseEventLoop (subprocess_exec/subprocess_shell)
solves an old and common issue with the subprocess API: shell=True
expects a string, whereas shell=False expects a list of strings.
But... subprocess.Popen also accepts a list with shell=True, which
leads to surprising behaviour: subprocess.call(['echo', 'Hello
World!'], shell=True) does not display anything, because the 'Hello
World!' parameter is *not* passed to the echo command. In fact, it is
similar to:

$ bash -c 'echo "Hello World!"'
Hello World!
$ bash -c 'echo' 'Hello World!'
<nothing>

(shell=False doesn't accept a string, it always requires a list, which is better)
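
To make the surprise concrete, here is a short Python illustration
(POSIX only; the handling of a list with shell=True differs on
Windows):

import subprocess

# shell=False with a list: both items reach echo
subprocess.call(['echo', 'Hello World!'])
# -> Hello World!

# shell=True with a list: only the first item is the command given to
# /bin/sh, the rest become extra shell arguments, so echo runs with no
# arguments and only an empty line is printed
subprocess.call(['echo', 'Hello World!'], shell=True)
# -> (empty line)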

See for example issues:
http://bugs.python.org/issue7839
http://bugs.python.org/issue13197

Well, asyncio could maybe use a unified create_subprocess() API with a
shell parameter, but implement #7839: raise an error for
create_subprocess(list, shell=True).
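
A rough sketch of the check I have in mind (create_subprocess() is
only a hypothetical name here, nothing from the current code):

def create_subprocess(args, shell=False, **kwds):
    # Hypothetical unified entry point, sketch only
    if shell and not isinstance(args, str):
        # http://bugs.python.org/issue7839: a list with shell=True is
        # almost always a bug, so reject it
        raise ValueError("args must be a string when shell=True")
    if not shell and isinstance(args, str):
        raise ValueError("args must be a sequence of strings when shell=False")
    # ... then dispatch to subprocess_shell() or subprocess_exec() ...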

> - We need access to subprocess features like its pid, its exit status (once
> it's exited), and we want to be able to send it signals and wait for its
> completion.

Somewhere, we need to draw a line between subprocess and asyncio:
subprocess is designed for polling (it has, for example, a poll()
method), whereas asyncio is more event driven.
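
To make the contrast concrete: the stdlib style below is runnable as
is, while the event driven style is shown as a comment (sketch,
assuming a proc object with a wait() coroutine as in my patch):

import subprocess
import time

proc = subprocess.Popen(['sleep', '1'])
# subprocess style: the caller keeps asking "are you done yet?"
while proc.poll() is None:
    time.sleep(0.1)
print('exit status:', proc.returncode)

# asyncio style (inside a coroutine): the task is simply suspended and
# the event loop wakes it up when the child exits
#     returncode = yield from proc.wait()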

> - We want to support separate streams wrapping the pipes for stdin, stdout,
> and stderr, but we may not need all three, and it's probably asking for
> trouble to return all three by default, since it's easy to get a deadlock if
> the same task alternates between reading from stdout and writing to stdin
> (however, if you use separate tasks it's a  manageable problem).

I'm in favor of not creating any pipe by default: I prefer an explicit
stdout=subprocess.PIPE. By the way, as of Python 3.3 there are 3
special values for these parameters: PIPE, STDOUT, and the new
DEVNULL.
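
For example (sketch only, assuming a create_subprocess_exec()-style
coroutine as in my patch; the name and signature are not final):

import asyncio
import subprocess

@asyncio.coroutine
def run_ls():
    proc = yield from asyncio.create_subprocess_exec(
        'ls', '-l',
        stdout=subprocess.PIPE,      # create a pipe only where asked
        stderr=subprocess.DEVNULL)   # new in Python 3.3: discard output
    data = yield from proc.stdout.read()
    yield from proc.wait()
    return data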

By the way, my proposal doesn't handle close_fds or pass_fds. Should
we create a reader and/or writer for each FD shared with the child
process? Or at least provide an API to create such a stream?
Currently, pipe_data_received() drops data from unknown pipes.
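
For reference, at the protocol level the data does arrive tagged with
the fd number, so an application writing its own SubprocessProtocol
could handle extra fds itself (illustrative sketch only):

import asyncio

class FdProtocol(asyncio.SubprocessProtocol):
    def pipe_data_received(self, fd, data):
        # fd 1 is stdout, fd 2 is stderr; anything else would be an
        # extra fd (e.g. from pass_fds), which the stream wrapper in
        # the current patch simply ignores
        print('data from fd %d: %r' % (fd, data))

    def process_exited(self):
        print('process exited')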

> - We'd like the API to either resemble the event loop subprocess API (which
> we're not changing), or the existing non-Tulip subprocess API (the
> subprocess module, or possibly the various subprocess management functions
> in the os module), rather than designing yet another API.

I would prefer an API closer to the existing subprocess module, since
that module has been available and widely used for more than 5 years.
A similar API makes the transition from subprocess to asyncio easier.

"Similar" doesn't mean identical. For example, in my patch, wait()
doesn't take a timeout: you have to use asyncio.wait_for(proc.wait(),
timeout).
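
For example (assuming the Process object from my patch, with its
wait() and kill() methods):

import asyncio

@asyncio.coroutine
def wait_with_timeout(proc):
    # subprocess would be proc.wait(timeout=5.0); here the timeout is
    # handled by the generic asyncio.wait_for() helper instead
    try:
        return (yield from asyncio.wait_for(proc.wait(), timeout=5.0))
    except asyncio.TimeoutError:
        proc.kill()
        return (yield from proc.wait())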

Victor
