I agree with Gustavo.

But: we should still have the option to use protocols when required. Maybe 
keep connect_read/write_pipe as they are, but change the constructors of 
StreamReader and StreamWriter and decouple them, so you could write e.g.:

stdin = StreamWriter(proc.stdin, loop=loop)

What I disliked and didn't understand about StreamReader and StreamWriter 
when handling the master side of a pseudo-tty was the way they were 
coupled: StreamReaderProtocol creates a StreamWriter instance itself. In my 
case that didn't work, because the read and write ends were different pipes.
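
Roughly what I mean, as an untested sketch built only from the existing 
low-level pieces (open_streams is just a made-up helper name here, and proc 
is a subprocess.Popen created with stdin=PIPE and stdout=PIPE): the reader 
and the writer are created independently, each on its own pipe, so nothing 
forces them onto one transport. The same idea should apply to a pty master 
by passing two dup()'ed file objects for the master fd.

import asyncio

@asyncio.coroutine
def open_streams(proc, loop):
    # Read side: a StreamReader fed by its own StreamReaderProtocol
    # attached to the child's stdout pipe.
    reader = asyncio.StreamReader(loop=loop)
    yield from loop.connect_read_pipe(
        lambda: asyncio.StreamReaderProtocol(reader, loop=loop), proc.stdout)

    # Write side: a completely separate transport on the child's stdin,
    # wrapped in a StreamWriter that is not tied to the reader at all.
    # (Caveat: drain()/flow control may need a protocol that implements
    # the pause/resume helpers rather than the bare asyncio.Protocol.)
    transport, protocol = yield from loop.connect_write_pipe(
        asyncio.Protocol, proc.stdin)
    writer = asyncio.StreamWriter(transport, protocol, None, loop)

    return reader, writer

With decoupled constructors, all of that boilerplate would collapse into the 
two obvious one-liners, like the StreamWriter(proc.stdin, loop=loop) call 
above.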

On Friday, 24 January 2014 13:00:18 UTC+1, Gustavo Carneiro wrote:
>
> On 24 January 2014 11:26, Victor Stinner <[email protected]> wrote:
>
>> Hi,
>>
>> I now understand that protocols should be avoided: they are low-level, and
>> the stream reader/writer (the high-level API) should be preferred. Ok, but
>> StreamReader and StreamWriter cannot be used with subprocess because
>> it's not possible to choose the protocols used for stdin, stdout and
>> stderr: WriteSubprocessPipeProto and ReadSubprocessPipeProto are used.
>>
>
> IMHO, loop.subprocess_exec should take up to 3 protocol factories, not 
> just one, for this precise reason.  As it is now, this API is really ugly 
> IMHO.
>
> I didn't want to sound negative, and I was just going to ignore this 
> issue.  But if you are going to redesign the tulip subprocess APIs, then 
> let me chip in with what I think they should look like, at a high level:
>
>   1. Get rid of loop.subprocess_xxx() completely.  We don't need it (but 
> see 3.).  Instead use subprocess.Popen as normal.
>
>   2. Create utility functions streams.connect_read_pipe() and 
> streams.connect_write_pipe(), as found in examples/child_process.py.
>
>   3. Create a utility function (or loop method) that takes a Popen object 
> and returns a future that completes when the process has terminated (using 
> SIGCHLD and whatnot).
>
> I haven't considered whether the above design works on Windows or not.  I 
> just don't have enough Windows experience.  But I think it is much cleaner, 
> since you can use the subprocess.Popen API that we are all used to, and 
> just take special care if you need to redirect stdxxx.  Getting a 
> notification when a process has terminated is an orthogonal issue; you 
> don't need to put it into the subprocess function.
>
> So here's a hypothetical example:
>
> def program_exited(future):
>     print("program with pid {} exited with code {}".format(
>         future.proc.pid, future.exitcode))
>
> @asyncio.coroutine
> def doit(loop):
>     proc = subprocess.Popen("cat", stdout=subprocess.PIPE,
>                             stdin=subprocess.PIPE)
>
>     stdin = asyncio.connect_write_pipe(proc.stdin, loop=loop)   # returns a StreamWriter
>     stdout = asyncio.connect_read_pipe(proc.stdout, loop=loop)  # returns a StreamReader
>
>     completed = loop.watch_subprocess(proc)
>     completed.add_done_callback(program_exited)
>
>     stdin.writelines([b"hello", b"world"])
>     while 1:
>          line = yield from stdout.readline()
>          if not line:
>             break
>          print(line)
>
> Notice that the programmer doesn't need to be aware of protocols or 
> transports.  If protocols and transports are a lower level layer, then we 
> need a higher level API that completely hides them.
>
> Just my 2 cents.
>
> -- 
> Gustavo J. A. M. Carneiro
> Gambit Research LLC
> "The universe is always one step beyond logic." -- Frank Herbert
>  
