OK, I think I get it now. I've worked up a simpler version of the
patch, because I realized that the issue in _read_ready() is just that
we don't want the EIO exception to be logged. Your test still passes.
Let me know if your real code works with this version.

https://codereview.appspot.com/48350043/ (patch set 2)

raw download (for hg import):
https://codereview.appspot.com/download/issue48350043_20001.diff
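
To give the flavor of it (this is just the shape of the change, not the
literal diff -- see the links above for that): when reading from a pty
master, EIO just means the slave side was closed, so the transport's
error path shouldn't log it as an unexpected error:

def _fatal_error(self, exc, message='Fatal error on pipe transport'):
    # EIO from a pty master means the slave side was closed;
    # close the transport, but don't log the exception.
    if not (isinstance(exc, OSError) and exc.errno == errno.EIO):
        logger.error('%s: %r', message, exc)
    self._close(exc)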

On Tue, Jan 7, 2014 at 12:53 PM, Jonathan Slenders <[email protected]> wrote:
>
>
>
> 2014/1/7 Guido van Rossum <[email protected]>
>>
>> On Mon, Jan 6, 2014 at 10:57 PM, Jonathan Slenders
>> <[email protected]> wrote:
>> > It was the code I meant, not the test.
>> >
>> > But I just tested your revised version, and now the unit tests succeed
>> > here
>> > as well. Thanks.
>>
>> OK, but I'm not at all convinced that catching EIO and treating it the
>> same as EOF is correct. I don't even understand why I get that EIO.
>>
>> Could you show me some synchronous code (maybe using threads) showing
>> how ptys are expected to work?
>
>
> My guess is that we should usually close the master side first.
>
> I found this at books.google.com, in The Linux Programming Interface, p. 1388:
> """
> If we close all file descriptors referring to the pseudoterminal master,
> then:
> - if the slave device has a controlling process, a SIGHUP signal is sent to
> that process.
> - a read() from the slave device returns end-of-file
> - a write to the slave device fails with error EIO (on some other UNIX
> implementations, write fails with the error ENXIO in this case.)
>
> If we close all file descriptors referring to the pseudoterminal slave,
> then:
> - a read() from the master device fails with error EIO (on some other UNIX
> implementations, a read() returns end-of-file in this case.)
> """
>
> You are always going to attach a client application (e.g. an editor) on the
> slave side and have the terminal application (e.g. xterm) on the master
> side.
>
> By default, you'll have the pty in line-buffered mode. This means that
> characters written on the master will be echoed back on the master (to be
> displayed). Only after Enter has been pressed does the whole line become
> available to be read on the slave. This way, the pseudo terminal implements
> some line editing functionality itself. A character written on the slave is
> always immediately available on the master.
>
> In raw mode, every key press on the master is immediately sent to the slave
> side. In that case, the application on the slave side is also responsible
> for displaying it, and should probably send feedback by echoing back the
> received characters.
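>
> A rough synchronous illustration of both modes:
>
> import os, pty, tty
>
> master, slave = pty.openpty()
>
> # Canonical (line-buffered) mode: input written on the master is
> # echoed back to the master, and the slave sees whole lines only.
> os.write(master, b'hello\n')
> print(os.read(master, 1024))   # the echo, typically b'hello\r\n'
> print(os.read(slave, 1024))    # the complete line: b'hello\n'
>
> # Raw mode: every byte reaches the slave immediately, and nothing
> # is echoed; the slave application has to echo by itself.
> tty.setraw(slave)
> os.write(master, b'x')
> print(os.read(slave, 1024))    # b'x', right away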
>
>>
>> > The master should be non-blocking indeed. In my project I called
>> > io.open(self.master, 'wb', 0)
>>
>> There seems to be confusion here -- the 0 means non-*buffering*; it
>> has no effect on *blocking*.
>>
> Sorry, you're right. I was confused.
>
>>
>> > Something related, about blocking vs. non-blocking: I don't know how
>> > these file descriptors work exactly.
>>
>> Then how can you write working code using them? (Not a rhetorical
>> question.)
>
>
> What I meant is that often I feel there are still parts I don't fully
> understand, but I know enough to build something useful.
>
>> > But I was now also able to use
>> > connect_read_pipe to read from stdin which was really nice for me.
>>
>> Hm, this is really falling quite far outside the intended use case for
>> asyncio. (Though well within that for the selectors module.)
>>
>> > (I really didn't like the executor solution that much.)
>>
>> I'm not aware of that solution -- is it in the docs somewhere?
>
>
> Oh, just an executor running a blocking read in a while loop:
>
> import sys
>
> def in_executor():
>     while True:
>         c = sys.stdin.read(1)
>         process_char(c)
>
> # kicked off with: loop.run_in_executor(None, in_executor)
>
>
>> > However, if you make stdin non-blocking,
>> > stdout will automatically also become non-blocking.
>>
>> Yeah, I see that too. I can't explain it; I suspect it's some obscure
>> implementation detail of tty (or pty) devices. :-(
>
>
> Maybe, like with openpty(), stdin and stdout can actually be the same file
> descriptor underneath if they are attached to a pseudo terminal, even if
> they have different numbers. I learned that if you want to attach a child
> process to a newly created pseudo terminal, you do it by calling openpty,
> take the slave fd, fork your own process, and in the fork use os.dup2 to
> copy that file descriptor to 0 and 1 (for stdin and stdout respectively).
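>
> Roughly like this (a simplified sketch; pty.fork() in the standard
> library handles the controlling-terminal details properly, and
> 'some_editor' is just a placeholder):
>
> import os
>
> master, slave = os.openpty()
> pid = os.fork()
> if pid == 0:                   # child
>     os.close(master)
>     os.setsid()                # new session (pty.fork() also does TIOCSCTTY)
>     os.dup2(slave, 0)          # stdin
>     os.dup2(slave, 1)          # stdout
>     os.dup2(slave, 2)          # stderr, usually redirected as well
>     os.execvp('some_editor', ['some_editor'])
> else:                          # parent talks to the child via master
>     os.close(slave)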
>
>
>> > But writing to
>> > non-blocking stdout seems like a bad idea (you get "write could not
>> > complete without blocking" errors everywhere).
>>
>> So wrap a transport/protocol around stdin/stdout. That would seem to be
>> the asyncio way to do it. You should probably wrap those in a
>> StreamReader/StreamWriter pair -- the source code (streams.py) shows
>> how to create those by hand, which is an intended use.
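>>
>> Untested sketch of that by-hand wiring (FlowControlMixin is the
>> semi-private helper streams.py itself uses on the write side):
>>
>> import asyncio, sys
>>
>> @asyncio.coroutine
>> def stdio_streams(loop):
>>     reader = asyncio.StreamReader(loop=loop)
>>     yield from loop.connect_read_pipe(
>>         lambda: asyncio.StreamReaderProtocol(reader, loop=loop),
>>         sys.stdin)
>>     transport, protocol = yield from loop.connect_write_pipe(
>>         lambda: asyncio.streams.FlowControlMixin(loop=loop),
>>         sys.stdout)
>>     writer = asyncio.StreamWriter(transport, protocol, None, loop)
>>     return reader, writer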
>
>
> Thanks! That sounds really helpful.
>
>> > So what I do right now is to
>> > make stdout blocking again before writing, during a repaint of the app,
>> > and make it non-blocking again after writing. (Because this all happens
>> > in the same thread as the event loop, stdout is non-blocking again by
>> > the time the loop regains control.)
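>> >
>> > Roughly like this, with a small hand-rolled fcntl helper:
>> >
>> > import fcntl, os
>> >
>> > def set_blocking(fd, blocking):
>> >     # Toggle the O_NONBLOCK flag on the file descriptor.
>> >     flags = fcntl.fcntl(fd, fcntl.F_GETFL)
>> >     if blocking:
>> >         flags &= ~os.O_NONBLOCK
>> >     else:
>> >         flags |= os.O_NONBLOCK
>> >     fcntl.fcntl(fd, fcntl.F_SETFL, flags)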
>>
>> Eew! :-(
>
> Yes, I know.
>
>>
>> > It works nicely, but it would be nice to have a symmetrical
>> > _set_blocking method available in unix_events. (And why wouldn't we
>> > make them public?)
>>
>> By the time you are passing your own file descriptors you should be
>> mature enough to know how to make them non-blocking. Plus this is all
>> UNIX-specific... Plus, create_pipe_transport() already calls it for
>> you.
>>
>> Overall, perhaps you should just use the selectors module instead of
>> asyncio? You might be happier with that...
>>
>> --
>> --Guido van Rossum (python.org/~guido)
>
>



-- 
--Guido van Rossum (python.org/~guido)
