Hello,

Following the discussion at 
https://groups.google.com/forum/?fromgroups#!topic/python-tulip/iGPv24gTpAI, 
I've been working on a small library for async access to files through a 
thread pool. I've been aiming to emulate the existing file API as much as 
possible:

f = yield from aiofiles.open('test.bin', mode='rb')
try:
    data = yield from f.read(512)
finally:
    yield from f.close()

I've run into two difficulties. First, it's hard for me to tell which 
calls may actually block (does isatty() block? does seekable()? I think 
it does) and which ones don't have to go through an executor at all. But 
that's a question for another day. :)
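
For reference, delegating one of the possibly-blocking calls to the thread 
pool looks roughly like this (a simplified sketch; the wrapper class and 
attribute names here are just illustrative, not the library's actual code):

import asyncio

class AsyncFileWrapper:
    """Hypothetical wrapper around a regular file object."""

    def __init__(self, file):
        self._file = file

    @asyncio.coroutine
    def seekable(self):
        # Dispatch the possibly-blocking call to the loop's default
        # thread pool executor instead of calling it directly.
        loop = asyncio.get_event_loop()
        return (yield from loop.run_in_executor(None, self._file.seekable))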

The second is that certain nifty file operations can't really be ported to 
the async world; context managers, for example. A file close may block, I 
believe (closing a buffered file flushes it), so __exit__ would need to be 
yielded from, and that's currently impossible, right?
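
To make that concrete (continuing the hypothetical wrapper sketch from 
above):

@asyncio.coroutine
def close(self):
    # close() may flush buffered data, so it can block and has to
    # go through the executor like everything else.
    loop = asyncio.get_event_loop()
    yield from loop.run_in_executor(None, self._file.close)

def __exit__(self, exc_type, exc_value, traceback):
    # The 'with' statement calls __exit__ synchronously, so there is
    # nowhere to 'yield from' the close() coroutine above. The return
    # value is only checked for truthiness (to suppress exceptions),
    # so a returned future would never actually be awaited.
    pass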

Iterating over the file presents difficulties too. There's no way for 
__next__ to be a coroutine, right? So __next__ would have to return 
futures. But how would it know when to raise StopIteration without 
actually doing any IO? Besides, all the futures would basically be the 
same - a readline() call scheduled in an executor - so if a user 
accidentally (or maybe on purpose) doesn't yield from each future right 
away, the iteration would spin infinitely, queuing up more and more reads.
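
In other words, the best __next__ I can come up with is roughly this 
sketch, which exhibits both problems:

def __next__(self):
    # Schedule another readline() in the executor and return the
    # future immediately. No IO has happened yet at this point, so
    # there is no way to tell whether we've hit EOF and should raise
    # StopIteration instead.
    loop = asyncio.get_event_loop()
    return loop.run_in_executor(None, self._file.readline)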

I'm thinking that implementing something like this isn't worth the 
trouble, and that users should instead be instructed to loop on readline() 
until an empty result comes back; I've sketched that below. I'd appreciate 
comments on my conclusions from the experts. :)
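
Something like this (process() is just a hypothetical stand-in for 
whatever the user does with each line):

while True:
    line = yield from f.readline()
    if not line:
        break  # readline() returns an empty result at EOF
    process(line)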

I will say one thing: I've learned a lot about Python 3's file IO stack. :)
