Hi,

Nice project.

> def sendfile(out, in_fd, offset, nbytes, loop=None, executor=None):

IMO you must declare loop and executor as keyword-only parameters to avoid bugs.
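For concreteness: with a bare "*" in the signature, both parameters become keyword-only, so a stray positional argument fails loudly instead of silently binding to loop. A minimal sketch (os.sendfile() is Unix-only; the body is just one plausible implementation, not aiofiles' actual code):

```python
import asyncio
import os

def sendfile(out, in_fd, offset, nbytes, *, loop=None, executor=None):
    """Run os.sendfile() in an executor.

    The bare "*" makes loop and executor keyword-only: a caller who
    passes a fifth positional argument gets an immediate TypeError
    instead of accidentally passing it as the event loop.
    """
    if loop is None:
        loop = asyncio.get_event_loop()
    return loop.run_in_executor(executor, os.sendfile,
                                out, in_fd, offset, nbytes)
```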

Victor

2015-04-01 22:33 GMT+02:00 Tin Tvrtković <[email protected]>:
> Hello,
>
> here's a small library I've put together for reading disk files using a
> thread pool: https://github.com/Tinche/aiofiles (it's on PyPI too).
>
> There's not a lot of documentation but I hope it's very simple to use.
> Mostly it's just sticking 'yield from' in front of the existing API.
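For readers following along, the delegation idea can be illustrated with a tiny wrapper (this is not aiofiles' actual code, just a sketch of the pattern; I'm writing it in modern async/await syntax rather than "yield from"):

```python
import asyncio
import functools

class AsyncFile:
    """Sketch: push each blocking method of a wrapped file object onto
    a thread pool, so the caller only has to add "yield from"/"await"."""

    def __init__(self, f, executor=None):
        self._f = f
        self._executor = executor

    def _delegate(self, method, *args):
        # Schedule the blocking call on the loop's executor and return
        # the future, which the caller awaits.
        loop = asyncio.get_running_loop()
        return loop.run_in_executor(
            self._executor, functools.partial(method, *args))

    def read(self, *args):
        return self._delegate(self._f.read, *args)

    def write(self, *args):
        return self._delegate(self._f.write, *args)
```

Usage is then just `data = await AsyncFile(open("some_file", "rb")).read()`.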
>
> I've put together some useful pytest fixtures for testing asyncio
> applications. These have been heavily influenced (i.e. I basically stole
> them) by the wonderful pytest-tornado package. I'm contemplating refactoring
> them out and publishing them as pytest-asyncio.
>
> I've thought about supporting process pools in addition to thread pools, but
> this is over my head for the time being, and full of caveats. Technically,
> concurrent.futures.ProcessPoolExecutor on non-Windows uses pipes, and open
> file descriptors can be transferred through pipes, somehow. This would only
> be useful for non-buffered files, though. Also, a major reason to use
> processes is, I suppose, the ability to time out a call and cleanly
> terminate a stuck process, and I think c.f.PPE doesn't support that cleanly.
> In other words, complications...
>
> Another idea for improvement is handling buffered files intelligently. If
> you're trying to read some data from a buffered file, and there is enough
> data in the buffer, the trip through the executor could be avoided. I
> haven't looked closely into this.
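That fast path could look roughly like this (a sketch only, again in async/await syntax: peek() may itself perform one raw read, and there is no public API to inspect the buffer fill level, so this is not a watertight "never block" check):

```python
import asyncio
import io

async def smart_read(f, n, *, loop=None, executor=None):
    """Read n bytes, skipping the executor when the userspace buffer
    of a BufferedReader already holds enough data."""
    if isinstance(f, io.BufferedReader) and len(f.peek(n)) >= n:
        # Data is already buffered: a plain read() will not block on disk.
        return f.read(n)
    if loop is None:
        loop = asyncio.get_running_loop()
    # Fall back to the thread pool for a real (possibly blocking) read.
    return await loop.run_in_executor(executor, f.read, n)
```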
>
> I don't claim to be an expert at asyncio or I/O in general, so constructive
> comments are indeed welcome.
