On 2011-01-06 10:15:19 -0800, Antoine Pitrou said:

> Alice Bevan–McGregor <al...@...> writes:
> Er, for the record, in Python 3 non-blocking file objects return None when
> read() would block.

-1

I'm aware; however, that's not practically useful.  How would you detect
from within the WSGI 2 application that the file object has become
readable?  Implement your own async reactor / select / epoll loop?
That's crazy talk!  ;)

> I was just pointing out that if you need to choose a convention for signaling blocking reads on a non-blocking object, it's already there.

I don't. I need a way to suspend execution of a WSGI application pending some operation, often waiting for socket or file read or write availability. (Just as often something entirely unrelated to file descriptors, see my previous post from a few moments ago.)

> By the way, an event loop is the canonical implementation of asynchronous programming, so I'm not sure what you're complaining about. Or perhaps you're using "async" in a different meaning? (which one?)

If you use non-blocking sockets, and the WSGI server provides a way to directly access the client socket (ack!), utilizing the None response on reads would require a tight loop within your application to wait for actual data. That's really, really bad, and in a single-threaded server, deadly.
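To make the problem concrete, here is a small sketch (Unix-only, using a pipe; all names here are illustrative, not from any WSGI server) showing the None-on-would-block convention and why the only sane alternative to a tight loop is an OS-level readiness call like select:

```python
import fcntl
import os
import select

def make_nonblocking_reader(fd):
    # Set O_NONBLOCK on the fd and wrap it in an unbuffered raw file object.
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)
    return os.fdopen(fd, 'rb', buffering=0)

r, w = os.pipe()
reader = make_nonblocking_reader(r)

# No data yet: a non-blocking raw read() returns None instead of blocking.
assert reader.read(10) is None

os.write(w, b'ping')            # now there is data on the pipe
select.select([r], [], [])      # returns immediately: the fd is readable
assert reader.read(10) == b'ping'
```

In other words, an application handed only the None convention still has to run its own select/epoll loop to know *when* to retry the read, which is exactly the machinery a WSGI application shouldn't have to implement.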

> I don't understand why you want a "yield" at this level. IMHO, WSGI needn't involve generators. A higher-level wrapper (framework, middleware, whatever) can wrap fd-waiting in fancy generator stuff if so desired. Or, in some other environments, delegate it to a reactor with callbacks and deferreds. Or whatever else, such as futures.

WSGI already involves generators: the response body. In fact, the templating engine I wrote (and extended to support flush semantics) utilizes a generator to return the response body. Works like a hot damn, too.
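For illustration, a minimal WSGI application whose body is a generator (this is the standard PEP 3333 shape, not anything novel; each yielded chunk can be flushed to the client as it is produced):

```python
def app(environ, start_response):
    # A perfectly ordinary WSGI application whose response body
    # is a generator rather than a list of byte strings.
    start_response('200 OK', [('Content-Type', 'text/plain')])

    def body():
        yield b'Hello, '    # the server may flush this chunk immediately
        yield b'world!'

    return body()
```

Any compliant server simply iterates the returned generator, so generators are already first-class citizens in the response path.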

Yield is the Python language's native way to suspend execution of a callable in a re-entrant way. A trivial example of this is an async "ping-pong" reactor. I wrote one ("you aren't a real Python programmer unless...") as an experiment and use it for server monitoring, with tasks generally scheduled against time rather than against edge-triggered or level-triggered fd readiness.
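A toy version of such a time-scheduled "ping-pong" reactor might look like this (a hypothetical sketch, not the actual implementation mentioned above): tasks are generators that yield the number of seconds to sleep before being resumed.

```python
import heapq
import time

def reactor(tasks):
    # Priority queue of (resume_time, tiebreaker, task); the tiebreaker
    # keeps heapq from ever comparing two generator objects.
    now = time.monotonic()
    queue = [(now, i, task) for i, task in enumerate(tasks)]
    heapq.heapify(queue)
    while queue:
        when, i, task = heapq.heappop(queue)
        delay = when - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        try:
            pause = next(task)          # run the task until it yields again
        except StopIteration:
            continue                    # task finished; drop it
        heapq.heappush(queue, (time.monotonic() + pause, i, task))

def ping():
    for _ in range(2):
        print('ping')
        yield 0.01                      # suspend; resume ~10ms later

def pong():
    for _ in range(2):
        print('pong')
        yield 0.01

reactor([ping(), pong()])
```

The tasks interleave (ping, pong, ping, pong) purely through yield and re-entry, with no threads and no fd polling involved.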

Everyone has their own idea of what a "deferred" is, and there is only one definition of a "future", which (in a broad sense) is the same as the general idea of a "deferred". Deferreds just happen to be implementation-specific and often require rewriting large portions of external libraries to make them compatible with that specific deferred implementation. That's not a good thing.

Hell; an extension to the futures spec to handle file descriptor events might not be a half-bad idea. :/

> By the way, the concurrent.futures module is new. Though it will be there in 3.2, it's not guaranteed that its API and semantics will be 100% stable while people start to really flesh it out.

Ratification of PEP 444 is a long way off itself. Also, Alex Grönholm maintains a PyPI backport of the futures module compatible with 2.x (not sure of the specific minimum version) through < 3.2. I'm fairly certain deprecation warnings wouldn't kill the usefulness of that implementation. Worrying about instability at this point may be premature.

+1 for pure futures which (in theory) eliminate the need for dedicated async versions of absolutely everything at the possible cost of slightly higher overhead.
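The appeal in one small sketch: an unmodified blocking library call can be deferred to an executor, so no dedicated async port of the library is required (blocking_lookup here is a stand-in for any blocking call, not a real API):

```python
from concurrent.futures import ThreadPoolExecutor

def blocking_lookup(name):
    # Stands in for any blocking call from an unmodified external library.
    return len(name)

with ThreadPoolExecutor(max_workers=2) as pool:
    future = pool.submit(blocking_lookup, 'example.com')
    # ... the caller is free to do other work while the call runs ...
    print(future.result())   # blocks only at the point of consumption
```

The "slightly higher overhead" is the thread-pool hop; the win is that the library never has to know it is being used asynchronously.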

> I don't understand why futures would solve the need for a low-level async facility.

You misinterpreted; I didn't mean to imply that futures would replace an async core reactor, just that long-running external library calls could be trivially deferred using futures.

> You still need to define a way for the server and the app to wake each other (and for the server to wake multiple apps).

Futures are a pretty convenient way to have a server wake an app; using a future completion callback wrapped (using partial) around the paused application generator would do it. (The reactor Marrow uses, a modified Tornado IOLoop, would require calling reactor.add_callback(partial(worker, app_gen)) followed by reactor._wake() in the future callback.)
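The wake-up path can be sketched in a few lines. This is a simplified stand-in: `resume` plays the role of the server's "re-enter the paused application generator" step, where a real server would schedule that re-entry on its reactor (add_callback plus a wake-up) rather than running it directly in the future's callback thread:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial

def app():
    result = yield               # paused here until the future completes
    print('resumed with', result)

def resume(gen, future):
    # Re-enter the paused generator with the future's result.
    try:
        gen.send(future.result())
    except StopIteration:
        pass                     # the application finished

gen = app()
next(gen)                        # advance the app to its first yield

with ThreadPoolExecutor() as pool:
    future = pool.submit(lambda: 42)
    future.add_done_callback(partial(resume, gen))
```

partial binds the paused generator to the callback, so when the future completes, the application wakes up exactly where it yielded.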

"Waking up the server" would be accomplished by yielding a futures instance (or fd magical value, etc).

> This isn't done "naturally" in Python (except perhaps with stackless or greenlets). Using fds gives you well-known, flexible possibilities.

Yield is the natural way for one side of that, and re-entering the generator on future completion covers the other side. Stackless and greenlets are alternate ideas, but yield is built in (and soon futures will be, too).
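Both halves together, in miniature (a hypothetical sketch: a synchronous trampoline stands in for a real event loop, and the isinstance check stands in for whatever yielded-value dispatch a spec would define): the app yields a future to suspend itself, and the server re-enters the generator with the result once the future completes.

```python
from concurrent.futures import Future, ThreadPoolExecutor

def run(gen):
    # Trampoline: drive the app generator, resolving yielded futures.
    chunks, to_send = [], None
    while True:
        try:
            yielded = gen.send(to_send)
        except StopIteration:
            break
        if isinstance(yielded, Future):
            to_send = yielded.result()   # wait, then re-enter with the value
        else:
            chunks.append(yielded)       # an ordinary body chunk
            to_send = None
    return chunks

with ThreadPoolExecutor(max_workers=2) as pool:
    def application():
        data = yield pool.submit(lambda: b'payload')  # suspend on the future
        yield b'got ' + data                          # resumed with its result

    print(run(application()))
```

A real server would park the generator and return to its event loop instead of blocking in result(), but the yield/send contract is identical.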

> If you want to put the futures API in WSGI, think of the poor authors of a WSGI server written in C who will have to write their own executor and future implementation. I'm sure they have better things to do.

If they embed a Python interpreter via C, they can utilize native implementations of future executors, though these will obviously be slightly less performant than a native C implementation. (That is, unless the stdlib version in 3.2 will have C backing.)

        - Alice.


_______________________________________________
Web-SIG mailing list
Web-SIG@python.org
Web SIG: http://www.python.org/sigs/web-sig
Unsubscribe: 
http://mail.python.org/mailman/options/web-sig/archive%40mail-archive.com
