To be fair he does say "Not HTTP/1.1 capable (yet)." Is it the pipelining that you'd miss?

That was added after a pull request I sent.  ;)

HTTP/1.1 is far more than pipelining, though that does add significantly to performance. Chunked encoding allows for far more streamlined responses and true async responses. You don't have to define a Content-Length, so fully dynamic content is easier to produce. (HTTP/1.0 does have persistent connections via the Connection: keep-alive header, but it -required- perfectly aligned content-lengths.)
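
To illustrate the difference (a minimal sketch, not marrow's actual internals): under an HTTP/1.1 server a WSGI body can simply be a generator with no Content-Length at all, and the server frames each piece with Transfer-Encoding: chunked as it is produced.

    # Minimal sketch: a dynamic body of unknown length.  An HTTP/1.1
    # server can frame each yielded piece with chunked encoding; an
    # HTTP/1.0 keep-alive connection would need the exact Content-Length
    # up front (or fall back to closing the connection).
    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        def body():
            for i in range(10):
                yield ('chunk %d\n' % i).encode('ascii')
        return body()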

My application consists of a modified clone of web.py sans the bloat. I never thought I'd say such a thing of the anti-framework framework. Full disclosure: I've implemented my own `web.application`, borrowed heavily from `web.utils`/`web.http`, reimplemented most of `web.webapi` keeping with the style, and carried over the holy `web.template`.

I, too, forked web.py and removed significant portions of it:

        https://github.com/GothAlice/webpy

After determining that cleaning up WebPy wasn't in the cards for me, I wrote my own WSGI middleware-based microframework "from scratch" (WebCore).

Currently a middleware sanitizes the `environ` into a nested storage of request and response data: it parses the `accept*` headers, matches `user-agent` against `browscap` to determine browser capabilities, geolocates according to `REMOTE_ADDR`, and detects AJAX requests via `X-Requested-With`. No cookie/session handling has been reintroduced yet.
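
In rough outline it looks something like the following; the names here are illustrative only, not WebCore's actual API.

    # Illustrative sketch only -- not WebCore's real middleware.  The idea
    # is to pre-digest the raw environ into attribute-style request data
    # before the inner application ever sees it.
    class Request(object):
        def __init__(self, environ):
            self.environ     = environ
            self.accept      = environ.get('HTTP_ACCEPT', '*/*').split(',')
            self.user_agent  = environ.get('HTTP_USER_AGENT', '')
            self.remote_addr = environ.get('REMOTE_ADDR', '')
            self.is_xhr      = environ.get('HTTP_X_REQUESTED_WITH', '') == 'XMLHttpRequest'

    class SanitizingMiddleware(object):
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            environ['example.request'] = Request(environ)  # hypothetical key
            return self.app(environ, start_response)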

The 'draft' branch of marrow.server.http performs unicode normalization and conversion to native strings (Py2K byte str, Py3K unicode str), moving some of the more mundane work out of the middleware layer.
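
By that I mean roughly the following (a sketch of the convention, not the actual 'draft' branch code): values stay byte strings on Py2K and get decoded latin-1 into unicode strings on Py3K, with NFC normalization applied to text.

    import unicodedata

    # Sketch: coerce a value to the platform's native str type (byte str
    # on Py2K, unicode str on Py3K) and NFC-normalize it where it is text.
    def native(value):
        if isinstance(value, bytes) and str is not bytes:   # Py3K only
            value = value.decode('latin-1')
        if not isinstance(value, bytes):                     # text either way
            value = unicodedata.normalize('NFC', value)
        return value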

2. If it is requested by a non-JavaScript-capable user-agent, it precompiles the output from the four page handlers internally and flushes a complete page in one shot. jQuery is obviously /not/ fetched (in `ab` or `lynx`).

I'd recommend flushing the parts as they are compiled; Google does this when returning search results: static header (filled in with the current search terms, easy to generate), search results, ads, then footer. It makes the page appear to fill faster.
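
In WSGI terms that is simply a generator body that yields each section as soon as it has been rendered, for example (the render_* callables below are stand-in placeholders):

    # Placeholders standing in for real template calls.
    def render_header(environ):  return '<html><head>...</head><body>'
    def render_results(environ): return '<div id="results">...</div>'
    def render_ads(environ):     return '<div id="ads">...</div>'
    def render_footer(environ):  return '</body></html>'

    # Yield each section as soon as it is ready; the cheap header goes out
    # immediately while the slower sections are still being generated.
    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/html')])
        def body():
            for render in (render_header, render_results, render_ads, render_footer):
                yield render(environ).encode('utf-8')
        return body()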

All `web.template`s are *precompiled* at application boot. Each of the four content handlers is produced via a template. The generator yields 4 templates. That is 7 total. The landing page utilizes these according to the rules above, and the entire product is wrapped in a base template just before the response: 9 template calls in total.
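
For reference, the boot-time precompilation amounts to something like this (the template names are made up, and I'm going from memory on `web.template.frender` returning a compiled callable per file):

    import web

    # Compile each template file exactly once at application boot; after
    # that, rendering is just a function call per request.  (Names are
    # hypothetical; frender usage from memory.)
    TEMPLATE_NAMES = ('panel_one', 'panel_two', 'panel_three', 'panel_four', 'base')

    TEMPLATES = dict(
        (name, web.template.frender('templates/%s.html' % name))
        for name in TEMPLATE_NAMES
    )

    def render(name, *args, **kw):
        return TEMPLATES[name](*args, **kw)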

You might like to check out simplithe[1], a pure-Python templating system that uses Python itself as the interpreter (overloading [] notation) and is easy to return as a generative WSGI body (including flush semantics).
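
(The [] trick, for anyone who hasn't seen it, is conceptually along these lines -- this is *not* simplithe's actual API, just a toy illustration of overloading __getitem__ so that nested markup reads as plain Python.)

    # Not simplithe's API -- only an illustration of the [] idea.
    class Tag(object):
        def __init__(self, name, children=None):
            self.name, self.children = name, children or []

        def __getitem__(self, children):
            if not isinstance(children, tuple):
                children = (children,)
            return Tag(self.name, list(children))

        def __str__(self):
            inner = ''.join(str(c) for c in self.children)
            return '<%s>%s</%s>' % (self.name, inner, self.name)

    html, body, p = Tag('html'), Tag('body'), Tag('p')
    print(html[body[p['Hello'], p['world']]])
    # -> <html><body><p>Hello</p><p>world</p></body></html>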

"BJOERN IS SCREAMINGLY FAST AND ULTRA-LIGHTWEIGHT." He's right.

bjoern is up to 35 times faster than marrow and gevent w/ py-wsgi and
1.5 times faster than gevent w/ c-wsgi. C truly is a requirement for
any kind of real speed.

That is, in fact, very impressive. Thank you for including Marrow. :) What exactly did you mean by 4 marrow workers? Multi-processing? Multi-processing on a single-core machine may impede performance, not improve it. I'll also be adding multi-threading (which will have crap performance under anything but Python 3.2+ thanks to the GIL, but hey, it makes it more complete).

Additionally, how much effort was it for you to make your application compatible with the semantics of WSGI 2? (No start_response callable and no writer returned by same.)
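
For anyone following along, the shape of the change is roughly this (simplified; the draft PEP also pins down bytes versus native strings, which I'm glossing over here):

    # Classic WSGI 1.x: status and headers travel through start_response.
    def wsgi1_app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'Hello, world!\n']

    # PEP 444 style sketch: no start_response (and no write callable);
    # the application returns a (status, headers, body) triple instead.
    def wsgi2_app(environ):
        return ('200 OK',
                [('Content-Type', 'text/plain')],
                [b'Hello, world!\n'])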

I'm mentioning this because in my tests m.s.http handles C10K, is able to
process 10K req/sec at C6K or so, is fully unit tested, and has complete
documentation.

Which `ab` arguments are you using for "C10K"?

Here are the terminal windows:  :)

        https://gist.github.com/707936

The box is a 512MB (RAM) slice instance on SliceHost and is actively running a MUSH (very inefficient use of CPU; it's constantly inducing a small amount of load even with no active connections) and three WebCore web apps.

At smaller concurrency (~6K) Marrow is able to process > 10K requests/second.

I'm not yet intimate with the details of WSGI so I've learned a great
deal already just by hearing that a PEP 444 exists. To clarify, PEP
3333 is `WSGI 1.1` and PEP 444 is `web3`, correct? Would it be
accurate to say that your server supports `web3` rather than `WSGI 2`?

After some discussion on the Web-SIG mailing list, PEP 444 is now "officially" WSGI 2, and PEP 3333 is WSGI 1.1.

If anyone is interested in future benchmarks as I continue to
reintroduce complexity I'd be willing to repeat the above
periodically. And last but not least, the relevance of this post to
web.py is that my project will use web.py apps as "extensions" atop a
decentralized social framework. Think of it as retaining the
high-level features of web.py while constraining and optimizing the
lower-level features (in no small part by marrying the framework to
the most optimal server implementation).

Sounds thoroughly impressive thus far! I can safely assume this is a closed-source project?

Happy New Year!

        - Alice.

[1] http://bit.ly/fxcFzG

