Brian Degenhardt wrote:
This is a huge pain in my neck. We stream/deliver mp3s through mod_proxy
and if you've got some guy on a 14.4 dialup downloading a 10meg mp3, the
backend server gets a connection tied up the whole time.
not to mention the huge potential for DoS
(you wouldn't even need the proxy .. a simple CGI would do)
It's tricky though. For CGI accelerators, you can rely on your TCP buffer
size to pull in the whole document. However, you can't buffer 10 megs in
RAM because you'd run out of memory with just a handful of streams.
What's needed is some sort of disk based buffering mechanism.
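The disk-based buffering idea above can be sketched as a two-phase spool-then-serve loop: drain the backend at full speed into a temp file on disk, release the backend connection, then feed the file to the client at whatever rate it can take. This is a minimal illustration using in-memory streams in place of real sockets; the function name and arguments are hypothetical, not an Apache API.

```python
import io
import shutil
import tempfile

def spool_then_serve(backend, client, chunk=64 * 1024):
    # Phase 1: copy the whole backend response to a disk-backed temp
    # file at backend speed, so memory use stays bounded by `chunk`.
    with tempfile.TemporaryFile() as spool:
        shutil.copyfileobj(backend, spool, chunk)
        backend.close()  # backend connection freed as soon as spooling ends
        spool.seek(0)
        # Phase 2: drip the spooled bytes to the (possibly slow) client;
        # only the proxy now waits on the client, not the backend.
        shutil.copyfileobj(spool, client, chunk)

# Stand-in "connections": file-like streams instead of real sockets.
backend = io.BytesIO(b"mp3-bytes " * 1000)
client = io.BytesIO()
spool_then_serve(backend, client)
```

A real module would do the same thing nonblockingly inside the server's I/O loop, but the key property is visible here: the backend is closed before the first byte reaches the client.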
so we don't block on the client send?
Would something like this be useful for the distribution? I've been playing
with mods like this on the side; if you guys want, I might be able to offer
a patch to accomplish this via a config directive.
I'd be interested in this.
it would stop us having to disable streaming
..Ian
On Wed, Aug 29, 2001 at 05:58:38PM -0400, Chuck Murcko wrote:
On Wednesday, August 29, 2001, at 12:29 PM, Ian Holsman wrote:
one of our developers over here came up with an interesting question.
Say we have a GET request which gets served from a CGI or proxy, and it
is streamed/chunked out (which is the desired effect).
Now let's say I have a VERY SLOW connection.
Does this cause the server to hold onto the backend connection (i.e.,
keep the CGI/proxy alive) while the server is waiting for a response
back from the client?
Yes, this is exactly what happens when you (mod_)proxy iTunes under Mac
OS X. If the client connection is slow enough, it fails and rebuffers
the stream periodically. But the backend connection stays open.
Chuck Murcko
Topsail Group
http://www.topsail.org/