> Which in turn means that every filter, now blissfully unaware of ranges,
> is forced to generate a full response for each byterange request. In the
> case of a downloaded ISO (for example), this means a significant amount
> of data (many hundreds of MB) is being processed by filters on each
> request.
>
> Thus this discussion.
While I'm all for reducing server overhead (who isn't? :) playing this kind of game with the filter API seems like a bad idea. What we have now is a modular design that is simple and works: content handlers generate a response, while the various filters adjust that response based on their own criteria. Requiring that one know anything about the other breaks that model and has the potential to get us into a hole from which it is difficult to escape.

For an example, see a post I made about the (still present) problems with filter_init, which was an attempt to fix a problem that resulted from a similar attempt at short-circuiting existing logic:

http://marc.theaimsgroup.com/?l=apache-httpd-dev&m=107090791508163&w=2

In my mind, the filter API works best when everyone is blissfully ignorant of each other, so that the net result is that all requests are handled appropriately with a minimum of programmatic effort. Sure, it would be nice to be as svelte as possible when serving large files, but the flip side is that being svelte means users of your API are more likely to get things wrong or to run into real-world situations you never thought about.

Just my $0.02.

--Geoff
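For readers unfamiliar with the model being argued over, a minimal sketch of the "blissfully ignorant" design follows, assuming the Apache 2.x filter API; the module and filter names (ignorant_module, IGNORANT_FILTER, ignorant_out_filter) are illustrative only, not anything from the thread. The point is simply that a filter receives a brigade, does its work, and passes the result along without knowing who produced the data or who consumes it next.

    #include "httpd.h"
    #include "http_config.h"
    #include "util_filter.h"
    #include "apr_buckets.h"

    /* A pass-through output filter: it walks the buckets it was handed
     * (a real filter would transform the data here) and passes the
     * brigade to whatever filter comes next, with no knowledge of
     * ranges, content handlers, or other filters in the chain. */
    static apr_status_t ignorant_out_filter(ap_filter_t *f,
                                            apr_bucket_brigade *bb)
    {
        apr_bucket *b;

        for (b = APR_BRIGADE_FIRST(bb);
             b != APR_BRIGADE_SENTINEL(bb);
             b = APR_BUCKET_NEXT(b)) {
            if (APR_BUCKET_IS_EOS(b)) {
                break;
            }
        }

        /* Hand the (possibly modified) brigade down the chain. */
        return ap_pass_brigade(f->next, bb);
    }

    static void register_hooks(apr_pool_t *p)
    {
        ap_register_output_filter("IGNORANT_FILTER", ignorant_out_filter,
                                  NULL /* no filter_init */,
                                  AP_FTYPE_RESOURCE);
    }

    module AP_MODULE_DECLARE_DATA ignorant_module = {
        STANDARD20_MODULE_STUFF,
        NULL, NULL, NULL, NULL, NULL,
        register_hooks
    };

Short-circuiting byterange handling would mean teaching a filter like this about ranges (or teaching the content handler about the filters downstream), which is exactly the coupling the post argues against.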