Geoffrey Young wrote:

while I'm all for reducing server overhead (who isn't :) playing these kinds
of games with the filter API seems like such a bad idea.  what we have now
is a modular design that is simple and works - content handlers generate a
response, while various filters adjust that response based on interesting
criteria.  requiring that one know anything about the other breaks that
model and has the potential to get us into a hole from which it is difficult
to escape.

Which is a point I made in part of the post that you didn't quote above.

To try to make my point again, more clearly this time:

Content is generated in compliance with the HTTP/1.1 specification. This HTTP/1.1 compliant content is then fed through several filters, which potentially alter the data in compliance with the HTTP/1.1 specification. Eventually the filtered content is sent out over the network in compliance with the HTTP/1.1 specification.

If the byte range filter is not capable of receiving and intelligently handling a 206 Partial Content from a content handler, then the byte range filter is not compliant with HTTP/1.1, and is therefore broken.

If any other filter is not capable of processing data that has come from a 206 Partial Content response, AND that filter does not either a) remove itself from the filter stack, or b) remove the Range header from the request, then that filter is not compliant with the HTTP/1.1 specification, and is therefore broken.

Up until now it has been simplistically assumed that ALL content handlers will only ever generate full responses, and so filters and certain content handlers have ignored the Range part of RFC2616. With the existence of mod_proxy, mod_jk, mod_backhand (etc) taking this shortcut does not make sense.

Nowhere in the above is any requirement laid down that one module must depend on another one. The only requirement is that content handlers and filters must behave in a way that is compliant with the HTTP/1.1 specification.

Regards,
Graham