William A. Rowe, Jr. wrote:

> I don't like where this conversation is heading at all.  You are suggesting that
> every filter needs to become progressively more aware of the http module
> characteristics, but that's what we were moving away from in apache 2.0.

Ok, this is exactly how Geoffrey Young understood it, but as you pointed out, mod_proxy (and mod_jk, etc.) are protocol handlers, not content handlers.

I don't see how mod_proxy can avoid being a protocol handler and still conform to the behaviour prescribed for proxies in RFC2616.

> Body/content generation or transformation should not be contending with
> these issues you raised above.  It's not unreasonable to expect some
> metadata to pass through or be transformed (such as a content length,
> which some filter can tweak or discard altogether).  But it is getting very
> obscure to expect them to contend with byteranges.  What's next?

Not obscure at all - byteranges are used by download accelerators (evil things that they are) and by people doing download resume. Supporting them is a big performance win for any webserver that serves large files, which, as the net gets faster, is going to become more of a problem.
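
To make it concrete - a byte range request is just the Range header, visible to any handler or filter on the request_rec today. A minimal sketch (illustrative only, not code from proxy):

    #include "httpd.h"
    #include "http_log.h"
    #include "apr_tables.h"

    /* Minimal sketch: a resuming download client sends something like
     * "Range: bytes=500000-" and expects a 206 Partial Content reply
     * carrying only that span of the body.  Any handler or filter can
     * see the request on r->headers_in. */
    static void log_range_request(request_rec *r)
    {
        const char *range = apr_table_get(r->headers_in, "Range");
        if (range != NULL) {
            ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r,
                          "client asked for byte range: %s", range);
        }
    }

The hard part is not seeing the header, it's deciding who honours it once filters have transformed the body - which is exactly the argument here.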

> That's why I proposed a skip-forward semantic to support byte ranges.
> It's far abstracted from http, is an optional feature (skip if you can, or
> read if your filter must in order to transform) and trivial to implement.
> And it's typical of bytestream APIs.

> The proxy solution is simpler: determine if end to end you have either
> http <> proxy, or if the intervening filters are all 1:1 stateless transformations.
> If they can't negotiate a protocol level pipe (because there are non-stateless
> content filters in the way), then it's up to http and proxy to stay out of the
> way, and make it possible for content filters to filter content.
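
(For context: a content filter that really is stateless and 1:1 is already close to trivial in the 2.0 filter API - roughly the sketch below, which is illustrative rather than anything from the tree. The interesting filters are precisely the ones that can't be written this way.)

    #include "httpd.h"
    #include "util_filter.h"

    /* Sketch of a pass-through output filter: no state, no byte-level
     * transformation, it just hands the brigade to the next filter.
     * Anything smarter than this has to care what is in the stream -
     * and that is where the byterange/metadata question bites. */
    static apr_status_t passthrough_out_filter(ap_filter_t *f,
                                               apr_bucket_brigade *bb)
    {
        return ap_pass_brigade(f->next, bb);
    }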

proxy is a content handler though - it cannot stay out of the way; what would take its place?

So far it seems that we're expecting proxy to

a) receive an HTTP response from a backend server.
b) parse the response, and encode metadata like the range and content length into some filter-specific metadata that travels up the stack.
c) pass the content up the stack.
d) get the filter stack to convert the metadata back into HTTP again.

And apart from Content-Length and Range, what about the other metadata proxy gets from the backend, like Date, or Server, or ETag? At some point the "metadata" becomes the "headers array", and now we're back to just a stack that parses HTTP.
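
Put another way, the "headers array" already exists: it's r->headers_out, which the HTTP header filter serialises back onto the wire. Something like the sketch below (illustrative only, not mod_proxy's actual code, with the backend_* values standing in for whatever was read off the backend connection) is what steps b) and d) collapse back into:

    #include "httpd.h"
    #include "apr_tables.h"

    /* Sketch of steps b) and d) with the structures we already have:
     * the backend's headers are parsed into r->headers_out, and the
     * HTTP header filter later turns that table back into wire-format
     * headers.  The body then goes up the stack as ordinary buckets. */
    static void copy_backend_headers(request_rec *r,
                                     const char *backend_clen,
                                     const char *backend_etag)
    {
        apr_table_set(r->headers_out, "Content-Length", backend_clen);
        apr_table_set(r->headers_out, "ETag", backend_etag);
        /* ...Date, Server and friends the same way... */
    }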

I can see a lot of virtue in keeping the modules simple, but when I zoom out a bit it seems that we're undoing a lot of work only so that it can be redone again by the output filters.

Regards,
Graham
--
