Neil Gunton wrote:

> Is this really such a special case? I can't believe nobody else has
> wanted to implement a server like this.

It's a special case in the context of all the servers, proxies, transparent proxies and browsers together out there on the net - it's useful for taking load off your own server, but at the cost of _increasing_ the load on transparent proxies elsewhere on the net.


That's not to say that trying to reduce the load on your server is a bad idea or even a rare occurrence (it's neither); it's just that changing an RFC is not the right way to achieve it.

> If you want to have a setup
> where there is a heavy backend app server, with a lightweight reverse
> proxy front end, and you also want to have pages be cached, AND have
> personalization of pages based on cookies, then it makes perfect sense
> to store user options in a cookie, and then for the pages to be cached
> taking cookies into account.

There is already a mechanism for caching different variants of a page: simply encode the variant information into the URL. This works in every browser and cannot be switched off through user preference (as cookies can). Since a mechanism already exists, there isn't much point in changing the standard to accommodate a second way of doing the same thing.
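For example (a hypothetical exchange - the host, path and parameter name are made up), two users with different preferences simply fetch two different URLs, and every cache along the path stores them as two distinct resources under the ordinary rules:

    GET /news?style=compact HTTP/1.1
    Host: www.example.com

    GET /news?style=full HTTP/1.1
    Host: www.example.com

No Vary header, cookie parsing or RFC change is needed anywhere along the path - each URL is simply a separate cacheable resource.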


But you're also fighting existing websites that use cookies to track individual requests, and there are a lot of them out there. If each distinct cookie value were cached separately, then you would effectively be caching a separate copy of every page for every user, which makes caching a waste of time.
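To illustrate (hypothetical values): a cache that keyed on cookies would have to store these two requests as two separate copies of the same page, even though the session IDs make no difference to the content:

    GET /index.html HTTP/1.1
    Host: www.example.com
    Cookie: session=a1b2c3

    GET /index.html HTTP/1.1
    Host: www.example.com
    Cookie: session=d4e5f6

With per-user tracking cookies, every visitor carries a unique cookie value, so the cache hit rate drops to essentially zero.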

Regards,
Graham