On 7/2/07, Robert O'Callahan <[EMAIL PROTECTED]> wrote:
> On 7/2/07, Robert Sayre <[EMAIL PROTECTED]> wrote:

> > Basically, I think offline caches should respect the Vary HTTP
> > header, and maybe more. Applications will need to do this right
> > anyway, if they want to function correctly in the presence of ISP
> > HTTP proxies (AOL, T-Mobile, etc.), corporate firewalls, and
> > server-side stuff like Citrix NetScalers.

> No they don't. For example, they can just use Cache-Control: private
> to bypass those caches. That's what GMail does.

Yes, I should have mentioned that I don't think an Offline API will be
able to handle Cache-Control: private responses any better than other
proxies do, unless it reinvents the rest of HTTP's caching mechanisms.
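
For reference, this is the kind of response I mean (headers abridged,
values invented for the example):

    HTTP/1.1 200 OK
    Date: Mon, 02 Jul 2007 17:00:00 GMT
    Cache-Control: private, max-age=0
    Content-Type: text/html; charset=UTF-8

"private" forbids shared caches (the ISP proxies and corporate
firewalls mentioned above) from storing the response, so only the
user's own browser cache ever holds a copy.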


> > To me, it looks like the caching mechanisms in HTTP/1.1 can satisfy
> > this requirement. I think Rob is correct that it adds substantial
> > complexity, but it is already required.

> In what way is it already required? Browsers are not required to
> store multiple resources for the same URI. We don't; we just use Vary
> to help (in)validate the resource we've got.

I mean that it is required for web application authors who want to
scale cheaply and serve personalized pages. I don't think you agree
with me.
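
To illustrate "scale cheaply": with Vary, a shared cache can keep one
copy of a personalized page per user and answer repeat requests
without touching the origin server. A sketch (cookie name and values
made up):

    GET /inbox HTTP/1.1
    Host: example.org
    Cookie: sid=abc123

    HTTP/1.1 200 OK
    Cache-Control: max-age=300
    Vary: Cookie
    Content-Type: text/html; charset=UTF-8

The cache stores the response keyed on the Cookie value, so user
abc123's copy is never served to anyone else.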


> So how would you use Vary here, anyway? Serve pages with "Vary:
> Cookie"? I guess that could work, but app authors would have to pass
> no cookies except for the session cookie. That could be difficult.

> Or you could standardize the cookie value in some way.

> Using an HTTP response header to specify how a URI can map to
> multiple resources is a good idea, though. It avoids ambiguities and
> offers a simple default. If we have to have that feature, this seems
> like a good way to do it.

ETag and Content-Location could be used.

<http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.6>
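
Roughly what section 13.6 describes, as I read it (entity tags and
paths invented for the example): each variant gets its own ETag, and
Content-Location can name the specific variant:

    HTTP/1.1 200 OK
    Vary: Cookie
    ETag: "variant-abc123"
    Content-Location: /inbox/abc123
    Cache-Control: max-age=300

A cache holding several variants can then revalidate them in one shot:

    GET /inbox HTTP/1.1
    Host: example.org
    Cookie: sid=def456
    If-None-Match: "variant-abc123", "variant-def456"

and the server answers 304 with the ETag of whichever variant matches.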

--

Robert Sayre

"I would have written a shorter letter, but I did not have the time."
