On Thu, 2012-07-12 at 09:52 -0400, Justin Lebar wrote:
> This is interesting...
>
> * Does the additional request for /.well-known/hints on every new
> connection to an origin have performance implications, for the client
> or the server? I thought that we didn't like the hardcoded
> favicon.ico hack, and this seems like the same kind of thing.
A number of folks in devops really like the well-known URI approach. It has the advantage of not being redundantly included in every transaction, it has well-defined caching semantics, and it's easy to deploy into pretty much any existing server framework. favicon and robots.txt, despite their warts, have been pretty successful in their mission. I think it's kind of yucky myself, but not a deal breaker. If people feel passionately that it is a deal breaker (though it would never cause blocking - see below) I'd love to hear that articulated.

> Can we somehow slipstream this data into an existing HTTP response?
> Small files are the bane of fast websites, and even though this file
> doesn't block the page, there's still a page-load-time cost to
> fetching it, inasmuch as we're not fetching something else at the
> time.

Right - so mitigating opportunity cost is a big deal here. Frankly I think we can do better with favicon too. Necko is pretty much unaware of favicon - docshell tries to send it at a time that is good (good for just that tab, maybe? I dunno) and with a low priority. That's all fine - but with a little more awareness, necko could make sure we never started a favicon/hint request unless we were otherwise 100% net-quiescent and, more importantly, did not count that transaction against the parallelism limit. Given the tiny size of these resources, not counting it would be OK from a congestion point of view and would remove the opportunity-cost problem, because the choke point for this type of file is free connections, not bandwidth. Doing it that way would mean nothing would ever block on a hint - and that would be a requirement as far as I am concerned.

The hint proposal foresees this as highly cacheable - so we shouldn't have to repeat it often, and perhaps we should only even fetch it for hostnames that are part of the core browsing set for a user, though I'm less sure of that last point.
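To make the scheduling idea concrete, here is a minimal sketch of a connection pool that only dispatches hint/favicon fetches at quiescence and never counts them against the parallelism limit. All names here (HintAwarePool and friends) are hypothetical illustrations, not necko's actual internals:

```python
# Sketch: hint fetches wait for full network quiescence and never
# occupy a connection slot, so they can never block a real transaction.
from collections import deque

PARALLELISM_LIMIT = 6  # typical per-host connection cap in HTTP/1.x


class HintAwarePool:
    def __init__(self):
        self.active = 0               # in-flight page-critical transactions
        self.pending_hints = deque()  # deferred hint/favicon fetches

    def start_request(self, url):
        """Page-critical requests obey the normal parallelism limit."""
        if self.active >= PARALLELISM_LIMIT:
            return False  # caller queues and retries later
        self.active += 1
        return True

    def finish_request(self):
        self.active -= 1
        self.maybe_dispatch_hints()

    def queue_hint(self, url):
        """Hints wait for quiescence instead of competing for slots."""
        self.pending_hints.append(url)
        self.maybe_dispatch_hints()

    def maybe_dispatch_hints(self):
        # Fire hints only when we are 100% net-quiescent, and do NOT
        # bump self.active - the hint never consumes a slot, so real
        # transactions keep their full parallelism.
        dispatched = []
        while self.active == 0 and self.pending_hints:
            dispatched.append(self.pending_hints.popleft())
        return dispatched
```

The key property is in `maybe_dispatch_hints`: a queued hint simply waits while any real transaction is active, and when it finally runs it is invisible to the limit check in `start_request`.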
> And of course all sites would pay a price, regardless of
> whether they used this manifest.

Sure - but it's quite minor and, in my view, acceptable. Content is growing ~10% every year in byte size and the number of object transactions is going through the roof - often 100 on a page load now. One very cacheable (especially if 404'd!) transaction is very bearable and part of the march of progress - that part doesn't really worry me.

> * It seems to me that much of this proposal is subsumed by SPDY. SPDY
> headers and cookies are compressed across the whole SPDY session, so
> omitting them doesn't buy us much, right?

Mostly right. First, it's a good insight, so I'm going to make sure to ask potential server-side operators exactly that question. But to give an answer I've heard: spdy is just in the nascent stages of standardization and it will change a lot in the coming days. If you're not on board for that ride you shouldn't be deploying it[*] - that's a perfectly valid reason to wait on spdy adoption for the moment. This proposal gives the operator a way to drop a hint file onto the server that just describes what their reality is already like.

Finding ways to tweak and extend HTTP/1 is a necessary thing to do for a while, and I'll be looking at a number of different approaches during the transition to HTTP/2. At some point that will be counterproductive - I don't know when that will be, but I'm sure we're not there yet :)

-Pat

[*] I never would have advocated for spdy in firefox at this stage without rapid release and the improved updater process.

_______________________________________________
dev-tech-network mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-tech-network
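[A note on the cacheability point above: the per-site price can be made a one-time cost with ordinary HTTP caching. These headers are an illustrative sketch, not part of the proposal text - even a site that opts out can serve a cacheable 404 so clients stop asking:]

```http
HTTP/1.1 404 Not Found
Cache-Control: public, max-age=86400
Content-Length: 0
```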
