Michael T. Babcock wrote:
Continued from the original Bugzilla response @ http://bugzilla.mozilla.org/show_bug.cgi?id=180777


no, i am speaking from the point of view of mozilla. how can mozilla discern the cause of a delay? it cannot know that a delay means it should move on to another URL on the page. why should the second URL not be delayed just as long?

It's not that hard to realise that _no_ data has come from the server for that object at all, but that the server _has_ responded to other requests. Moving on to the next request and letting the first request's data show up if it's going to show seems more intelligent than waiting until $timeout before giving up on it.
some requests may be static (served up quickly), while others may be dynamic, the result of some database lookup. tinderbox's page-load graphs take a lot longer to load than some of the static content on the multi-graph pages. how can the browser reasonably know the difference between a slow-loading database lookup and a slow-loading image that will never load?
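To make the disagreement concrete, here is a rough sketch in Python of the bookkeeping such a "move on when a request looks stalled" heuristic would need. The names and the 5-second threshold are made up for illustration; this is not mozilla code, and the guess in looks_stalled() is exactly the ambiguity described above.

import time

# Rough sketch of the "move on when a request looks stalled" idea.
# Everything here (names, the 5-second threshold) is illustrative only.

STALL_THRESHOLD = 5.0  # seconds with zero bytes before we *guess* it is stuck

class RequestTracker:
    def __init__(self):
        self.requests = {}  # url -> {"started", "bytes", "last_data"}

    def start(self, url):
        self.requests[url] = {"started": time.monotonic(), "bytes": 0,
                              "last_data": None}

    def data_received(self, url, nbytes):
        entry = self.requests[url]
        entry["bytes"] += nbytes
        entry["last_data"] = time.monotonic()

    def looks_stalled(self, url):
        # A request "looks stalled" if it has produced no data for a while
        # even though sibling requests to the same server have made progress.
        # As noted above, this is only a guess: a slow database-backed page
        # is indistinguishable from an image that will never arrive.
        entry = self.requests[url]
        if entry["bytes"] > 0:
            return False  # data is trickling in; leave it alone
        if time.monotonic() - entry["started"] < STALL_THRESHOLD:
            return False  # too early to judge
        return any(e["bytes"] > 0
                   for u, e in self.requests.items() if u != url)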



the proxy case, i assume. yes, i'm well aware of this case. in this case, connection limits are even more important. what would happen if the load on those proxy servers suddenly doubled? do you really think you'd experience a faster internet? not so, which is why RFC 2616 is very clear about the fact that user agents should limit the number of parallel connections.

It seems to me that this is the proxy server's problem. Squid, for example, has its own built-in limits for user requests as well as load balancing; these may very well be the job of the browser in direct-connection situations, but I'd argue that they aren't (or not to the same extent) in the case of a proxy server that is acting _in proxy_ for the browser and therefore takes on those responsibilities itself.

how can the proxy enforce such limits? by refusing to allow an incoming connection? that would break most user agents. it's simply not the way HTTP works. the proxy could refuse to hold open more than N persistent connections by adding a "Connection: close" response header, but that doesn't help minimize congestion. it actually increases congestion and slows down the network. only the user agent can minimize this congestion by limiting the number of persistent connections. please see RFC 2616 section 8.1.4:

Clients that use persistent connections SHOULD limit the number of
simultaneous connections that they maintain to a given server. A
single-user client SHOULD NOT maintain more than 2 connections with
any server or proxy. A proxy SHOULD use up to 2*N connections to
another server or proxy, where N is the number of simultaneously
active users. These guidelines are intended to improve HTTP response
times and avoid congestion.

Note: mozilla is in violation of this recommendation when speaking to a proxy server because it allows up to 4 persistent connections per proxy. we felt that more were needed because of the very problem you describe. 4 is also what IE allows, so there is precedent for choosing that value.
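For reference, the limits being discussed boil down to a per-target cap on parallel persistent connections: 2 per origin server per RFC 2616 section 8.1.4, and the 4-per-proxy figure mentioned above. A minimal Python sketch follows; the class and names are hypothetical, not actual mozilla code.

import threading
from urllib.parse import urlparse

MAX_PER_SERVER = 2  # RFC 2616 section 8.1.4 recommendation
MAX_PER_PROXY = 4   # the value mozilla and IE actually use, per the note above

class ConnectionLimiter:
    # Caps parallel persistent connections per connection target. When a
    # proxy is configured, every request connects to the proxy, so a single
    # pool with the larger per-proxy limit governs all traffic through it.
    def __init__(self, proxy=None):
        self.proxy = proxy
        self._limit = MAX_PER_PROXY if proxy else MAX_PER_SERVER
        self._pools = {}              # target -> semaphore
        self._guard = threading.Lock()

    def _target(self, url):
        return self.proxy if self.proxy else urlparse(url).netloc

    def _pool(self, target):
        with self._guard:
            if target not in self._pools:
                self._pools[target] = threading.Semaphore(self._limit)
            return self._pools[target]

    def acquire(self, url):
        # Blocks until a slot is free instead of opening yet another
        # connection and adding to the congestion described above.
        self._pool(self._target(url)).acquire()

    def release(self, url):
        self._pool(self._target(url)).release()

Keying the pool on the proxy rather than the origin host reflects the point above: when every request funnels through the same proxy, only the user agent's own cap keeps that proxy's load in check.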


i wish you would use a newsgroup instead. n.p.m.netlib is for discussing this sort of thing.

Yeah, I'm here. Mind you, I probably will never receive any responses unless they're CC'd to me and although you seem to think this needs discussion first, I still stand by my feature request.
the problem you described is interesting, but the solution you proposed is problematic. what we need to do is discuss in a broader forum how best to solve this problem. perhaps that will attract folks with good ideas, and as a result maybe we'll end up with a better solution. then, finally, we can file a bug for the implementation.

bugzilla is not meant to be a design discussion forum. bugzilla has
the disadvantage of being "closed" to only those cc'd on the bug. with
a newsgroup, we stand a better chance of getting more eyes on this problem.

darin



