As an example from a somewhat different area: in Chrome, Web Workers today require a separate process per worker. Creating too many processes is undesirable, so there is a relatively low per-origin limit and a higher total limit. Having two limits helps avoid the situation where one bad page affects the others. Once a limit is reached, the worker objects are still created but are queued, on the theory that pages from the same origin can cooperate, while if the total limit is exceeded it is hopefully a temporary condition. Not ideal, but if there has to be a limit, we thought having two limits (per-origin and total) is better than having only a total one.
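
Very roughly, the policy amounts to something like this (a simplified JavaScript sketch, not the actual Chrome code; the limit values and names are made up for illustration):

  // Simplified sketch of the two-limit policy; not the real Chrome code.
  var PER_ORIGIN_LIMIT = 16;   // relatively low per-origin limit (illustrative)
  var TOTAL_LIMIT = 64;        // higher total limit (illustrative)

  var runningPerOrigin = {};   // origin -> count of running workers
  var runningTotal = 0;
  var queued = [];             // workers created but waiting for a free slot

  function requestWorker(origin, start) {
    var perOrigin = runningPerOrigin[origin] || 0;
    if (perOrigin < PER_ORIGIN_LIMIT && runningTotal < TOTAL_LIMIT) {
      runningPerOrigin[origin] = perOrigin + 1;
      runningTotal++;
      start();                 // launch the worker process now
    } else {
      // Over a limit: the worker object exists, but it sits in a queue
      // until a worker finishes and frees a slot.
      queued.push({ origin: origin, start: start });
    }
  }

  function workerFinished(origin) {
    runningPerOrigin[origin]--;
    runningTotal--;
    // Start the first queued worker whose origin now has room.
    for (var i = 0; i < queued.length; i++) {
      var w = queued[i];
      if ((runningPerOrigin[w.origin] || 0) < PER_ORIGIN_LIMIT &&
          runningTotal < TOTAL_LIMIT) {
        queued.splice(i, 1);
        requestWorker(w.origin, w.start);
        break;
      }
    }
  }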
On Thu, May 13, 2010 at 4:55 PM, Perry Smith <[email protected]> wrote:
>
> On May 13, 2010, at 12:40 PM, Mike Shaver wrote:
>
> > The question is whether you queue or give an error. When hitting the
> > RFC-ish per-host connection limits, browsers queue additional requests
> > from <img> or such, rather than erroring them out. Not sure that's
> > the right model here, but I worry about how much boilerplate code
> > there will need to be to retry the connection (asynchronously) to
> > handle failures, and whether people will end up writing it or just
> > hoping for the best.
>
> Ah. That's a good question. (Maybe that was the original question.)
>
> Since web sockets is the topic and, as far as I know, web sockets are only
> used by JavaScript, I would prefer an error over queuing them up.
>
> I think JavaScript and browser facilities have what is needed to create their
> own retry mechanism if that is what a particular situation wants. I don't
> see driving the retry via a scripting language as bad. It's not that hard
> and it won't happen that often. And it gives the JavaScript authors more
> control and choices.
>
> That's my vote...
>
> pedz
>
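
For what it's worth, the kind of script-side retry Perry describes could look roughly like this (a minimal sketch only; the URL, retry count, delays, and helper name are placeholders, not anything the spec or any browser defines):

  // Minimal retry wrapper: retries with doubling delay if the socket
  // fails before it ever opens, e.g. because a connection limit was hit.
  function connectWithRetry(url, onOpen, onGiveUp, retriesLeft, delayMs) {
    var ws = new WebSocket(url);
    var opened = false;
    ws.onopen = function () {
      opened = true;
      onOpen(ws);
    };
    ws.onerror = function () {
      if (opened) return;            // errors after open are not handled here
      if (retriesLeft > 0) {
        setTimeout(function () {
          connectWithRetry(url, onOpen, onGiveUp, retriesLeft - 1, delayMs * 2);
        }, delayMs);
      } else {
        onGiveUp();
      }
    };
  }

  // Example use (placeholder URL and values):
  // connectWithRetry("ws://example.org/chat",
  //                  function (ws) { ws.send("hi"); },
  //                  function () { /* give up */ },
  //                  5, 500);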
