On Thu, 25 Feb 2016, Honza Bambas wrote:
> First, how are you detecting slow connections? Just by not receiving data? If so, that is the wrong approach and always has been.
I'm not detecting slow connections at all. We attempt to detect stalled HTTP/1 connections where absolutely nothing gets transferred. It can happen when you move your browser between networks, like when changing between wifi access points.
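(What I mean by a stall check is, roughly, the sketch below; the names and the 5-second threshold here are just for illustration, not the real code.)

  /* Illustrative only: per-connection stall check of the sort described
     above. All names and the 5 second threshold are made up here. */
  #include <stdbool.h>
  #include <time.h>

  #define STALL_TIMEOUT_SEC 5

  struct http_conn {
      time_t last_read;        /* updated whenever recv() returns > 0 */
      bool   response_pending; /* a request is out and data is expected */
  };

  /* Checked from a periodic timer; true means "treat the connection as
     dead and retry the request on a fresh connection". */
  static bool conn_is_stalled(const struct http_conn *c, time_t now)
  {
      return c->response_pending &&
             (now - c->last_read) >= STALL_TIMEOUT_SEC;
  }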
> You have out-of-band ways to detect this - TCP keep-alive. I don't remember if that has been put off the table as uncontrollable on all platforms or too faulty. If not, setting the TCP keep-alive timeout _during the time you are receiving a response_ to something reasonable (your 5 seconds) will do the job for you very nicely.
Adaptive keep-alive! I like that. Just a tad worried it'll introduce other problems, as it would then make _all_ transfers get that keep-alive probing going at a fairly high frequency and not only the rare ones where we get a network change. For example, mobile use cases tend not to like keep-alive, since frequent probes keep the radio from idling. But maybe I'm just overly cautious.
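For reference, the per-connection knob you describe would look roughly like this on Linux; the option names are the Linux ones (macOS and Windows spell them differently) and the values are purely illustrative:

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>

  /* Sketch: turn on aggressive keep-alive while a response is in flight. */
  static int enable_response_keepalive(int fd)
  {
      int on    = 1;
      int idle  = 5; /* seconds of silence before the first probe */
      int intvl = 5; /* seconds between unanswered probes */
      int cnt   = 2; /* unanswered probes before the kernel drops the conn */

      if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
          return -1;
      if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
          return -1;
      if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0)
          return -1;
      return setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));
  }

Presumably we'd flip the socket back to the lazy system defaults once the response has completed.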
> And I also think that 5 seconds is too small an interval. At least 10 would do better, IMO.
It was mostly just a pick to get something that would still feel good enough for truly stalled connections and yet have a fairly low risk of actually interfering with a slow-to-respond server.
Anything in particular that makes you say that 10 is better? Why not 8 or 12?
> Another way is to rate the _response_ and probably also each _connection_. You can measure how fast they are receiving data and adjust your timeout accordingly. Still not a perfect solution, and also not simple, and stateful.
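(For the record, I read that suggestion as something along these lines; the names and thresholds are invented, just to show the shape of it:)

  #include <stddef.h>

  /* Sketch of a rate-adjusted stall timeout. All names and constants
     are invented for illustration. */
  struct conn_rate {
      double bytes_per_sec;   /* smoothed receive rate for this connection */
  };

  /* Exponentially smooth the observed rate after each successful read. */
  static void rate_update(struct conn_rate *r, size_t nread, double elapsed_sec)
  {
      if (elapsed_sec <= 0.0)
          return;
      double sample = (double)nread / elapsed_sec;
      r->bytes_per_sec = (r->bytes_per_sec == 0.0)
                             ? sample
                             : 0.8 * r->bytes_per_sec + 0.2 * sample;
  }

  /* Slower connections get more slack before being declared stalled. */
  static int stall_timeout_sec(const struct conn_rate *r)
  {
      if (r->bytes_per_sec == 0.0 || r->bytes_per_sec > 100.0 * 1024)
          return 5;            /* unknown or fast: keep the short timeout */
      if (r->bytes_per_sec > 10.0 * 1024)
          return 15;
      return 30;               /* very slow links get the longest grace */
  }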
In my experience, the slowest responses take a long while for the server to start responding, and in those cases we'd have 0 bytes transferred for a long while and thus nothing to adapt the timeout to at that point.

-- 

 / daniel.haxx.se
