On 2/25/2016 14:41, Daniel Stenberg wrote:
On Thu, 25 Feb 2016, Honza Bambas wrote:

First, how are you detecting slow connections? Just by not receiving data?
If so, that is the wrong approach and always has been.

I'm not detecting slow connections at all. We attempt to detect stalled HTTP/1 connections where absolutely nothing gets transferred.

OK, and how do you recognize it? From the problem you described it seemed like you were checking the intervals between chunks you receive at the socket transport, connection, or transaction level. That is the "not receiving data" approach I was referring to. That is the approach that never works...

It can happen when you
move your browser between networks. Like when changing between wifi access
points.

You have out-of-band ways to detect this - TCP keep-alive. I don't remember if that has been put off the table as being uncontrollable on all platforms or too faulty. If not, setting the TCP keep-alive timeout _during the time you are receiving a response_ to something reasonable (your 5 seconds) will do the job for you very nicely.
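A minimal sketch of what that could look like at the socket level, assuming Linux-style option names (TCP_KEEPIDLE is the Linux spelling; macOS calls it TCP_KEEPALIVE, which is part of the cross-platform concern above), with illustrative values and an illustrative helper name:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Illustrative only, not actual Necko code: turn on aggressive TCP
 * keep-alive probing on a connection while a response is outstanding. */
int enable_response_keepalive(int fd)
{
    int on = 1;
    int idle = 5;   /* seconds of silence before the first probe */
    int intvl = 5;  /* seconds between probes */
    int cnt = 2;    /* unanswered probes before declaring the connection dead */

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0)
        return -1;
    return setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));
}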

Adaptive keep-alive! I like that. Just a tad bit worried it'll introduce other problems as it'll then make _all_ transfers get that keepalive thing going at a fairly high frequency

It doesn't need to be a super high frequency. You would only enable it between sending the request and receiving the last bit of the response (completion). You can also engage it on open connections only after a network change has been detected, and only for a short period of time, but I would like to have such a mechanism engaged all the time. Connections may drop not just because your adapter configuration changes.

and not only the rare ones where we get a network change. For example, the mobile use cases tend to not like keep-alive. But maybe I'm just overly cautious.

On idle connections I would return to the normal timeout logic we have (I think we don't use TCP keep-alive at all on idle connections)
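Roughly, the lifecycle would be something like this (a sketch only; the hook names are hypothetical, and enable_response_keepalive is the helper sketched above):

#include <sys/socket.h>

int enable_response_keepalive(int fd);   /* from the sketch above */

/* Back to the regular idle timeout logic: no TCP keep-alive on idle conns. */
void disable_response_keepalive(int fd)
{
    int off = 0;
    (void)setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &off, sizeof(off));
}

/* Hypothetical hooks: probe only while a response is pending. */
void on_request_sent(int fd)       { (void)enable_response_keepalive(fd); }
void on_response_complete(int fd)  { disable_response_keepalive(fd); }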


And I also think that 5 seconds is too small an interval. At least 10 would do better IMO.

It was mostly just picked as something that would still feel good enough for truly stalled connections and yet have a fairly low risk of actually interfering with a slow-to-respond server.

Anything in particular that makes you say that 10 is better? Why not 8 or 12?

Yeah :)  that's it!  Nobody knows what this should be...


The other way is to rate the _response_ and probably also each _connection_. You can measure how fast they are receiving and adjust your timeout according to it. Still not a perfect solution; it's also not simple, and it's stateful.
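Purely as an illustration of that idea (the names and numbers are made up, not anything we ship): derive the stall timeout from the receive rate observed so far, with a conservative default before any bytes have arrived.

#include <stdint.h>

typedef struct {
    uint64_t bytes_received;   /* response body bytes seen so far */
    uint64_t elapsed_ms;       /* time since the request was sent */
} response_progress;

/* Illustrative sketch: scale the stall timeout to the observed rate. */
uint32_t stall_timeout_ms(const response_progress *p)
{
    const uint32_t default_ms = 30000;  /* generous pre-first-byte timeout */
    const uint32_t floor_ms = 5000;     /* never go below 5 seconds */

    if (p->bytes_received == 0 || p->elapsed_ms == 0)
        return default_ms;              /* nothing received yet: nothing to adapt to */

    /* Milliseconds per KiB observed so far; tolerate a few times that
     * much silence before declaring the connection stalled. */
    uint64_t ms_per_kib = (p->elapsed_ms * 1024) / p->bytes_received;
    uint64_t timeout = 4 * ms_per_kib;

    if (timeout < floor_ms)
        timeout = floor_ms;
    if (timeout > default_ms)
        timeout = default_ms;
    return (uint32_t)timeout;
}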

From my experience, the slowest responses take a long while for the server to start responding, and in those cases we'd have 0 bytes transferred for a long while and thus nothing to adapt the timeout to at that point.


Yep, imperfect solution, right.  TCP keep-alive would catch this, though.


-hb-


