On Mar 4, 2014, at 9:12 AM, Eggert, Lars <[email protected]> wrote:

> it looks like (in japan at least) TCP is very rarely controlled by packet
> loss (dupack or timeout) but more by sender or receiver rate limiting (or
> just being too short lived :)
It would be interesting to know their delay variation. You've seen my famous "9 second delay" graphic; there was no packet loss at all in that. You have also seen, I believe, my annotated Shepherd Diagram of an upload to Picasa. That was from Akasaka, and had three drops in a five-second window, resulting in the session spending 40% of its duration underrunning available capacity.

It would be interesting to know the traffic mix, the line speeds and latencies end to end, and so on. From my perspective, it's Really Hard to say "the internet acts this way"; consider the problem of the six blind philosophers and the elephant... What I think I *can* say is that I measured something in a certain way, in a particular topological place, at a particular time, and with a particular workload; I analyzed it in a certain way; and in that measurement and analysis I observed ... something.

If you want my guess at what the Japanese trace measured, it had upwards of 50 Mbps end to end and "enough" buffer at that rate in the bottleneck switch to prevent tail-drop loss in the ambient workload. Short sessions, which predominate, would not touch that, and high-volume sessions might, as you say, self-limit in one of several ways.

For comparison, yesterday I took 24 hours of tcpdump trace on my laptop and wrote a reduction script. I had started out by capturing 38 hours of traces earlier in the week in one-hour chunks, and discovered that tcpdump zero-bases its data structures when it switches output files, so instead I took a single 24-hour trace file. In that reduction, I distinguished between microflows *from* me and microflows *to* me (where "me" might be my IPv4 or my IPv6 address or name), which would be the two halves of a TCP session. I also threw out sessions that didn't make sense to me, such as ones that might have already been open when I started the trace. Reason?
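In case it's useful, the reduction step I'm describing amounts to something like the sketch below: bucket packets into microflows keyed by 5-tuple, tagged by whether the source is one of my addresses. The packet record layout and the addresses are made up for illustration; this is not the actual script.

```python
from collections import defaultdict

# Illustrative placeholders for "my IPv4 or my IPv6 address" -- not my
# real addresses, just documentation-range stand-ins.
MY_ADDRS = {"192.0.2.10", "2001:db8::10"}

def reduce_flows(packets):
    """packets: iterable of (src, dst, sport, dport, nbytes, ts) records.

    Returns per-microflow counters, with each flow tagged "mine" (sent
    from one of my addresses) or "his" (sent to one of my addresses) --
    the two halves of a TCP session.
    """
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0,
                                 "first": None, "last": None})
    for src, dst, sport, dport, nbytes, ts in packets:
        if src in MY_ADDRS:
            direction = "mine"
        elif dst in MY_ADDRS:
            direction = "his"
        else:
            continue  # not traffic to or from me; skip it
        f = flows[(direction, src, dst, sport, dport)]
        f["packets"] += 1
        f["bytes"] += nbytes
        if f["first"] is None:
            f["first"] = ts
        f["last"] = ts  # duration is last - first once the flow is done
    return flows
```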
I have asymmetric bandwidth (12 Mbps down and 2 Mbps up, sez the contract, and I think that's interpreted as "at least", since I have seen higher), and I expect the two directions to behave a little differently. Rates are in kilobits/second, and all numbers are per session. I have TCP sessions as short as a single packet each way (data/RST, for whatever reason I might receive such things, and maybe SYN/SYN-ACK) and pipelined TCP connections lasting the better part of an hour (I opened all of my face:b00c friends' pages, which moved quite a bit of data, all using IPv6).

my flows:            10548
my retransmissions:   4009
my packets:    min=1         median=10         95%=33          max=73732
my bytes:      min=1         median=2493       95%=19608       max=697486314
my durations:  min=0.002751  median=58.096561  95%=120.355108  max=35764.656936
my kbps:       min=0.000074  median=0.577529   95%=17.171851   max=1049048788.929813

his flows:           14977
his retransmissions:  2859
his packets:   min=1         median=9          95%=104         max=181542
his bytes:     min=1         median=3795       95%=110354      max=221579702
his durations: min=0.000015  median=46.146412  95%=148.102106  max=35764.620901
his kbps:      min=0.000459  median=0.928163   95%=114.575466  max=22604.513177

There are some weird questions I want to understand about the "max" fields. I edited out sessions that were open when I started the trace, of which there were a few, and there are a couple of other strange sessions. One of these days I might sort out the difference between 14977 and 10548. But I think the bottom line is that while the median session in my home office probably doesn't incur a loss, it looks to me like the ones at the 95th percentile for size probably do - and maybe several.
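For what it's worth, the summary lines above can be produced by something as simple as the following. The nearest-rank choice of 95th percentile and the kbps formula are my assumptions about one reasonable way to do it, not necessarily what the actual reduction script did.

```python
import statistics

def summarize(values):
    """min / median / 95th percentile / max over a list of per-flow
    numbers, in the shape of the stat lines above. The 95th percentile
    is nearest-rank here; other definitions would differ slightly."""
    s = sorted(values)
    p95 = s[min(len(s) - 1, int(0.95 * len(s)))]
    return min(s), statistics.median(s), p95, max(s)

def kbps(nbytes, duration):
    """Per-session rate in kilobits/second, as in the tables above.
    Tiny durations are what make the 'max' kbps fields look so weird."""
    return nbytes * 8 / 1000.0 / duration
```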
_______________________________________________
aqm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/aqm
