On Fri, Mar 21, 2014 at 1:41 PM, Toke Høiland-Jørgensen <[email protected]> wrote:
> Dave Taht <[email protected]> writes:
>
>> I imagine with the new tcp's pfifo_fast is going to be sub 8ms also.
>
> Yeah, turns out I botched the qdisc setup (put it on the wrong interface
> on one of the servers) for the case with no switch. So the ~6ms was with
> pfifo_fast in one end.
Oh, goodie. I was puzzled as to why the "fast" fq_codel queue was at
6ms instead of under 2ms, given the BQL size and the traffic load.

You'd think that data centers and distros would be falling over
themselves to switch to sch_fq or sch_fq_codel at this point, to get
3x less latency than pfifo_fast for sparse flows. It's just a sysctl
away... (rough sketch at the bottom of this mail)

> Updated the original graphs for the host-to-host.

Retaining the pfifo_fast data is important as a baseline, but there's
not a lot of point in graphing it further. I think you will find pie's
behavior at these speeds bemusing.

> Data capture files are
> here: http://kau.toke.dk/experiments/cisco-switch/packet-captures/ -- no
> idea why the client seems to capture three times as many packets as the
> server. None of them seem to think they've dropped any (as per tcpdump
> output).
>
> Will add dumps from going through the switch in a bit...
>
>> Is your hardware fast enough to run tcpdump -s 128 -w whatever.cap -i
>> your interface during an entire rrul test without dropping packets?
>> (on client and server)

(question to the list) Are there any options to tcpdump or the kernel
that make it feasible to capture full packet payloads (64k) without
loss at these speeds? tshark? (You might be able to get somewhere with
port mirroring off the switch and a separate capture device.) See the
second sketch at the bottom of this mail.

/me sometimes likes living at 100Mbit and below.

> Well, as above, tcpdump doesn't say anything about dropped packets; but
> since the client dump is way bigger, perhaps the server-side does anyway?
>
> -Toke

-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
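P.S. To be concrete about the "just a sysctl away" bit, something along
these lines should do it. This is an untested sketch; eth0 and the
sysctl.d file name are just placeholders for whatever your setup uses:

  # make fq_codel (or fq) the default qdisc for devices brought up from now on
  sysctl -w net.core.default_qdisc=fq_codel
  echo "net.core.default_qdisc=fq_codel" >> /etc/sysctl.d/90-qdisc.conf

  # interfaces that are already up keep their old qdisc, so swap it by hand:
  tc qdisc replace dev eth0 root fq_codel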

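P.P.S. On the capture question, the two knobs I'd try first are a full
snaplen plus a much bigger kernel capture buffer (tcpdump's -B takes
KiB), or dumpcap from the wireshark package, which tends to keep up
better at these rates. Again a hedged sketch, not something I've
benchmarked here; eth0, the file names, and the buffer sizes are guesses:

  # full payloads, no name lookups, ~256MB kernel buffer, write straight to disk;
  # watch the "packets dropped by kernel" line tcpdump prints on exit
  tcpdump -i eth0 -s 65535 -B 262144 -n -w rrul-full.pcap

  # dumpcap's -B is in MiB; it writes pcapng
  dumpcap -i eth0 -s 65535 -B 256 -w rrul-full.pcapng

Writing to tmpfs or a fast disk helps too. The mirror-port config is
obviously per-switch, so I won't guess at that here.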