I just downloaded and seeded 4 popular torrents overnight using the latest version of the transmission-gtk client. I have not paid much attention to this app or protocol of late (it has been about 2.5 years since I last did this); I was sparked by wanting to test cdg, but did not get that far.
Some egress stats this morning (fq_codel on the uplink):

  bytes 32050522339 packets 3379478 dropped 702799 percent 20.80% maxpacket 28614

Some notes:

1) The link stayed remarkably usable:

   http://snapon.lab.bufferbloat.net/~d/withtorrent/vs64connectedpeers.png

   This graph shows what happened when one of the 4 torrents completed. The percentage of bandwidth the uplink flows in this test got was a bit larger than I expected. Subjectively, web browsing was slower but usable, and my other normal usage (ssh, mosh, and Google Music over QUIC) was seemingly unaffected. (Latency for small flows stayed pretty flat.)

2) Even with 69 peers going at peak, I generally did not get anywhere near saturating the 100 Mbit downlink with torrent traffic alone.

3) Offloads are a pita. Merely counting "packets" here does not show the real truth of what's going on (a max "packet" of 28614 bytes!? - that single superpacket is roughly 19 MTU-sized frames), so linux, benchmarkers, and so on should also be counting bytes dropped these days. (cake does peel superpackets, but I was not testing that, and it too does not report bytes dropped.)

4) *All* the traffic was UDP (uTP). Despite IPv6 being enabled (with two source-specific IPv6 addresses), I did not see any IPv6 peers connect. Bug? Death of torrent over IPv6? Blocking? What?

5) Transmission-generated uplink traffic seemed "bursty", but I did not tear apart the data or the code. I will track queue length next time.

6) Although transmission seems to support setting the diffserv byte, it did not do so on the UDP traffic; I think that was a TCP-only option. It is also incorrect for IPv6 (it does not use IPV6_TCLASS). I had figured, before starting the test, that this was going to be a good test of cake's diffserv support. Sigh. Is there some other client I could use? (A minimal sketch of the socket options involved is in the P.S. below.)

7) Transmission ate a metric ton of cpu (30% on an i3) at these speeds.

8) My (cable) link actually is 140 Mbit down, 11 up. I did not much care for asymmetric networks when the ratios were 6x1, so 13x1 is way up there....

Anyway, 20% packet loss of the "right" packets was survivable. I will subject myself to the same test on other fq and aqm qdiscs. And, if I can force myself to, with no aqm or fq at all. For SCIENCE!

Attention, DMCA lawyers: please send takedown notices to bufferbloat-research@/dev/null.org.

One of the things truly astonishing about this is that in 12 hours, in one night, I downloaded more stuff than I could ever watch (mp4) or listen to (even in flac format) in several days of dedicated consumption. And it all just got rm -rf'd. It occurs to me there is a human upper bound to how much data one would ever want to consume, and we cracked that limit at 20 Mbit, with only 4k+ video driving demand any harder. When we started bufferbloat.net, 20 Mbit downlinks were the best you could easily get.

--
Dave Täht
worldwide bufferbloat report:
http://www.dslreports.com/speedtest/results/bufferbloat
And: What will it take to vastly improve wifi for everyone?
https://plus.google.com/u/0/explore/makewififast
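P.S. Re: note 6 - for concreteness, here is a minimal sketch of per-socket diffserv marking on Linux, assuming a client simply flips the bits on its v4 and v6 UDP sockets. The CS1/background value is my own illustrative choice, and this is not transmission's actual code; the point is only that IP_TOS covers IPv4, and that without a matching IPV6_TCLASS call the IPv6 traffic goes out with traffic class 0.

/* Sketch: mark one IPv4 and one IPv6 UDP socket with DSCP CS1 ("background").
 * Illustrative only - the DSCP choice and structure are not from transmission. */
#include <stdio.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd4  = socket(AF_INET,  SOCK_DGRAM, 0);
    int fd6  = socket(AF_INET6, SOCK_DGRAM, 0);
    int dscp = 8;            /* CS1 - a plausible marking for bulk/torrent traffic */
    int tos  = dscp << 2;    /* DSCP occupies the upper 6 bits of the TOS/TCLASS byte */

    /* IPv4: roughly what clients that "support diffserv" do today. */
    if (setsockopt(fd4, IPPROTO_IP, IP_TOS, &tos, sizeof(tos)) < 0)
        perror("IP_TOS");

    /* IPv6: the part note 6 complains is missing - without this,
     * the v6 packets stay unmarked no matter what IP_TOS was set to. */
    if (setsockopt(fd6, IPPROTO_IPV6, IPV6_TCLASS, &tos, sizeof(tos)) < 0)
        perror("IPV6_TCLASS");

    printf("marked v4 and v6 UDP sockets with DSCP %d\n", dscp);
    return 0;
}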
