On 21.08.2019 at 15:20, Dave Taht wrote:
On Wed, Aug 21, 2019 at 12:51 AM Sebastian Gottschall
<[email protected]> wrote:
I have seen this already. Our plan here is that the user specifies the Internet
connection type (VDSL2, cable, whatever) in the case of cake, which will then
be used as an argument.
       Good goal, and one that is theoretically well supported by cake with its
multitude of encapsulation/overhead related keywords. Unfortunately, reality is
not as nice and tidy as this collection of keywords implies. There are 8
keywords for ATM/AAL5-based encapsulations (ADSL, ADSL2, ADSL2+, ...), 2 for
VDSL2, 1 for DOCSIS, and 1 for Ethernet, for a total of 12, all of which can be
combined with one or more VLAN-tag keywords, for a total of 24 to 36
combinations. (And these are not even exhaustive; e.g., the use of DS-Lite can
increase the per-packet overhead for IPv4 packets by another 20 bytes.)
       Ideally one would just empirically measure the effective overhead and use the
"overhead NN mpu NN" keywords instead, but measuring overhead empirically is
simply hard... The best bet would be to leverage BEREC to require ISPs to
explicitly inform their customers of the effective gross rates and applicable
overheads for each link, but I am not holding my breath. Over at
https://openwrt.org/docs/guide-user/network/traffic-shaping/sqm we tried to give
simplified instructions for setting the overheads for different access
technologies, but these are not guaranteed to fit everybody (not even most
users, as we have no numbers on the relative distribution of the different
encapsulation options).
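The "user picks a connection type, we translate it to cake arguments" plan could be sketched roughly as below. The `docsis`, `ethernet`, and ATM values follow what I recall from the tc-cake man page, but treat the whole `LINK_PROFILES` table, the function name, and the profile selection as illustrative assumptions, not authoritative figures; the man page and the SQM wiki remain the reference.

```python
# Sketch: map a user-selected link type to cake shaper arguments.
# Byte values below are believed to match cake's own keyword definitions,
# but should be double-checked against the tc-cake man page.
LINK_PROFILES = {
    # DOCSIS shapes on full ethernet frames with a 64-byte minimum
    "docsis":   "overhead 18 mpu 64",
    # ethernet incl. preamble/IFG, per cake's 'ethernet' keyword
    "ethernet": "overhead 38 mpu 84",
    # worst-case ATM fallback, per cake's 'conservative' keyword
    "adsl":     "overhead 48 atm",
}

def cake_args(link_type: str, bandwidth: str, vlan_tags: int = 0) -> str:
    """Build the argument string for 'tc qdisc ... root cake <args>'."""
    profile = LINK_PROFILES[link_type]
    # each VLAN tag adds 4 bytes; cake's 'ether-vlan' keyword can be repeated
    extra = " ether-vlan" * vlan_tags
    return f"bandwidth {bandwidth} {profile}{extra}"

print(cake_args("docsis", "100Mbit"))
# -> bandwidth 100Mbit overhead 18 mpu 64
```

This only hides the keyword zoo behind a menu; it does nothing about the real problem that the user may not know which menu entry matches their actual encapsulation.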

Best Regards
       "another" Sebastian
As I said, I just started. Let's see if I can find a better solution or a
clever way of auto-detecting/measuring the overhead.
+1. One of my favorite Feynman sayings is "disregard", and we need new
thinking here.

I note that I maintain anywhere between 6 and 16 flent (netperf and irtt)
servers around the world, and they are mostly underused...

Sometimes I've thought that a "right" approach would be to send a 10-second
full UDP burst, each packet pre-timestamped internally, at, say, 100 Mbit...
and then measure "smoothness" at the receiver and at the interface counters
(accounting for any other traffic along the way).
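At the receiver, that burst idea boils down to looking at how evenly the pre-timestamped packets arrive. A minimal sketch of that computation follows; the function name and the "list of (send_ts, recv_ts) pairs" input shape are my assumptions, not anything flent actually exposes.

```python
# Sketch: given (send_ts, recv_ts) pairs for a constant-rate UDP burst,
# quantify "smoothness" as the spread of one-way delay variation.
# Sender and receiver clocks need NOT be synchronized: the constant
# clock offset cancels when delays are compared to the minimum.
def smoothness(samples):
    """samples: list of (send_ts, recv_ts), same unit (e.g. microseconds)."""
    owd = [r - s for s, r in samples]   # one-way delay incl. clock offset
    base = min(owd)                     # offset cancels in the deltas below
    jitter = [d - base for d in owd]    # queueing-induced extra delay
    return max(jitter), sum(jitter) / len(jitter)

# perfectly paced burst, 1000 us apart, constant 50 ms path delay
ideal = [(i * 1000, i * 1000 + 50_000) for i in range(10)]
print(smoothness(ideal))  # -> (0, 0.0)
```

Any cross-traffic on the path inflates the deltas, which is exactly the "accounting for any other traffic" caveat above.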
Not sure 16 servers are enough if we have to handle millions of home routers,
so a dynamic on-the-fly approach, or something more deterministic, would be
cool too. I mean, MTU measurement is simple by testing TCP fragmentation, but
that doesn't help for IPv6, which doesn't allow fragmentation in an easy way.
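One deterministic angle on auto-measurement: if the bottleneck applies a fixed per-packet overhead o on top of a gross rate R, then goodput at packet size s is G(s) = R*s/(s+o), so two goodput measurements at different packet sizes pin down o. A sketch under those idealized, loss-free assumptions (the function name is mine, and real ATM links quantize into 48-byte cells, so the model is only linear for non-ATM encapsulations):

```python
# Sketch: solve for per-packet overhead from goodput measured at two
# packet sizes, assuming goodput(s) = gross_rate * s / (s + overhead).
def estimate_overhead(s1, g1, s2, g2):
    """s1, s2: packet sizes (bytes); g1, g2: measured goodputs at those sizes."""
    # From g1*(s1+o)/s1 == g2*(s2+o)/s2 (both equal the gross rate R),
    # solve for o:
    return s1 * s2 * (g2 - g1) / (g1 * s2 - g2 * s1)

# synthetic check: gross rate 100 Mbit/s, true overhead 20 bytes/packet
R, o = 100.0, 20.0
goodput = lambda s: R * s / (s + o)
print(round(estimate_overhead(100, goodput(100), 1000, goodput(1000)), 6))
# -> 20.0
```

In practice measurement noise and the ATM cell-padding staircase are why, as noted above, empirical overhead measurement is simply hard.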


Sebastian

_______________________________________________
Cake mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cake
