Hi John,
On April 8, 2016 3:55:21 PM GMT+02:00, John Yates <[email protected]> wrote:
>Sebastian,
>
>Recently you wrote:
>
>> In your case select Ethernet with overhead, and manually put 24 into
>> the packet overhead field, as the kernel already accounted for 14 of
>> the total of 38.
>>
>
>and further down:
>
>
>> I assume in your case this will not change your results a lot, as you
>> most likely used full-MTU packets in the test flows; I believe your low
>> latency-under-load increases show that you do not have much bufferbloat
>> left... I would still recommend using the correct per-packet overhead,
>> to be on the safe side...
>>
>
>As a layman hoping to use some of this great technology on my
>forthcoming Turris Omnia boxes, how am I supposed to derive these
>numbers? Is there a simple series of questions and answers that could
>figure them out?
Well, yes and no. Yes, there is an underlying implicit decision tree
that could be turned into a series of questions allowing one to deduce the
required per-packet overhead. And no, as far as I know there is no explicit
version of this.
In the end it really is not as complicated as it may seem:
0) decide whether you really need an explicit traffic shaper at all.
If not, you are done; else:
1) research what kind of link you want to shape for, as different link
technologies have different overheads.
Typically you should ask yourself where the usual bottleneck link of your
internet connection is located; shaping works best for links that have
fixed parameters. Good examples of technologies and links to shape for are
Ethernet for an active fiber link (FTTH), ATM for old ADSL links, or PTM
for VDSL2 (FTTC); the small sketch after this list illustrates the
per-packet arithmetic for all three:
A) Ethernet. This is the easiest: just figure out what is used on the
wire. Typically that is the overhead described in my mail to Richard,
with the potential addition of up to two VLAN tags (4 bytes each).
B) Almost-Ethernet. Some technologies carry Ethernet frames and so are
similar to Ethernet, but do not use all components of an Ethernet frame;
e.g. VDSL2's PTM uses much of an Ethernet header but no preamble or IFG,
while adding a few specific options of its own.
C) ATM. This is both complicated and 'solved' at the same time; see
https://github.com/moeller0/ATM_overhead_detector for references. Note that
this not only affects the per-packet overhead but also the link-rate to
payload-rate calculation, due to the quantized 48/53 byte cell encoding.
D) PTM. Similar to Ethernet, except that it introduces a few potential
overhead items that are not shared with Ethernet, and it also affects the
payload-rate calculation due to its 64/65 octet encoding.
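To make the arithmetic behind A), C) and D) concrete, here is a small
illustrative Python sketch (my own, not taken from any SQM script; the
function names are made up and the constants are the commonly cited
values, so verify them against your actual link):

    import math

    ETHERNET_OVERHEAD = 8 + 12 + 2 + 4 + 12  # preamble+SFD, 2 MACs, ethertype, FCS, IFG = 38 bytes
    VLAN_TAG = 4                             # one 802.1Q tag; up to two may be present

    def ethernet_wire_size(ip_len, vlan_tags=0):
        # bytes a packet of size ip_len occupies on an Ethernet wire
        return ip_len + ETHERNET_OVERHEAD + vlan_tags * VLAN_TAG

    def atm_wire_size(ip_len, per_packet_overhead):
        # AAL5 packs the packet plus overhead into 48-byte cell payloads;
        # every started cell costs 53 bytes on the wire (the 48/53 quantization)
        cells = math.ceil((ip_len + per_packet_overhead) / 48)
        return cells * 53

    def ptm_payload_rate(sync_rate):
        # PTM's 64/65 encoding: only 64 of every 65 octets carry payload
        return sync_rate * 64.0 / 65.0

    print(ethernet_wire_size(1500))  # 1538 bytes for a full-MTU packet
    print(atm_wire_size(1500, 40))   # 33 cells -> 1749 bytes (the 40 is just an example overhead)
    print(ptm_payload_rate(50e6))    # ~49.23 Mbit/s usable out of a 50 Mbit/s sync rate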
So ideally, at this point you know what true rate your bottleneck link allows
and what overhead is added to each packet. Now the only step left is to figure
out how much of that overhead the kernel already accounts for in each packet's
size, and to reduce the configured per-packet overhead by that amount.
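Using the plain-Ethernet numbers from my mail to Richard as a worked
example (again only a sketch; it assumes the shaper sits on an Ethernet
interface, where the kernel already counts the 14 byte MAC header in
each packet's size):

    true_overhead = 8 + 12 + 2 + 4 + 12      # full on-wire overhead: 38 bytes
    kernel_accounted = 14                    # dst MAC + src MAC + ethertype
    print(true_overhead - kernel_accounted)  # 24, the value for the overhead field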
The devil, as so often, is in the details ;)
>Could
>those be turned into some kind of "wizard"?
I am pretty sure this could be turned into a series of questions, or
rather a sequential recipe, but it will stay layman-unfriendly (with the
potential exception of ATM) as it requires intermediate-level technical
knowledge about the behaviour of links outside the user's direct control.
Typically your ISP would know all this, but there does not seem to be a
universal way to query an ISP for the required information.
>Otherwise how do you
>expect
>large numbers of users to get their systems properly configured?
Honestly, I believe the only real hope for the masses would be a
universal adoption of BQL (byte queue limits) in all CPE and CMTSs/DSLAMs...
That, or users doing a bit of research themselves.
>Surely
>you do not want to engage in an email thread with each user who even
>attempts such configuration :-)
I agree; I believe the gist of a few such discussions should be
turned into an FAQ...
Best Regards
Sebastian
>
>/john
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
_______________________________________________
Cerowrt-devel mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cerowrt-devel