On 2014-08-22, Henning Brauer <hb-open...@ml.bsws.de> wrote:
> * Stuart Henderson <s...@spacehopper.org> [2014-08-22 13:51]:
>> On 2014-08-22, Henning Brauer <hb-open...@ml.bsws.de> wrote:
>> > * Federico Giannici <giann...@neomedia.it> [2014-08-22 09:51]:
>> >> On 08/22/14 08:22, Henning Brauer wrote:
>> >> >* Adam Thompson <athom...@athompso.net> [2014-08-21 19:13]:
>> >> >>Unless I've misunderstood all the emails and reports about this, it
>> >> >>affects low-bandwidth queues, not low-bandwidth interfaces.
>> >> >>In other words, limiting traffic to 50Mbps on a 1Gb link will work
>> >> >>fine, while limiting it to 50kbps on the same link will not.
>> >> >>Yes/no?
>> >> >pretty much.
>> >> I can imagine that it could be rather complicated to give exact
>> >> numbers, but can you give me an idea where the problem comes from,
>> >> and maybe where I can find more info about it?
>> > kinda obvious: the bandwidth measurement and the go/holdoff decision
>> > happen (at most) once per tick. Ticks run at HZ, i.e. 100 per second
>> > with HZ=100. If the NIC can transfer "too much" data within one tick,
>> > the shaping becomes inaccurate; the bigger the gap between interface
>> > speed and desired queue speed, the worse it gets.
>> Any idea why this was so much less of a problem with altq?
>
> it wasn't... the hfsc core was the same, and cbq worked exactly the same
> way too.
>
> People might not have paid as much attention? I dunno.
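
To put rough numbers on Henning's point, here is a back-of-the-envelope
sketch (my own illustration; the figures follow from HZ=100 and the link
speeds quoted above, they are not taken from the pf/altq source):

    /*
     * With HZ=100 the shaper can re-evaluate at most every 10ms, so
     * whatever the NIC can push out within one tick is invisible to
     * the go/holdoff decision. Illustrative figures only.
     */
    #include <stdio.h>

    int
    main(void)
    {
        const double hz = 100.0;            /* ticks per second (HZ=100) */
        const double link_bps = 1e9;        /* 1Gb NIC */
        const double queues[] = { 50e6, 512e3, 50e3 };
        const double frame = 1500.0;        /* full-size Ethernet frame */
        double link_per_tick = link_bps / 8 / hz;

        printf("NIC moves up to %.0f bytes per tick\n", link_per_tick);
        for (int i = 0; i < 3; i++) {
            double budget = queues[i] / 8 / hz; /* queue bytes per tick */
            printf("%9.0f bps queue: %8.1f bytes/tick budget, "
                "one %g-byte frame = %.2f ticks\n",
                queues[i], budget, frame, frame / budget);
        }
        return 0;
    }

A 50Mbps queue still gets ~41 full frames of budget per tick, but at
50kbps a single 1500-byte frame is 24 ticks' worth of budget, so a
once-per-tick decision can't help but overshoot. That matches Adam's
yes/no above.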

If anything I'd expect altq to be less accurate, as IIRC it used
getmicrouptime rather than microuptime... But somehow, my setup with
512K-1Mb queues (pppoe with pppoedev on em0, 100Mb link on a 1Gb NIC)
worked OK with altq at the default HZ.
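
On the getmicrouptime point, a minimal token-bucket sketch (my own
illustration, not the actual altq/hfsc code) of why the clock
granularity matters:

    /*
     * Sketch only: with a tick-cached clock (getmicrouptime(9)-style),
     * now_us advances in 10ms (HZ=100) lumps, so "now - last" is zero
     * for every packet arriving within the same tick and no tokens
     * accrue between them; a fine-grained clock (microuptime(9)-style)
     * refills per packet. Either way the refill can't be finer than
     * the clock.
     */
    #include <stdint.h>

    struct shaper {
        double   rate_Bps;  /* configured queue rate, bytes/sec */
        double   tokens;    /* currently available budget, bytes */
        double   depth;     /* bucket cap, bytes */
        uint64_t last_us;   /* time of last refill, microseconds */
    };

    /* Return 1 if a packet of len bytes may go out at now_us, else 0. */
    int
    shaper_ok(struct shaper *s, uint64_t now_us, unsigned int len)
    {
        s->tokens += s->rate_Bps * (double)(now_us - s->last_us) / 1e6;
        if (s->tokens > s->depth)
            s->tokens = s->depth;
        s->last_us = now_us;

        if (s->tokens < (double)len)
            return 0;   /* hold off until a later refill */
        s->tokens -= len;
        return 1;
    }

With the coarse clock a 512Kbps queue refills in 640-byte lumps every
10ms, less than one full frame, which is why I'd have expected it to be
the less accurate of the two.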
