On 2 May 2016 at 21:40, Dave Taht <[email protected]> wrote:
> On Mon, May 2, 2016 at 7:03 AM, Roman Yeryomin <[email protected]> wrote:
>> On 1 May 2016 at 17:47,  <[email protected]> wrote:
>>> Maybe I missed something, but why is it important to optimize for a UDP 
>>> flood?
>>
>> We don't need to optimize specifically for UDP, but UDP is used e.g. by
>> torrents to achieve higher throughput, and it is used a lot in general.
>
> Torrents use uTP congestion control and won't hit this function at
> all. And eric just made fq_codel_drop more efficient for tests that
> do.
>
> There are potentially zillions of other issues with ampdus, txop
> usage, aggregate "packing", etc. that can also affect these and other
> protocols.
>
>> And, again, in this case TCP is broken too (750Mbps down to 550), so
>> it's not just that the UDP test is broken, as Dave is saying; fq_codel
>> is simply too hungry for CPU.
>
> "fq_codel_drop" was too hungry for cpu. fixed. thx eric. :)
>
> I've never seen ath10k tcp throughput in the real world (i.e. not wired
> up, but over the air) even close to 750 in my own ath10k testing (I've
> seen 300, and I'm getting some better gear up this week)... and
> everybody tests wifi differently.

perhaps you didn't have a 3x3 client and AP?

> (for the record, what was your iperf tcp test line?). More people
> testing differently = good.

iperf3 -c <server_ip> -t600
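
(the UDP tests were run with iperf3 as well; something like

iperf3 -c <server_ip> -u -b 900M -t600

with the -b offered load varied between 300M and 900M, if I remember the
exact flags correctly)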

> Did fq_codel_drop show up in the perf trace for the tcp test?

yes, but it was less hungry, around 15-20% if I remember correctly

> (More likely you would have seen timestamping rise significantly for
> the tcp test, as well as enqueue time)
>
> That said, more people testing the same ways, good too.
>
> I'd love it if you could re-run your test via flent, rather than
> iperf, and look at the tcp sawtooth or lack thereof, and the overall
> curve of the throughput, before and after this set of commits.

I guess I should try flent, but the performance drop was already evident
enough even with iperf
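
When I do, something along the lines of

flent tcp_download -H <server_ip> -l 600 -t "ath10k-master-20160407"

(or the rrul test for a mixed load) should be roughly equivalent to the
iperf3 line above, if I'm reading the flent docs right; the -t title
string is just an arbitrary label for the plots.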

> Flent can be made to run on osx via macports or brew (it's much easier
> to get running on linux). And could you try to tag along on
> observing/fixing low wifi rate behavior?
>
> This was the more recent dql vs wifi test:
>
> http://blog.cerowrt.org/post/dql_on_wifi_2/
>
> and series.
>
>>> A general observation of control theory is that there is almost always an 
>>> adversarial strategy that will destroy any control regime. Sometimes one 
>>> has to invoke an "oracle" that knows the state of the control system at all 
>>> times to get there.
>>>
>>> So a handwave is that *there is always a DDoS that will work* no matter how 
>>> clever you are.
>>>
>>> And the corollary is illustrated by the TSA. If you can't anticipate all 
>>> possible attacks, it is not clearly better to just congest the whole system 
>>> at all times with controls that can't possibly solve all possible attacks - 
>>> i.e. Security Theater. We don't want "anti-DDoS theater", I don't think.
>>>
>>> There is an alternative mechanism that has been effective at dealing with 
>>> DDoS in general - track the disruption back to the source and kill it.  
>>> (this is what the end-to-end argument would be: don't try to solve a 
>>> fundamentally end-to-end problem, DDoS, solely in the network [switches], 
>>> since you have to solve it at the edges anyway. Just include in the network 
>>> things that will help you solve it at the edges - traceback tools that work 
>>> fast and targeted shutdown of sources).
>>>
>>> I don't happen to know of a "normal" application that benefits from UDP 
>>> flooding - not even "gossip protocols" do that!
>>>
>>> In context, then, let's not focus on UDP flood performance (or any other 
>>> "extreme case" that just seems fun to work on in a research paper because 
>>> it is easy to state compared to the real world) too much.
>>>
>>> I know that the reaction to this post will be to read it and pretty much go 
>>> on as usual focusing on UDP floods. But I have to try. There are so many 
>>> more important issues (like understanding how to use congestion signalling 
>>> in gossip protocols, gaming, or live AV conferencing better, as some 
>>> related examples, which are end-to-end problems for which queue management 
>>> and congestion signalling are truly crucial).
>>>
>>>
>>>
>>> On Sunday, May 1, 2016 1:23am, "Dave Taht" <[email protected]> said:
>>>
>>>> On Sat, Apr 30, 2016 at 10:08 PM, Ben Greear <[email protected]> 
>>>> wrote:
>>>>>
>>>>>
>>>>> On 04/30/2016 08:41 PM, Dave Taht wrote:
>>>>>>
>>>>>> There were a few things on this thread that went by, and I wasn't on
>>>>>> the ath10k list
>>>>>>
>>>>>> (https://www.mail-archive.com/[email protected]/msg04461.html)
>>>>>>
>>>>>> first up, udp flood...
>>>>>>
>>>>>>>>> From: ath10k <[email protected]> on behalf of Roman
>>>>>>>>> Yeryomin <[email protected]>
>>>>>>>>> Sent: Friday, April 8, 2016 8:14 PM
>>>>>>>>> To: [email protected]
>>>>>>>>> Subject: ath10k performance, master branch from 20160407
>>>>>>>>>
>>>>>>>>> Hello!
>>>>>>>>>
>>>>>>>>> I've seen that performance patches were committed, so I've decided to
>>>>>>>>> give it a try (using a 4.1 kernel and backports).
>>>>>>>>> The results are quite disappointing: TCP download (client pov) dropped
>>>>>>>>> from 750Mbps to ~550, and UDP shows completely weird behaviour: if
>>>>>>>>> generating 900Mbps it gives 30Mbps max; if generating 300Mbps it gives
>>>>>>>>> 250Mbps. Before (with the latest official backports release from
>>>>>>>>> January) I was able to get 900Mbps.
>>>>>>>>> Hardware is basically ap152 + qca988x 3x3.
>>>>>>>>> When running perf top I see that fq_codel_drop eats a lot of cpu.
>>>>>>>>> Here is the output when running iperf3 UDP test:
>>>>>>>>>
>>>>>>>>>      45.78%  [kernel]       [k] fq_codel_drop
>>>>>>>>>       3.05%  [kernel]       [k] ag71xx_poll
>>>>>>>>>       2.18%  [kernel]       [k] skb_release_data
>>>>>>>>>       2.01%  [kernel]       [k] r4k_dma_cache_inv
>>>>>>
>>>>>>
>>>>>> The udp flood behavior is not "weird". The test is wrong: it offers so
>>>>>> much load that it dramatically exceeds the bandwidth of the link and
>>>>>> simply fills up the local queue.
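>>>>>>
>>>>>> (One way to see this from the sending side: pace the UDP sender well
>>>>>> below the achievable air rate, e.g. something like
>>>>>>
>>>>>>     iperf3 -c <server_ip> -u -b 300M -t 600
>>>>>>
>>>>>> and most packets never hit the overlimit drop path at all, which fits
>>>>>> with the report above getting ~250Mbps at a 300Mbps offered load,
>>>>>> versus 30Mbps at 900Mbps. The -u/-b flags here are just an
>>>>>> illustration, not the exact line used in those tests.)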
>>>>>
>>>>>
>>>>> It would be nice if you could provide backpressure so that you could
>>>>> simply select on the udp socket and use that to know when you can send
>>>>> more frames??
>>>>
>>>> The qdisc version returns NET_XMIT_CN to the upper layers of the stack
>>>> in the case where the dropped packet's flow = the ingress packet's
>>>> flow, but that is after the exhaustive search...
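>>>>
>>>> Roughly, that overlimit path behaves like the toy sketch below (this is
>>>> a self-contained illustration of the rule just described, not the
>>>> actual sch_fq_codel.c code; all names and numbers here are made up):
>>>>
>>>> #include <stdio.h>
>>>>
>>>> #define NET_XMIT_SUCCESS 0
>>>> #define NET_XMIT_CN      1
>>>> #define NFLOWS           4
>>>>
>>>> static int backlog[NFLOWS];     /* packets queued per flow */
>>>> static int qlen, limit = 8;     /* total queue length and its cap */
>>>>
>>>> /* The "exhaustive search": find the flow with the biggest backlog. */
>>>> static int fattest_flow(void)
>>>> {
>>>>         int i, fat = 0;
>>>>         for (i = 1; i < NFLOWS; i++)
>>>>                 if (backlog[i] > backlog[fat])
>>>>                         fat = i;
>>>>         return fat;
>>>> }
>>>>
>>>> static int enqueue(int flow)
>>>> {
>>>>         int victim;
>>>>
>>>>         backlog[flow]++;
>>>>         if (++qlen <= limit)
>>>>                 return NET_XMIT_SUCCESS;
>>>>
>>>>         /* Over limit: drop from the fattest flow, and signal
>>>>          * congestion to the caller only if that was its own flow. */
>>>>         victim = fattest_flow();
>>>>         backlog[victim]--;
>>>>         qlen--;
>>>>         return (victim == flow) ? NET_XMIT_CN : NET_XMIT_SUCCESS;
>>>> }
>>>>
>>>> int main(void)
>>>> {
>>>>         int i;
>>>>
>>>>         /* Flow 0 floods: once the queue is full it starts seeing CN. */
>>>>         for (i = 0; i < 12; i++)
>>>>                 printf("flow 0 -> %s\n",
>>>>                        enqueue(0) == NET_XMIT_CN ? "CN" : "ok");
>>>>
>>>>         /* Light flows: the drop lands on the fat flow, so they never
>>>>          * see CN themselves. */
>>>>         for (i = 1; i < NFLOWS; i++)
>>>>                 printf("flow %d -> %s\n", i,
>>>>                        enqueue(i) == NET_XMIT_CN ? "CN" : "ok");
>>>>         return 0;
>>>> }
>>>>
>>>> i.e. the flooding flow is the one that gets told about the congestion,
>>>> while the light flows keep getting NET_XMIT_SUCCESS.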
>>>>
>>>> I don't know what effect (if any) that had on udp sockets. Hmm... will
>>>> look. Eric would "just know".
>>>>
>>>> That might provide more backpressure in the local scenario. SO_SNDBUF
>>>> should interact with this stuff in some sane way...
>>>>
>>>> ... but over the wire, from a test driver box elsewhere, though, aside
>>>> from ethernet flow control itself (where enabled), no.
>>>>
>>>> ... but in that case you have a much lower inbound/outbound
>>>> performance disparity in the general case to start with... which can
>>>> still be quite high...
>>>>
>>>>>
>>>>> Any idea how that works with codel?
>>>>
>>>> Beautifully.
>>>>
>>>> For responsive TCP flows, it immediately reduces the window without
>>>> waiting a full RTT.
>>>>
>>>>> Thanks,
>>>>> Ben
>>>>>
>>>>> --
>>>>> Ben Greear <[email protected]>
>>>>> Candela Technologies Inc  http://www.candelatech.com
>>>>
>>>>
>>>>
>>>> --
>>>> Dave Täht
>>>> Let's go make home routers and wifi faster! With better software!
>>>> http://blog.cerowrt.org
>
>
>
> --
> Dave Täht
> Let's go make home routers and wifi faster! With better software!
> http://blog.cerowrt.org