On Sat, Apr 30, 2016 at 10:08 PM, Ben Greear <gree...@candelatech.com> wrote:
>
> On 04/30/2016 08:41 PM, Dave Taht wrote:
>>
>> There were a few things on this thread that went by, and I wasn't on
>> the ath10k list
>>
>> (https://www.mail-archive.com/ath10k@lists.infradead.org/msg04461.html)
>>
>> first up, udp flood...
>>
>>>>> From: ath10k <ath10k-boun...@lists.infradead.org> on behalf of Roman
>>>>> Yeryomin <leroi.li...@gmail.com>
>>>>> Sent: Friday, April 8, 2016 8:14 PM
>>>>> To: ath...@lists.infradead.org
>>>>> Subject: ath10k performance, master branch from 20160407
>>>>>
>>>>> Hello!
>>>>>
>>>>> I've seen that performance patches were committed, so I've decided to
>>>>> give them a try (using a 4.1 kernel and backports).
>>>>> The results are quite disappointing: TCP download (client pov) dropped
>>>>> from 750Mbps to ~550Mbps, and UDP shows completely weird behaviour: if
>>>>> generating 900Mbps it gives 30Mbps max, if generating 300Mbps it gives
>>>>> 250Mbps. Before (latest official backports release from January) I was
>>>>> able to get 900Mbps.
>>>>> Hardware is basically ap152 + qca988x 3x3.
>>>>> When running perf top I see that fq_codel_drop eats a lot of cpu.
>>>>> Here is the output when running an iperf3 UDP test:
>>>>>
>>>>>   45.78%  [kernel]  [k] fq_codel_drop
>>>>>    3.05%  [kernel]  [k] ag71xx_poll
>>>>>    2.18%  [kernel]  [k] skb_release_data
>>>>>    2.01%  [kernel]  [k] r4k_dma_cache_inv
>>
>> The udp flood behavior is not "weird". The test is wrong: it fills the
>> local queue so fast as to dramatically exceed the bandwidth on the link.
>
> It would be nice if you could provide backpressure so that you could
> simply select() on the udp socket and use that to know when you can send
> more frames?
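For concreteness, the select()-style pacing Ben describes would look
roughly like the sketch below (a minimal sketch in C; "blast" and "dst"
are hypothetical names and error handling is trimmed). It only actually
paces the sender if queue pressure propagates back to the socket's send
buffer so that sendto() starts returning EWOULDBLOCK, which is exactly
what is in question here:

/* Cap SO_SNDBUF and block in poll() until the socket is writable. */
#include <errno.h>
#include <poll.h>
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>

static void blast(int fd, const struct sockaddr_in *dst)
{
    char payload[1400] = { 0 };          /* one MTU-ish datagram      */
    int sndbuf = 64 * 1024;              /* small, so we can hit it   */

    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));

    for (;;) {
        ssize_t n = sendto(fd, payload, sizeof(payload), MSG_DONTWAIT,
                           (const struct sockaddr *)dst, sizeof(*dst));
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            struct pollfd pfd = { .fd = fd, .events = POLLOUT };
            poll(&pfd, 1, -1);           /* sleep until writable      */
        }
    }
}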
The qdisc version returns NET_XMIT_CN to the upper layers of the stack
in the case where the dropped packet's flow == the ingress packet's
flow, but that is only after the exhaustive search... I don't know what
effect (if any) that has on udp sockets. Hmm... will look. Eric would
"just know".

That might provide more backpressure in the local scenario. SO_SNDBUF
should interact with this stuff in some sane way...

... but over the wire, from a test driver box elsewhere, no (aside from
ethernet flow control itself, where enabled). In that case, though, you
start with a much lower inbound/outbound performance disparity in the
general case... which can still be quite high...

> Any idea how that works with codel?

Beautifully, for responsive TCP flows: it immediately reduces the
window, without waiting an RTT.

> Thanks,
> Ben
>
> --
> Ben Greear <gree...@candelatech.com>
> Candela Technologies Inc  http://www.candelatech.com

--
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org
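For those following along in the source, the NET_XMIT_CN path described
above sits at the tail of fq_codel's enqueue. A simplified sketch,
paraphrased from net/sched/sch_fq_codel.c of this era (helper names
abbreviated; not verbatim kernel code):

static int fq_codel_enqueue_sketch(struct sk_buff *skb, struct Qdisc *sch)
{
    unsigned int idx = fq_codel_classify_to_flow(skb, sch); /* hash  */

    flow_queue_tail(sch, idx, skb);
    if (++sch->q.qlen <= sch->limit)
        return NET_XMIT_SUCCESS;     /* under the limit: all is well */

    /* Over the limit: scan every flow for the fattest one and drop
     * from its head.  This is the exhaustive search noted above. */
    if (fq_codel_drop_fattest_flow(sch) == idx)
        /* The victim was the flow we just enqueued on, so signal
         * congestion upward; TCP reacts by entering CWR and shrinking
         * its window immediately, without waiting an RTT. */
        return NET_XMIT_CN;

    return NET_XMIT_SUCCESS;         /* some other flow paid instead */
}

That drop-time scan is linear in the number of flows, which is why a
local UDP flood that forces a drop on nearly every enqueue shows
fq_codel_drop at 45% of CPU in Roman's perf output.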