On Fri, Jun 26, 2015 at 9:18 AM, Jonathan Morton <chromati...@gmail.com> wrote:
> Hypothesis: this might have to do with the receive path. Some devices might
> have more capacity than others to buffer inbound packets until the CPU can
> get around to servicing them.

*Good* hypothesis. I'm certain I've seen this on multiple occasions
on other hardware, but it's hard to confirm.

Wet paint... so I finally got off my arse and looked at the driver this morning.

Given that this is a multi-core box, I would lean towards a smaller
napi_poll_weight, which is unfortunately a constant (64) in the code.
Four cores can take interrupts faster. (And I hate NAPI on routers.)
I have sometimes longed for an "IQL" (ingress queue limits) mechanism
that would also account for differences in packet size, dynamically
changing the poll weight based on load: increasing it for loads with
lots of small packets, decreasing it for lots of big packets.

Furthermore, this thing is doing software GRO (coalescing up to 64
packets at a time), which is a LOT of processing at this layer of the
stack.

It's a two-line patch to cut the weight to 16, but I have never managed
to get a working build for this platform.
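For anyone who does have a working toolchain: I haven't checked this particular driver's call site, but in most NAPI drivers of this era the weight is simply the last argument to netif_napi_add(), so the patch would look something like this (file, struct, and poll-function names are placeholders):

```diff
-	netif_napi_add(dev, &priv->napi, foo_poll, 64);
+	netif_napi_add(dev, &priv->napi, foo_poll, 16);
```

Some drivers instead define the weight as a NAPI_POLL_WEIGHT-style macro at the top of the file, in which case it's a one-line change there.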

> - Jonathan Morton
>
>
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>



-- 
Dave Täht
worldwide bufferbloat report:
http://www.dslreports.com/speedtest/results/bufferbloat
And:
What will it take to vastly improve wifi for everyone?
https://plus.google.com/u/0/explore/makewififast
