You ran out of netem queue to store packets for that RTT.

Use a netem limit of 100000 or higher for that.
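
The back-of-the-envelope math: 10 Gbit/s × 100 ms is a BDP of 125 MB, or
roughly 83,000 packets at ~1500 bytes each, well past netem's default
limit of 1000 packets. A minimal sketch (the device name veth0 is
illustrative):

  # raise netem's packet limit so a full 10 Gbit/s * 100 ms BDP
  # fits in the qdisc instead of being tail-dropped
  tc qdisc replace dev veth0 root netem delay 50ms limit 100000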
On Fri, Jul 6, 2018 at 12:25 PM Pete Heist <[email protected]> wrote:
>
>
> > On Jul 6, 2018, at 3:37 PM, Pete Heist <[email protected]> wrote:
> >
> > I’m also satisfied for now that this shouldn’t hold us up. However, what 
> > I want to try next is netem on a separate veth device from cake. I 
> > believe Dave’s earlier veth testing used three namespaces / veth 
> > devices, maybe for this reason(?)
>
> So under the category of “unsurprising”, with netem as a root qdisc in a 
> separate namespace/veth device from cake, there is no lockup. The attached 
> script creates an environment with 6 namespaces (client, client qdisc, client 
> delay, server delay, server qdisc, server) with veth devices / bridges 
> between. It’s easy to switch the qdisc or netem params, if anyone needs such 
> a thing.
>
> Only, I wonder whether high-BDP links can be emulated with netem. Say I 
> want 10 Gbit/s with a 100 ms RTT. If I use “netem delay 50ms” in each 
> direction, the RTT is correct, but iperf3 throughput drops from 30 Gbit/s 
> to 450 Mbit/s. CPU usage is only a few percent, so that’s not the issue. 
> Fiddling with the TCP window (iperf3 -w) just seems to make it worse. So 
> I’m still figuring out netem, I guess...
>
> Pete
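
For reference, a minimal two-namespace sketch of the “netem on a separate
device from cake” arrangement (not Pete’s attached script; names,
addresses, and rates here are illustrative):

  # two namespaces joined by a veth pair: cake shapes on one end,
  # netem adds delay on the other, so the two never share a device
  ip netns add qdisc
  ip netns add delay
  ip link add veth-q type veth peer name veth-d
  ip link set veth-q netns qdisc
  ip link set veth-d netns delay
  ip netns exec qdisc ip addr add 10.0.0.1/24 dev veth-q
  ip netns exec delay ip addr add 10.0.0.2/24 dev veth-d
  ip netns exec qdisc ip link set dev veth-q up
  ip netns exec delay ip link set dev veth-d up
  ip netns exec qdisc tc qdisc replace dev veth-q root cake bandwidth 10gbit
  # 50 ms here; put another 50 ms on the reverse path for a 100 ms RTT
  ip netns exec delay tc qdisc replace dev veth-d root netem delay 50ms limit 100000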



-- 

Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619