On Fri, Jun 21, 2013 at 10:53 AM, Henning Rogge <[email protected]> wrote:
> On Fri, Jun 21, 2013 at 7:25 PM, Dave Taht <[email protected]> wrote:
>> On Fri, Jun 21, 2013 at 10:04 AM, Henning Rogge <[email protected]>
>> wrote:
>> At the lowest timescale possible on a given link you are
>> either at capacity 1 or 0.
>
> Same problem for packet-loss based metrics in mesh networks. They are
> awful to measure unless a link has lots of traffic (or you generate
> lots of probing traffic).
>
>> I need to get around to monitoring packet drops better via mrtg in
>> particular and add some more instrumentation to the kernel as to when
>> and why they happen...
>
> The mac80211 driver gives you at least some statistics on how much
> traffic dropped (and on some wifi drivers how many retransmissions
> happened).
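(As an aside: some of those per-station counters are already visible from
userspace on a mac80211 driver. A rough sketch - the interface name is
illustrative, and exactly which fields get populated varies by driver:)

```shell
# Dump per-station tx retry/failure counters from mac80211.
# "wlan0" is an illustrative interface name; which counters are
# actually filled in depends on the wifi driver.
iw dev wlan0 station dump | egrep 'Station|tx retries|tx failed'
```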
I would like to get packet drop statistics at every level in the stack
unified and presentable in a reasonable format. Toke has done some
preliminary work in this area, but much more remains:

https://lists.bufferbloat.net/pipermail/bloat-devel/2013-June/000436.html

Felix has made minstrel stats mildly more accessible inside the kernel as
well, but much work remains there too. The current sysfs method is
cumbersome...

>> And I keep wondering about what traffic on wifi "really" looks like in
>> relation to this wonderful, old, paper, that has embedded itself in
>> many a consciousness...
>
> I would say it looks very chaotic.

I would say it should be measured. Extrapolating from 1980s ethernet
behavior to today is quite a jump. I got a ton of captures from the
gathering a few months ago that may be useful.

One thing I should note for those using the present barrier breaker is
that the default fq_codel quantum of 300 probably has bad scaling
properties for a set of "many" wifi nodes, and that (until we have
per-station queuing and a few other mods in the (sadly unfunded)
pipeline) a different filter should be used to sort on destination mac
address rather than the five-tuple, with a larger quantum (4500?). The
present setting works pretty well on small networks, but...

> It's the bane of every routing metric in a mesh... how to get the
> metric both stable enough to be useful and fast enough to be useful.

I tend to view the underlying problem - getting good statistics - as
crucial to getting to where a metric could work. Certainly minstrel - on
a link that is active - provides a set of potentially very useful passive
measurements (and was recently made the default rate control in linux),
passive measurements that can be made more active if needed. But
certainly RTT and loss aren't bad either, and are more generic and
flexible than tying things to minstrel...

A bad link today - on unicast - can exhibit seconds of latency when you
factor in rates and retries.
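For anyone who wants to experiment, something like the following tc
sketch is roughly what I mean (interface name and values are
illustrative, not a recommendation; note the stock "flow" classifier has
no destination-MAC key, so hashing on the IP destination is only an
approximation of per-station sorting, valid where each station maps to
one address):

```shell
# Sketch only: bump the fq_codel quantum and override fq_codel's
# internal 5-tuple flow classification with an external filter.
# "wlan0", 4500, and 1024 are illustrative values.
tc qdisc replace dev wlan0 root handle 1: fq_codel quantum 4500 flows 1024

# Key flows on the destination instead of the 5-tuple. This uses the
# IP dst key (the stock flow classifier cannot key on the destination
# MAC), so it only approximates per-station queuing.
tc filter add dev wlan0 parent 1: protocol all prio 1 \
    handle 1 flow hash keys dst divisor 1024
```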
A problem is that multicast doesn't really behave like unicast does
anymore in wifi, which is something I hinted at earlier today in another
message.

>
> Henning Rogge
>
> --
> We began as wanderers, and we are wanderers still. We have lingered
> long enough on the shores of the cosmic ocean. We are ready at last to
> set sail for the stars - Carl Sagan

--
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html

_______________________________________________
Babel-users mailing list
[email protected]
http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/babel-users

