On Tue, Aug 30, 2022 at 9:53 AM David P. Reed via Starlink <[email protected]> wrote:
> First, it's worth looking at all the problems currently in WiFi
> performance when you share an AP with multiple active stations using
> 100's of Gb/s on the average (not just occasionally).
>
> Dave - you tried in "make-wifi-fast", and the architecture gets in the
> way there. (yeah you can get point to point gigabit/sec single file
> transfers, but to do that you invoke features that destroy latency and
> introduce huge variability if you share the AP at all, for these
> reasons).

Didn't "try", did. :)

https://forum.openwrt.org/t/aql-and-the-ath10k-is-lovely/59002/830

Also managed the "high variability" problem to a large extent, being
able to 'slide' servicing sparse stations into that budget. ... If you
consider being able to effectively multiplex 4 stations at full load in
both directions with under 30ms of queuing "good".

Many APs to this day, including enterprise ones, behave a lot more like
our initial results on that link above (if y'all scroll back), but at
least test houses like Candelatech have TCP test suites for
multi-station behaviors now and are feeding those back to the vendors.

It's very possible to do 100x better than that (call it 300us) in WiFi
with proper hardware support. WiFi 7 holds the promise of multiple
stations being able to transmit on their own subchannel, which I will
love if they can make it work, but it has many flaws, like "sounding".

> Starlink is a good "last resort" service as constituted. But fiber and
> last few-hundred meters wireless is SO much better able to deliver
> good Internet service scalably.

Starlink just has to be better than old DSL to succeed. It is, except
it's unusable for fps gaming.

> Even that assumes fixing the bufferbloat that the Starlink folks don't
> seem to be able to address...

It's been better lately on uploads. At the lowest tier of service
("idle", ~2mbit) it's rather sane. Only when it gets "full" bandwidth
from the controller does it get past 150ms now.
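The reason sparse stations can be 'slid' into that queuing budget is per-flow round robin (the fq half of fq_codel): a sparse flow's packet waits behind at most one packet per competing flow, not behind everyone's whole backlog. A toy back-of-envelope sketch (all numbers are invented for illustration, not measurements from the AP above):

```python
# Toy model: a link serving one 1500-byte packet per "slot", four bulk
# flows each keeping 25 packets queued, one sparse flow enqueuing a
# single packet. SLOT_US is a made-up per-packet transmit time.

SLOT_US = 120  # ~1500B at ~100 Mbit/s, roughly

def fifo_delay(bulk_flows, backlog_per_flow):
    # Single shared tail-drop queue: the sparse packet waits behind
    # every other flow's entire backlog.
    return bulk_flows * backlog_per_flow * SLOT_US

def drr_delay(bulk_flows):
    # Per-flow round robin with a one-packet quantum (fq_codel-style):
    # the sparse packet waits behind at most one packet per competitor.
    return bulk_flows * SLOT_US

if __name__ == "__main__":
    print("FIFO queuing delay: %d us" % fifo_delay(4, 25))
    print("DRR  queuing delay: %d us" % drr_delay(4))
```

With these invented numbers the FIFO case is 12ms of queuing against 480us for round robin; fq_codel additionally gives brand-new flows a turn ahead of the old ones, which helps sparse stations even more.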
My guess is it's a tail-drop per-packet queue, dynamically controlled.
Not cake, no fq to be seen, way too much queuing in one direction or
another at the higher rates. I basically shaped it to 6mbits up to
avoid that behavior, and now I only notice a sat switch when it messes
with my mosh or videoconference sessions. Done that way, it's been a
lot better than cell was.

Some other data: you can always get a small flow through at 20ms
intervals nowadays; however, if you attempt to send stuff at 10ms or
5ms intervals, I've seen as high as 14% packet drop. I do not
understand how that correlates back to service intervals or their
uplink *at all*.

--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC

_______________________________________________
Starlink mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/starlink
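P.S. For anyone who wants to reproduce the interval-loss numbers above, here is a crude self-contained sketch of that kind of probe: fire small datagrams at a fixed interval, count how many come back, report the drop fraction. The echo responder here runs on localhost purely so the sketch is runnable; against a real link you'd point it at a remote UDP echo service, and the interval/timeout values are arbitrary, not anything Starlink-specific.

```python
import socket
import threading
import time

def udp_echo_server(sock):
    # Echo every datagram back to its sender until the socket closes.
    try:
        while True:
            data, addr = sock.recvfrom(2048)
            sock.sendto(data, addr)
    except OSError:
        pass  # socket closed, stop serving

def probe_loss(addr, count=100, interval=0.02, timeout=0.25):
    """Send `count` small datagrams `interval` seconds apart and return
    the fraction that never came back within `timeout`."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    got = 0
    for seq in range(count):
        s.sendto(seq.to_bytes(4, "big"), addr)
        try:
            s.recvfrom(2048)
            got += 1
        except socket.timeout:
            pass  # counted as dropped
        time.sleep(interval)
    s.close()
    return 1.0 - got / count

if __name__ == "__main__":
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))  # stand-in echo responder
    threading.Thread(target=udp_echo_server, args=(srv,), daemon=True).start()
    loss = probe_loss(srv.getsockname(), count=50, interval=0.01)
    print("drop rate: %.1f%%" % (loss * 100))
    srv.close()
```

Over loopback the drop rate is of course ~0%; the interesting part is sweeping `interval` from 0.02 down to 0.005 against a real path and watching where loss appears.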
