So, a bit of back story on this... It's kind of "my fault" that the AF line has flow control :P
At Performant Networks we decided to do some RFC2544 testing of AirFibers (with Chuck watching via a Skype call). Turns out they didn't like that too much. Once they added flow control, there was much less packet loss and higher throughput.

That said, that is a very demanding test. We fully saturated the radio link, as hard as physically possible, until the radios fell over. These are 750 Mbps radios (AF24) with very small buffers to keep latency low, which means that when you try to push a gigabit of the smallest possible packets through the radio, you are absolutely going to get drops.

On Nov 7, 2016 12:36 PM, "Judd Dare" <[email protected]> wrote:

> Just to follow up on this. I'm working on a network at the moment,
> diagnosing port issues on the Netonix. We had some bad crimps (I'm 6000
> miles away) and we've been seeing some CRC errors and intermittent
> connection loss on AF5x links. Pretty sure it's the Ethernet connections,
> and we're finally getting 1G connections instead of the 100M we've had
> the last few weeks.
>
> Anyway, while in the Netonix, I noticed the UBNT 100M APs are running at
> 100M with flow control negotiated by default. The AF5x's are not flow
> control enabled and are running at GigE. I won't claim this is optimal,
> because I still haven't fine-tuned this network; work in progress.
>
> But there is a thread here where UBNT chimed in and said they suggest flow
> control on AF5x, etc.
>
> I think the need for flow control may vary by switch manufacturer and
> Ethernet chipset, as well as by implementation from the vendor. So YMMV;
> try some FC and see what happens.
>
> https://community.ubnt.com/t5/airFiber/Flow-Control-is-Off/m-p/1157296
>
> On Mon, Nov 7, 2016 at 9:50 AM, Josh Reynolds <[email protected]> wrote:
>
>> In addition, imagine the following scenario:
>>
>> An 8-port switch at a tower, with a Gigabit Ethernet uplink port. The
>> other 7 ports on the switch all have 100 Mbps links to APs.
>>
>> Buffer architecture is a shared-memory design, with say 4 MB available.
>>
>> Do the math. The buffer on that switch is going to fill up very quickly,
>> increasing latency across the board, with tons of retransmissions upstream
>> due to the lost data.
>>
>> Sadly, this is every switch in the WISP space!
>>
>> Switches are needed in this scenario that:
>> (A) are managed,
>> (B) have sufficient per-port buffers,
>> (C) are DSCP/ToS aware, and
>> [optionally]
>> (D) support buffer monitoring via SNMP, including queue drops.
>>
>> On Nov 7, 2016 10:39 AM, "Josh Reynolds" <[email protected]> wrote:
>>
>>> I would agree, but sadly WISP networks are full of 100 Mbps links AND a
>>> ton of variable-bandwidth PtP and PtMP links.
>>>
>>> You will have to buffer to have any kind of meaningful throughput;
>>> otherwise bandwidth-delay product calculations will drive your throughput
>>> into the dirt.
>>>
>>> Buffer BLOAT is bad. Buffering is not inherently bad, and is often
>>> necessary.
>>>
>>> On Nov 7, 2016 10:35 AM, "Fred Goldstein" <[email protected]> wrote:
>>>
>>>> On 11/7/2016 11:05 AM, Josh Reynolds wrote:
>>>>
>>>> Sorry, correction: layer 4. TCP slow start and window sizing.
>>>>
>>>> Allowing L2 to control your drops in a willy-nilly fashion, though, is
>>>> not a good idea... and random "pauses" on your backbone are also a poor
>>>> idea.
>>>>
>>>> The idea is to smooth out the flow end to end; it's the big bursts that
>>>> cause trouble.
>>>>
>>>> I'm of the opinion that WISP networks likely need to move to deep-buffer
>>>> data-center switch designs, simply because of the number of
>>>> variable-speed links.
>>>>
>>>> No, I prefer the opposite. Bufferbloat is bad! The math shows that you
>>>> basically don't need a buffer bigger than 10 packets or so. But with QoS
>>>> class marking, you may need multiple buffers.
>>>>
>>>> On Nov 7, 2016 9:53 AM, "Fred Goldstein" <[email protected]> wrote:
>>>>
>>>>> On 11/7/2016 10:40 AM, Josh Reynolds wrote:
>>>>>
>>>>> Negative, layer-2 flow control is an axe when you need a scalpel.
Turn it off everywhere!
>>>>>
>>>>> Layer 3 has automatic mechanisms to help handle bandwidth saturation,
>>>>> and packet loss is part of that process. Furthermore, proper ToS/DSCP
>>>>> queueing is equally important.
>>>>>
>>>>> Well, technically no, layer 3 has NO mechanisms to deal with capacity.
>>>>> It was a known issue among the Network Working Group members in 1973, and
>>>>> a known issue in 1974 when TCP v1 was written, but the team had turned
>>>>> over by 1978 when TCP/IPv4 came out, and that group forgot about it until
>>>>> 1986 when things fell apart. The temporary, short-term, not-very-good fix
>>>>> was in layer 4 (TCP slow start), and that doesn't even apply to all
>>>>> streaming, though many senders do cooperate. Of course it was "good
>>>>> enough", so 30 years later it is taken as gospel. TCP/IP is the
>>>>> *chabuduo* of protocol stacks.
>>>>>
>>>>> There could be issues with using flow control on the Ethernet port,
>>>>> but really flow control should have been part of every layer. Loss should
>>>>> generally be localized.
>>>>>
>>>>> On Nov 7, 2016 9:36 AM, "Judd Dare" <[email protected]> wrote:
>>>>>
>>>>>> So you're saying, make sure flow control is enabled on the ports?
>>>>>>
>>>>>> On Mon, Nov 7, 2016 at 8:22 AM, Josh Reynolds <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>>> Microbursts causing buffer drops on egress ports to non-10G-capable
>>>>>>> destinations. The switch wants to send data at a rate faster than the
>>>>>>> 1G devices can take it in, so it has to buffer its data on those ports.
>>>>>>> Eventually those buffers fill up, and it tail-drops traffic. TCP flow
>>>>>>> control takes over and eventually slows the transfer rate by reducing
>>>>>>> the window size. It doesn't matter if it's only sending 100M of data;
>>>>>>> it's the RATE at which it is sending the data.
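[Editor's note: the shared-buffer scenario described above (gigabit ingress, seven 100 Mbps egress ports, 4 MB shared buffer) can be checked with quick back-of-the-envelope arithmetic. A minimal sketch; the port counts and buffer size come from the thread, while the assumption of a sustained line-rate burst is illustrative:]

```python
# How long does a sustained 1 Gbps ingress burst take to fill a 4 MB
# shared buffer when total egress capacity is 7 x 100 Mbps?

INGRESS_BPS = 1_000_000_000      # gigabit uplink feeding the switch
EGRESS_BPS = 7 * 100_000_000     # seven 100 Mbps AP-facing ports combined
BUFFER_BYTES = 4 * 1024 * 1024   # 4 MB shared packet buffer

# Excess arrival rate the buffer must absorb (bits/s -> bytes/s)
excess_bytes_per_sec = (INGRESS_BPS - EGRESS_BPS) / 8

# Time until the buffer is full and the switch starts tail-dropping
fill_time_ms = BUFFER_BYTES / excess_bytes_per_sec * 1000
print(f"buffer fills in ~{fill_time_ms:.0f} ms")   # ~112 ms
```

So under a worst-case burst the buffer lasts on the order of a tenth of a second, which is consistent with the "fills up very quickly" claim earlier in the thread.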
>>>>>>>
>>>>>>> On Nov 7, 2016 8:58 AM, "TJ Trout" <[email protected]> wrote:
>>>>>>>
>>>>>>>> I have a 10G switch that is switching everything of mine at my NOC,
>>>>>>>> including peers, router WAN, router LAN, uplink to tower, etc.
>>>>>>>>
>>>>>>>> During peak traffic periods (~2 Gbps) I'm seeing 1% packet loss, and
>>>>>>>> throughput will drop to zero for just a second, resume normal for a
>>>>>>>> few minutes, then drop back to zero for just a second. It doesn't seem
>>>>>>>> to be affecting the WAN side of my router, which connects to peers
>>>>>>>> through the same switch. It doesn't happen during the day with low
>>>>>>>> periods of traffic.
>>>>>>>>
>>>>>>>> I've enabled/disabled STP and flow control.
>>>>>>>>
>>>>>>>> I believe I've isolated it to not be a single port. I possibly have a
>>>>>>>> bad switch, but that seems hard to believe...
>>>>>>>>
>>>>>>>> The port isn't flapping. I'm getting small amounts of FCS errors on
>>>>>>>> receive and lots of length errors, but I think those shouldn't be a
>>>>>>>> problem?
>>>>>>>>
>>>>>>>> It's an IBM G8124 10G switch.
>>>>>>>>
>>>>>>>> Ideas?
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> Wireless mailing list
>>>>>>>> [email protected]
>>>>>>>> http://lists.wispa.org/mailman/listinfo/wireless
>>>>>
>>>>> --
>>>>> Fred R.
>>>>> Goldstein k1io fred "at" interisle.net
>>>>> Interisle Consulting Group
>>>>> +1 617 795 2701
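[Editor's note: the bandwidth-delay product point raised mid-thread ("BDP calculations will drive your throughput into the dirt") can be made concrete. A minimal sketch; the link speed, RTT, and window size below are illustrative assumptions, not figures from the thread:]

```python
# Bandwidth-delay product (BDP): the amount of data that must be in
# flight to keep a link full. If the sender's effective TCP window is
# smaller than the BDP, throughput is capped at window / RTT.

link_bps = 100_000_000    # 100 Mbps backhaul link (assumed)
rtt_s = 0.020             # 20 ms round-trip time (assumed)

bdp_bytes = link_bps / 8 * rtt_s
print(f"BDP: {bdp_bytes / 1024:.0f} KiB")          # 244 KiB

window_bytes = 64 * 1024  # classic 64 KiB window, no window scaling
max_throughput_bps = window_bytes * 8 / rtt_s
print(f"throughput cap: {max_throughput_bps / 1e6:.1f} Mbps")  # 26.2 Mbps
```

With these assumed numbers, an unscaled 64 KiB window caps a single flow at roughly a quarter of the link rate, which is the kind of "throughput into the dirt" effect the thread refers to; it also illustrates why Fred's "10 packets or so" per-hop buffer rule and link-level BDP accounting are arguing about different layers of the same problem.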
