Re: BGP Router Hardware Suggestions
On 7/3/23 12:59, Rachel Roth wrote:
> For the record, "API not working" is not exclusively about mediaopt
> settings. "API not working" also kills SFP DOM stats, something which is
> quite useful when troubleshooting with third-parties on the other side
> of your fibre link. When someone on the other side of the phone asks
> for "light levels", its rather nice to be able to give them an answer ;-)

Yeah, I understand. That is why my first e-mail recommended a copper-based solution only if one insists on using the most recent firmware. Then, upon learning that autonegotiation is only a thing for Ethernet over twisted pair, I revised the recommendation more conservatively to include only SFP+ DAC solutions, since I would not be surprised if something like the X710-T4L network interface adapter does not work on the most recent firmware. With something like the X710-DA2 (or, more relevantly for the OP, the X710-DA4) it definitely works, and "light levels" are obviously not a thing. As a result, one might be able to get 10 Gbps if one is willing to use a different controller (e.g., X520), use an older firmware for the X710, or use an X710 SFP+ DAC adapter, in addition to using "amd fast, not intel fast" CPU cores.

Of course, as mentioned and likely known, SFP+ DAC may not be a possibility if EMI is an issue, the runs are very long, or fiber is forced upon you by the other end. If not, though, enjoy the cheaper, equally performant, and less power-hungry passive SFP+ DAC solution.
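For reference, the DOM stats in question are what ifconfig's "sff" subcommand prints on OpenBSD; a rough sketch (the interface name ixl0 is just an example, substitute your own):

```shell
# Dump SFP/SFP+ transceiver data, including DOM optical power
# ("light levels") on fibre modules. This is the data that goes
# missing on ixl(4) when the NIC firmware API is too new.
ifconfig ixl0 sff

# On a passive DAC cable the module typically reports no optical
# diagnostics, so there are no light levels to read in the first place.
```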
Re: BGP Router Hardware Suggestions
2 Jul 2023, 22:58 by z...@philomathiclife.com:
> As a result, there is not much to "negotiate" anyway. In summary if
> 10GSFP+Cu is acceptable, then you shouldn't worry about the API not
> working on OpenBSD.

For the record, "API not working" is not exclusively about mediaopt settings. "API not working" also kills SFP DOM stats, something which is quite useful when troubleshooting with third parties on the other side of your fibre link. When someone on the other end of the phone asks for "light levels", it's rather nice to be able to give them an answer ;-)
Re: BGP Router Hardware Suggestions
On 7/1/23 18:26, Zack Newman wrote:
> As Rachel pointed out, OpenBSD 7.3 does not work with the API of that
> NIC when the newest firmware is flashed. Not sure what the most recent
> version of firmware that has a working API is, but it is not a problem
> for me since autonegotiation works just fine. If you don't require
> very long runs and EMI is not an issue, then you should be fine with
> the copper solution.

Ah, I was unaware that autonegotiation was only a thing for Ethernet over twisted pair; so my setup did not rely on autonegotiation, which makes it even less surprising that my setup works. 10 Gigabit Ethernet only works in full duplex, and SFP+ DAC Twinax cables only work at one speed (i.e., 10 Gbps). As a result, there is not much to "negotiate" anyway. In summary, if 10GSFP+Cu is acceptable, then you shouldn't worry about the API not working on OpenBSD.
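Since there is nothing to negotiate, the media type can simply be pinned by hand; a minimal sketch, assuming an ixl(4) interface named ixl0 (the name is hypothetical):

```shell
# Set the media type at runtime; to persist it, put the same
# "media 10GSFP+Cu" line in /etc/hostname.ixl0.
ifconfig ixl0 media 10GSFP+Cu

# Verify: the "media:" line should now report 10GSFP+Cu full-duplex.
ifconfig ixl0
```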
Re: BGP Router Hardware Suggestions
I don't have any 10 Gbps NICs, so I cannot comment on that level of throughput. I do have a couple of 2.5 Gbps machines, and my system saturates them with ease. No way to know if there are 7.5 Gbps more I could get out of it without actually testing.

Motherboard: Supermicro X13SAE flashed with the newest BIOS.
CPU: Intel i5-13600K with iGPU underclocked.
RAM: 2 x 16 GiB DDR5-4400 unbuffered ECC modules.
Network interface adapter: Intel X710-DA2 flashed with the newest firmware.
Switch: Juniper EX2300-24MP

Server and switch are connected via a dual-compatible SFP+ DAC Twinax cable from FS.com. The network interface adapter is a genuine Intel. It is _not_ an OEM one; I didn't want to deal with any headaches when it comes to flashing firmware.

I disabled SMT as well as the efficiency cores on the CPU. I tried to reduce the use of the integrated GPU as much as I could: no inteldrm, and the machine is only connected via a serial console. The reason for that CPU is that the equivalent CPU without an iGPU was not officially listed as ECC capable, so I played it safe and got the iGPU version. That CPU only has 6 performance cores though; and as one of Stuart's links showed, this means I am only using 4 queues instead of maxing out the card at 8. Had I known this, I would have gotten the CPU one level up, which has 8 performance cores.

My machine does a lot more than just routing and firewalling though. It runs a web server, git repos, e-mail, DNS, an authoritative nameserver, and VPN servers, just to name a few things. Despite all that, it handles 2.5 Gbps no problem. I haven't done any form of tuning (e.g., using MTUs larger than 1500) either.

As Rachel pointed out, OpenBSD 7.3 does not work with the API of that NIC when the newest firmware is flashed. Not sure what the most recent version of firmware that has a working API is, but it is not a problem for me since autonegotiation works just fine.
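For anyone replicating this: the SMT part can be done from userland on OpenBSD (the efficiency cores themselves have to be disabled in the BIOS); a sketch, with the interface name ixl0 as an example:

```shell
# Disable SMT now, and persist the setting across reboots.
sysctl hw.smt=0
echo 'hw.smt=0' >> /etc/sysctl.conf

# Check how many queues the NIC is actually using: each queue shows
# up as its own interrupt source (e.g. ixl0:0 .. ixl0:3 for 4 queues).
vmstat -i | grep ixl0
```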
If you don't require very long runs and EMI is not an issue, then you should be fine with the copper solution.
Re: BGP Router Hardware Suggestions
On 2023-06-29, Lyndon Nerenberg (VE7TFX/VE6BBM) wrote:
> We are about to discover the joys of upstream BGP routing :-P The
> current plan is to use a pair of OpenBSD+bgpd hosts as the routers.
>
> Each host will require 4x10gig ports (SFP+). One of those links
> (to AWS) will be close to saturated, along with the downlink to our
> switches. The other two will only need to carry ~1Gb/s of traffic.
>
> We are pretty much a Supermicro shop, and I'm wondering if anyone
> out there is running a similar setup on SM hardware. My main concern
> is finding NICs that will let us squeeze every last drop of bandwidth
> on the 10gig links.

I don't need full 10G and haven't benchmarked anything recently, but Hrvoje has done a lot of testing in this area; see comments at https://marc.info/?l=openbsd-misc&m=167665861931266&w=2

For servers, look at the AMD boards, e.g. M11SDV-based systems like https://www.supermicro.com/Aplus/system/Embedded/AS-5019D-FTN4.cfm

Sadly Supermicro seem to have stopped doing boards with 4x fibre module slots, so you'll be stuck with needing PCIe NICs for the newer boards. (Newer Xeon D boards have 2x SFP28 plus copper; networking on their AMD boards tends to be copper only.) I would probably favour ix(4), i.e. X520 (for one thing, firmware is less of a moving target..)

> I did run some brief ttcp tests on a pair of SM 1Us (don't have the
> model number handy, maybe 5018-FTN4s?) with add-in Intel cards
> (550s?) and was able to get 700 MBytes/s of throughput. This would
> have been circa the 6.7 or 6.8 releases.

A lot has changed since then. See some stats over time at http://bluhm.genua.de/perform/results/perform.html (especially the forwarding tests).

Don't test packet generation on the box itself if you care about forwarding. Generate packets elsewhere and pass them through the device under test.
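As a concrete sketch of that last point: put a generator host on each side of the device under test (host names below are placeholders) and measure with tcpbench(1) from the OpenBSD base system, so the traffic is forwarded through the router rather than generated on it:

```shell
# On the sink host (hostB), on the far side of the router under test:
tcpbench -s

# On the source host (hostA), push traffic through the router for
# 30 seconds over 4 parallel connections:
tcpbench -n 4 -t 30 hostB
```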
Re: BGP Router Hardware Suggestions
29 Jun 2023, 23:57 by lyn...@orthanc.ca:
> We are about to discover the joys of upstream BGP routing :-P The
> current plan is to use a pair of OpenBSD+bgpd hosts as the routers.
>
> Each host will require 4x10gig ports (SFP+). One of those links
> (to AWS) will be close to saturated, along with the downlink to our
> switches. The other two will only need to carry ~1Gb/s of traffic.
>
> We are pretty much a Supermicro shop, and I'm wondering if anyone
> out there is running a similar setup on SM hardware. My main concern
> is finding NICs that will let us squeeze every last drop of bandwidth
> on the 10gig links.

I have had excellent luck at 1G using HotLava (https://www.hotlavasystems.com/) cards, which are Intel based but a custom design. I am currently working on upgrading parts of my network to 10G, also using HotLava, but sadly the OpenBSD devs need to update the ixl driver to match the newer Intel API, because at the moment stuff like "ifconfig ixl sff" returns nothing when it should return transceiver DOM data.
BGP Router Hardware Suggestions
We are about to discover the joys of upstream BGP routing :-P The current plan is to use a pair of OpenBSD+bgpd hosts as the routers.

Each host will require 4x10gig ports (SFP+). One of those links (to AWS) will be close to saturated, along with the downlink to our switches. The other two will only need to carry ~1Gb/s of traffic.

We are pretty much a Supermicro shop, and I'm wondering if anyone out there is running a similar setup on SM hardware. My main concern is finding NICs that will let us squeeze every last drop of bandwidth on the 10gig links.

I did run some brief ttcp tests on a pair of SM 1Us (don't have the model number handy, maybe 5018-FTN4s?) with add-in Intel cards (550s?) and was able to get 700 MBytes/s of throughput. This would have been circa the 6.7 or 6.8 releases.

I'm hoping to get >70% of the theoretical bandwidth out of the new hardware, and my gut says it's the NIC that's constraining us. So, I'd be interested in hearing from anyone running a similar setup, or who has benchmarked any of the current crop of 10gig NICs and has good/bad things to say about specific models.

Thanks,

--lyndon
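To put those numbers side by side (my arithmetic, assuming decimal megabytes): 700 MBytes/s is roughly 5.6 Gbit/s, against a >70% target of 7 Gbit/s on a 10G link:

```shell
# Old ttcp result, converted from MBytes/s to Gbit/s.
awk 'BEGIN { printf "%.1f Gbit/s\n", 700 * 8 / 1000 }'

# The stated target: 70% of 10 Gbit/s line rate.
awk 'BEGIN { printf "%.1f Gbit/s\n", 10 * 0.70 }'
```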