crash and panic using pfsync on 13.1-RELEASE (Bug 268246)

2022-12-08 Thread John Jasen
Hi folks -- I opened this on FreeBSD 13.1. https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=268246 I'm stumped, as I have about half a dozen other systems just like this one that do not exhibit this condition. I don't know if it matters, but this is the backup firewall in a CARP configuration.
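
(For anyone reproducing the setup: a minimal pfsync configuration for a CARP backup firewall looks roughly like the following. This is a sketch -- the sync interface name is a placeholder, not taken from the bug report.)

    # /etc/rc.conf -- hypothetical sync interface between the two peers
    pfsync_enable="YES"
    pfsync_syncdev="em1"
    # equivalent by hand:
    # ifconfig pfsync0 syncdev em1 up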

Re: kern/183970: [ofed] [vlan] [panic] mellanox drivers and vlan usage causes kernel panic and reboot

2014-04-20 Thread John Jasen
I've not checked 9.2, but the 181931 patch has been applied to 10.0-RELEASE, and it also fixes my problem. On 04/19/2014 10:14 PM, lini...@freebsd.org wrote: > Old Synopsis: mellenox drivers and vlan usage causes kernel panic and reboot > New Synopsis: [ofed] [vlan] [panic] mellanox drivers and

recommendations on supported 40GbE adapters?

2014-06-10 Thread John Jasen
vendors I should be considering? If anyone else has tried 40GbE cards, I am most interested in your experiences -- especially regarding stability, performance, and performance tuning. Thanks in advance! -- John Jasen (jja...@gmail.com)

packet forwarding and possible mitigation of Intel QuickPath Interconnect ugliness in multi cpu systems

2014-07-21 Thread John Jasen
Executive Summary: Appropriate use of cpuset(1) can mitigate performance bottlenecks over the Intel QPI processor interconnection, and improve packets-per-second processing rate by over 100%. Test Environment: My test system is a Dell dual CPU R820, populated with evaluation cards graciously pro
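
(A sketch of the cpuset(1) technique the summary refers to -- the IRQ number and CPU list are placeholders; the real vectors come from vmstat -i.)

    # find the NIC's interrupt vectors (names/numbers will differ)
    vmstat -i | grep -i t5nex
    # bind IRQ 264 to CPUs 0-7, the package local to the card,
    # so forwarding work stays off the QPI link
    cpuset -l 0-7 -x 264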

Re: packet forwarding and possible mitigation of Intel QuickPath Interconnect ugliness in multi cpu systems

2014-07-21 Thread John Jasen
7 opackets 2910347 odrops 1943.65 On 07/21/2014 11:34 AM, John Jasen wrote: > Executive Summary: > > Appropriate use of cpuset(1) can mitigate performance bottlenecks over > the Intel QPI processor interconnection, and improve packets-per-second > processing rate by over 100%. >
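
(The figures quoted above look like per-second interface counters; a view like the following -- interface name assumed -- produces them.)

    # one-second samples of packets and drops on one port
    netstat -w 1 -I cxl0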

fastforward/routing: a 3 million packet-per-second system?

2014-07-22 Thread John Jasen
Feedback and/or tips and tricks more than welcome. Outstanding questions: Would increasing the number of processor cores help? Would a system where both processor QPI ports connect to each other mitigate QPI bottlenecks? Are there further performance optimizations I am missing? Server Descript

Re: fastforward/routing: a 3 million packet-per-second system?

2014-07-22 Thread John Jasen
On 07/22/2014 01:41 PM, John-Mark Gurney wrote: > John Jasen wrote this message on Tue, Jul 22, 2014 at 11:18 -0400: >> Feedback and/or tips and tricks more than welcome. > You should look at netmap if you really want high PPS routing... Originally, I assumed an interface supportin

Re: fastforward/routing: a 3 million packet-per-second system?

2014-07-24 Thread John Jasen
On 07/24/2014 05:24 AM, Andrey V. Elsukov wrote: > On 22.07.2014 19:18, John Jasen wrote: >> Feedback and/or tips and tricks more than welcome. >> >> Outstanding questions: >> >> Would increasing the number of processor cores help? > AFAIR, increasing th

Re: fastforward/routing: a 3 million packet-per-second system?

2014-07-24 Thread John Jasen
on the transmit paths as drivers > queue frames from one set of driver threads/queues to another > potentially completely different set of driver transmit > threads/queues. > > > > > -a > > > On 22 July 2014 08:18, John Jasen wrote: >> Feedback and/or tips and trick

Re: fastforward/routing: a 3 million packet-per-second system?

2014-07-25 Thread John Jasen
63952779 0 0 0 3439254 /usr/src/sys/netinet/ip_fastfwd.c:593 (sleep mutex:rtentry On Tue, Jul 22, 2014 at 11:18 AM, John Jasen wrote: > Feedback and/or tips and tricks more than welcome.

Re: fastforward/routing: a 3 million packet-per-second system?

2014-07-26 Thread John Jasen
likely can do better on the rtentry locking..) > > > -a > > > On 25 July 2014 13:51, Adrian Chadd wrote: >> Ugh, the forwarding table stupidity. Try enabling FLOWTABLE as an option. >> >> I really dislike how the rtentry locking works. But that isn't a >>
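
(FLOWTABLE was a kernel config option in that era, so enabling it meant a rebuild -- roughly as below, assuming a custom kernel config.)

    # /usr/src/sys/amd64/conf/ROUTER
    include GENERIC
    ident   ROUTER
    options FLOWTABLE    # per-CPU flow cache for the forwarding path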

Re: fastforward/routing: a 3 million packet-per-second system?

2014-07-27 Thread John Jasen
I shouldn't even be coming close to maxflows in this test scenario. net.flowtable.enable: 1 net.flowtable.maxflows: 1042468 On 07/26/2014 10:20 PM, Adrian Chadd wrote: > Flowtable is enabled? That's odd, it shouldn't be showing up like that. > > > > -a

Re: fastforward/routing: a 3 million packet-per-second system?

2014-07-28 Thread John Jasen
wd for a test and see if the lock profile improves. > (Set debug.lock.prof.reset=1 to clear the profiling data before you do it.) > > > -a > > > On 27 July 2014 05:58, John Jasen wrote: > > I shouldn't even be coming close to maxflows in this test scenario. > >
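
(The lock profile being discussed requires a kernel built with options LOCK_PROFILING; the usual sequence is a sketch like this.)

    sysctl debug.lock.prof.reset=1     # discard old samples
    sysctl debug.lock.prof.enable=1    # start collecting
    # ... run the forwarding test ...
    sysctl debug.lock.prof.enable=0
    sysctl debug.lock.prof.stats       # per-lock contention table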

4 million packets per second: Re: fastforward/routing: a 3 million packet-per-second system?

2014-07-31 Thread John Jasen
.5.1 netmask 255.255.255.0 mtu 9000 -lro -tso up" ifconfig_cxl3="inet 172.16.6.1 netmask 255.255.255.0 mtu 9000 -lro -tso up" ifconfig_cxl0_alias0="inet 172.16.7.1 netmask 255.255.255.0" ifconfig_cxl1_alias0="inet 172.16.8.1 netmask 255.255.255.0" ifconfig_cxl2_alias0="

netmap versus routing/firewalling and confusion

2014-12-10 Thread John Jasen
Is there a complete idiot's guide to netmap that I've not yet stumbled upon? I'm interested in trying to cook up a router/firewall leveraging netmap, but I'm stuck as to how to use it. For example, the cards I'm using, Chelsio 40GbE adapters, create ncxl$number virtual interfaces when net
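
(The closest thing to an idiot's guide is pkt-gen from tools/tools/netmap; a first-contact sketch, with interface names and addresses as placeholders.)

    # sink: count frames arriving on a port in netmap mode
    pkt-gen -i ncxl0 -f rx
    # source: transmit test frames from another port
    pkt-gen -i ncxl1 -f tx -d 172.16.1.2 -s 172.16.1.1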

FreeBSD 10.1: Intel dual port 10GbE card (82599EB): second port not present?

2015-02-11 Thread John Jasen
I have several servers that have two Intel 10GbE ports on board. They're technically Dell daughterboards with two Intel 1GbE and two 10GbE ports. However, the second ix interface is not accessible and does not seem to be available. From a brief look, it appears that ix0 and both igb interface

Re: FreeBSD 10.1: Intel dual port 10GbE card (82599EB) second port not present? (Steven Hartland)

2015-02-12 Thread John Jasen
> Date: Wed, 11 Feb 2015 20:47:15 + > From: Steven Hartland > To: freebsd-net@freebsd.org > Subject: Re: FreeBSD 10.1: Intel dual port 10GbE card (82599EB): > second port not present? > Message-ID: <54dbbfd3.7010...@multiplay.co.uk> > Content-Type: text/plain; charset=windows-1252; format

Re: FreeBSD 10.1: Intel dual port 10GbE card (82599EB) second port not present?

2015-02-20 Thread John Jasen
ented all four, as expected. As for a use case -- why someone would want this -- building out systems before deployment comes to mind. Thanks! On 02/12/2015 11:11 AM, Jack Vogel wrote: > > > On Thu, Feb 12, 2015 at 6:30 AM, John Jasen <jja...@gmail.com> wrote: >

Re: FreeBSD 10.1: Intel dual port 10GbE card (82599EB) second port not present?

2015-02-21 Thread John Jasen
Well, oops. That would indeed explain the behavior. Thanks! On 02/20/2015 02:02 PM, Ryan Stone wrote: > I think that you might be a bit confused about the behaviour. An ix > port will only be missing if > > a) You have a non-Intel SFP+ installed > b) hw.ix.unsupported_sfp=1 is not set in loader.
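
(For the archives: the tunable Ryan mentions goes in loader.conf, e.g.:)

    # /boot/loader.conf
    hw.ix.unsupported_sfp="1"    # let ix attach with non-Intel SFP+ modules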

quick question on carp(4)

2016-10-17 Thread John Jasen
I was reading about OpenBSD carp, which has load-balancing capabilities in addition to failover. From a cursory inspection of the FreeBSD handbook, carp(4), and ifconfig(8), I didn't immediately see similar capabilities. Does FreeBSD carp support active/active load-balanced configurations?
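
(For comparison, FreeBSD's failover-style carp(4) setup looks like the following -- interface, vhid, address, and passphrase are illustrative.)

    # /etc/rc.conf on the master; the backup uses a higher advskew
    ifconfig_em0_alias0="inet vhid 1 advskew 0 pass mypass alias 192.0.2.1/24"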

bad throughput performance on multiple systems: Re: Fwd: Re: Disappointing packets-per-second performance results on a Dell PE R530

2017-03-12 Thread John Jasen
I think I am able to confirm Mr. Caraballo's findings. I pulled a Dell PowerEdge R720 out of production and upgraded it to 11.0-RELEASE-p8. Currently, as in the R530, it has a single Chelsio T5-580, but has two v2 Intel E5-26xx CPUs versus the newer ones in the R530. Both ports are configured for

Re: bad throughput performance on multiple systems: Re: Fwd: Re: Disappointing packets-per-second performance results on a Dell PE R530

2017-03-12 Thread John Jasen
On 03/12/2017 07:18 PM, Slawa Olhovchenkov wrote: > On Sun, Mar 12, 2017 at 06:13:46PM -0400, John Jasen wrote: > > what traffic did you generate (TCP? UDP? ICMP? other?), and what is reported in > dmesg | grep txq ? UDP traffic. dmesg reports 16 txq, 8 rxq -- which is the default
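
(Queue counts are set at boot via loader tunables; for the cxgbe driver of that vintage the knobs were, as best I recall -- verify against cxgbe(4):)

    # /boot/loader.conf -- per-port queue counts for 10G ports
    hw.cxgbe.ntxq10g="16"
    hw.cxgbe.nrxq10g="8"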

Re: bad throughput performance on multiple systems: Re: Fwd: Re: Disappointing packets-per-second performance results on a Dell PE R530

2017-03-13 Thread John Jasen
On 03/13/2017 01:03 PM, Navdeep Parhar wrote: > On Sun, Mar 12, 2017 at 5:35 PM, John Jasen wrote: >> UDP traffic. dmesg reports 16 txq, 8 rxq -- which is the default for >> Chelsio. >> > I don't recall offhand, but UDP might be using 2-tuple hashing by > default

Re: bad throughput performance on multiple systems: Re: Fwd: Re: Disappointing packets-per-second performance results on a Dell PE R530

2017-03-13 Thread John Jasen
The issue does not seem to be specific to Chelsio cards. The same tests with Mellanox cards using the mlx4 drivers exhibit similar behaviors and results. On 03/12/2017 06:13 PM, John Jasen wrote: > I think I am able to confirm Mr. Caraballo's

Re: bad throughput performance on multiple systems: Re: Fwd: Re: Disappointing packets-per-second performance results on a Dell PE R530

2017-03-16 Thread John Jasen
dropped, and 5 million passed. On Mon, Mar 13, 2017 at 1:31 PM, Navdeep Parhar wrote: > On Mon, Mar 13, 2017 at 10:13 AM, John Jasen wrote: > > On 03/13/2017 01:03 PM, Navdeep Parhar wrote: > > > >> On Sun, Mar 12, 2017 at 5:35 PM, John Jasen wrote: > >>>

Re: bad throughput performance on multiple systems: Re: Fwd: Re: Disappointing packets-per-second performance results on a Dell PE R530

2017-03-17 Thread John Jasen
On 03/17/2017 06:08 AM, Slawa Olhovchenkov wrote: > On Thu, Mar 16, 2017 at 03:50:42PM -0400, John Jasen wrote: > >> As a few points of note, partial resolution, and curiosity: >> >> Following down leads that 11-STABLE had tryforward improvements over >> 11-RELENG,

Re: bad throughput performance on multiple systems: Re: Fwd: Re: Disappointing packets-per-second performance results on a Dell PE R530

2017-03-17 Thread John Jasen
On 03/17/2017 03:32 PM, Navdeep Parhar wrote: > On Fri, Mar 17, 2017 at 12:21 PM, John Jasen wrote: >> Yes. >> We were hopeful, initially, to be able to achieve higher packet >> forwarding rates through either netmap-fwd or due to enhancements based >> o

Re: Error at the time of compiling netmap-fwd on -CURRENT

2017-03-27 Thread John Jasen
In FreeBSD 11 and prior, dprintf in stdio.h was only exposed when particular defines were set. Those guards were removed in -CURRENT, which causes dprintf from /usr/include/stdio.h to conflict with the declarations in netmap-fwd. Commenting out the #define and the int dprintf statements in netmap-f
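
(A sketch of that workaround as a one-liner -- the file glob is hypothetical; adjust it to wherever netmap-fwd declares its dprintf.)

    # comment out netmap-fwd's private dprintf so the stdio.h one wins
    sed -i '' -e 's/^#define dprintf/\/\/ &/' \
              -e 's/^int dprintf/\/\/ &/' *.c *.h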

Re: bad throughput performance on multiple systems: Re: Fwd: Re: Disappointing packets-per-second performance results on a Dell PE R530

2017-03-27 Thread John Jasen
On 03/24/2017 08:51 PM, Navdeep Parhar wrote: > On 03/24/2017 16:53, Caraballo-vega, Jordan A. (GSFC-6062)[COMPUTER > SCIENCE CORP] wrote: >> It looks like netmap is there; however, is there a way of figuring out >> if netmap is being used? > > If you're not running netmap-fwd or some other netmap
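
(One low-tech check, assuming a stock kernel -- netmap announces itself at boot and exposes a sysctl tree:)

    dmesg | grep -i netmap    # boot banner / per-interface attach lines
    sysctl dev.netmap         # global netmap parameters and counters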

state of packet forwarding in FreeBSD?

2017-06-14 Thread John Jasen
and we're stuck trying to get the interfaces online. -- John Jasen (jja...@gmail.com)

unexplained latency, interrupt spikes and loss of throughput on FreeBSD router/firewall system

2020-01-15 Thread John Jasen
Executive summary: Periodically, load will spike on network interrupts on one of our firewalls. Latency will quickly climb to the point that things are unresponsive, sessions will timeout, and bandwidth will plummet. We do not see increases in ethernet pause frames, drops, errors, or anything els
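
(For anyone wanting to compare notes, the spikes are visible with the stock tools -- nothing exotic assumed here:)

    top -SHP      # per-CPU load with kernel/interrupt threads visible
    vmstat -i     # cumulative interrupt counts per device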

Re: unexplained latency, interrupt spikes and loss of throughput on FreeBSD router/firewall system

2020-01-15 Thread John Jasen
On Wed, Jan 15, 2020 at 5:24 PM Navdeep Parhar wrote: > On 1/15/20 6:55 AM, John Jasen wrote: > > Executive summary: > > > > Periodically, load will spike on network interrupts on one of our > > firewalls. Latency will quickly climb to the point that things are >

FreeBSD 11.3: Chelsio t5nex encountered fatal error

2020-03-16 Thread John Jasen
We use FreeBSD on our firewalls, relying on Chelsio T5 and T6 series cards for high-performance networking. Friday night, our backup firewall went offline -- apparently taking both network cards out. dmesg reported the following error: t5nex0: ! PL_PERR_CAUSE 0x19404 = 0x0010, E

Chelsio cards, jumbo frames, memory fragmentation and performance in FreeBSD 13.x?

2021-12-03 Thread John Jasen
Pretty close to two years ago, we tripped across conditions where heavily used FreeBSD 11.x packet-filter firewalls would slow to a crawl and load would go crazy. In a fit of hopefulness with our upgrade to FreeBSD 13.0, I removed the hw.cxgbe.largest_rx_cluster settings we put in place -- only to
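
(The tunable in question, for anyone searching later -- the value shown is illustrative, not our production setting:)

    # /boot/loader.conf -- cap rx buffer size so jumbo-frame receive
    # doesn't depend on scarce 9k/16k contiguous clusters
    hw.cxgbe.largest_rx_cluster="4096"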