Hi folks -- I opened this on FreeBSD 13.1.
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=268246
I'm stumped, as I have about half a dozen other systems just like this one,
which do not exhibit this condition.
Don't know if it matters, but this is the backup firewall in a carp
configuration.
I've not checked 9.2, but the patch from PR 181931 has been applied to
10.0-RELEASE, and it also fixes my problem.
On 04/19/2014 10:14 PM, lini...@freebsd.org wrote:
> Old Synopsis: mellenox drivers and vlan usage causes kernel panic and reboot
> New Synopsis: [ofed] [vlan] [panic] mellanox drivers and
vendors I should be considering?
If anyone else has tried 40GbE cards, I am most interested in your
experiences -- especially in stability, performance and performance tuning.
Thanks in advance!
-- John Jasen (jja...@gmail.com)
Executive Summary:
Appropriate use of cpuset(1) can mitigate performance bottlenecks over
the Intel QPI processor interconnect and improve the packets-per-second
processing rate by over 100%.
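A sketch of the kind of pinning described (device names, IRQ numbers, and
CPU lists below are illustrative assumptions, not the actual test
configuration):

```shell
# Identify the interrupts belonging to the NIC (driver name is illustrative).
vmstat -ai | grep -i t5nex

# Pin each NIC interrupt to cores on the CPU package local to the card's
# PCIe slot, so forwarded packets avoid a round trip across the QPI link.
# The IRQ number and CPU list here are hypothetical.
cpuset -l 0-7 -x 264
```

Repeating this for every queue interrupt of a card, against the socket that
owns its PCIe slot, is the gist of the mitigation.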
Test Environment:
My test system is a Dell dual CPU R820, populated with evaluation cards
graciously pro
7 opackets 2910347 odrops 1943.65
On 07/21/2014 11:34 AM, John Jasen wrote:
> Executive Summary:
>
> Appropriate use of cpuset(1) can mitigate performance bottlenecks over
> the Intel QPI processor interconnection, and improve packets-per-second
> processing rate by over 100%.
>
Feedback and/or tips and tricks more than welcome.
Outstanding questions:
Would increasing the number of processor cores help?
Would a system where both processor QPI ports connect to each other
mitigate QPI bottlenecks?
Are there further performance optimizations I am missing?
Server Descript
On 07/22/2014 01:41 PM, John-Mark Gurney wrote:
> John Jasen wrote this message on Tue, Jul 22, 2014 at 11:18 -0400:
>> Feedback and/or tips and tricks more than welcome.
> You should look at netmap if you really want high PPS routing...
Originally, I assumed an interface supportin
On 07/24/2014 05:24 AM, Andrey V. Elsukov wrote:
> On 22.07.2014 19:18, John Jasen wrote:
>> Feedback and/or tips and tricks more than welcome.
>>
>> Outstanding questions:
>>
>> Would increasing the number of processor cores help?
> AFAIR, increasing th
on the transmit paths as drivers
> queue frames from one set of driver threads/queues to another
> potentially completely different set of driver transmit
> threads/queues.
>
>
>
>
> -a
>
>
> On 22 July 2014 08:18, John Jasen wrote:
>> Feedback and/or tips and trick
63952779 0 0 0
3439254 /usr/src/sys/netinet/ip_fastfwd.c:593 (sleep mutex:rtentry
On Tue, Jul 22, 2014 at 11:18 AM, John Jasen wrote:
> Feedback and/or tips and tricks more than welcome.
>
kely can do better on the rtentry locking..)
>
>
> -a
>
>
> On 25 July 2014 13:51, Adrian Chadd wrote:
>> Ugh, the forwarding table stupidity. Try enabling FLOWTABLE as an option.
>>
>> I really dislike how the rtentry locking works. But that isn't a
>>
I shouldn't even be coming close to maxflows in this test scenario.
net.flowtable.enable: 1
net.flowtable.maxflows: 1042468
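For reference, a sketch of how those values can be inspected and adjusted
(the number below is purely illustrative; on some versions maxflows may be
a boot-time tunable rather than a runtime sysctl):

```shell
# Show all flowtable state (requires a kernel built with options FLOWTABLE).
sysctl net.flowtable

# Raise the flow cap if the table were actually filling up -- the value
# here is illustrative, not a recommendation.
sysctl net.flowtable.maxflows=2000000
```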
On 07/26/2014 10:20 PM, Adrian Chadd wrote:
> Flowtable is enabled? That's odd, it shouldn't be showing up like that.
>
>
>
> -a
>
>
>
wd for a test and see if the lock profile improves.
> (Set debug.lock.prof.reset=1 to clear the profiling data before you do it.)
>
>
> -a
>
>
> On 27 July 2014 05:58, John Jasen wrote:
> > I shouldn't even be coming close to maxflows in this test scenario.
> >
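For reference, the lock-profiling workflow being suggested looks roughly
like this (sysctl names as documented for LOCK_PROFILING; the kernel must
be built with options LOCK_PROFILING for them to exist):

```shell
sysctl debug.lock.prof.reset=1    # clear any previously collected samples
sysctl debug.lock.prof.enable=1   # start collecting contention data
# ... run the forwarding test here ...
sysctl debug.lock.prof.enable=0   # stop collecting
sysctl debug.lock.prof.stats      # dump per-lock contention statistics
```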
.5.1 netmask 255.255.255.0 mtu 9000 -lro -tso up"
ifconfig_cxl3="inet 172.16.6.1 netmask 255.255.255.0 mtu 9000 -lro -tso up"
ifconfig_cxl0_alias0="inet 172.16.7.1 netmask 255.255.255.0"
ifconfig_cxl1_alias0="inet 172.16.8.1 netmask 255.255.255.0"
ifconfig_cxl2_alias0="
Is there a complete idiot's guide to netmap that I've not stumbled upon
yet?
I'm interested in trying to cook up a router/firewall leveraging netmap,
but I'm stuck as to how to use it.
For example, the cards I'm using, Chelsio 40GbE adapters, create
ncxl$number virtual interfaces when net
I have several servers that have two Intel 10GbE ports on board. They're
technically Dell daughterboards which have two Intel 1GbE and two 10GbE
ports.
However, the second ix interface is not accessible and does not seem to
be detected. From a brief look, it looks like ix0 and both igb
interface
> Date: Wed, 11 Feb 2015 20:47:15 +
> From: Steven Hartland
> To: freebsd-net@freebsd.org
> Subject: Re: FreeBSD 10.1: Intel dual port 10GbE card (82599EB):
> second port not present?
> Message-ID: <54dbbfd3.7010...@multiplay.co.uk>
> Content-Type: text/plain; charset=windows-1252; format
ented all four, as expected.
As for a use case as to why someone would want this, building out
systems before deployment comes to mind.
Thanks!
On 02/12/2015 11:11 AM, Jack Vogel wrote:
>
>
> On Thu, Feb 12, 2015 at 6:30 AM, John Jasen <jja...@gmail.com> wrote:
>
&
Well, oops. That would indeed explain the behavior.
Thanks!
On 02/20/2015 02:02 PM, Ryan Stone wrote:
> I think that you might be a bit confused about the behaviour. An ix
> port will only be missing if
>
> a) You have a non-Intel SFP+ installed
> b) hw.ix.unsupported_sfp=1 is not set in loader.
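In other words, with a non-Intel SFP+ module installed, the fix is a
loader tunable. A minimal /boot/loader.conf sketch:

```shell
# /boot/loader.conf
# Allow ix(4) to attach even with SFP+ modules Intel has not qualified.
hw.ix.unsupported_sfp="1"
```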
I was reading about OpenBSD carp, which has load-balancing capabilities
in addition to failover.
From a cursory inspection of the FreeBSD Handbook, carp(4), and
ifconfig(8), I didn't immediately see similar capabilities.
Does FreeBSD carp support active/active load-balanced configurations?
I think I am able to confirm Mr. Caraballo's findings.
I pulled a Dell PowerEdge 720 out of production, and upgraded it to
11-RELEASE-p8.
Currently, as in the R530, it has a single Chelsio T5-580, but has two
v2 Intel E5-26xx CPUs versus the newer ones in the R530.
Both ports are configured for
On 03/12/2017 07:18 PM, Slawa Olhovchenkov wrote:
> On Sun, Mar 12, 2017 at 06:13:46PM -0400, John Jasen wrote:
>
> what traffic you generated (TCP? UDP? ICMP? other?), what reported in
> dmesg | grep txq ?
UDP traffic. dmesg reports 16 txq, 8 rxq -- which is the default
On 03/13/2017 01:03 PM, Navdeep Parhar wrote:
> On Sun, Mar 12, 2017 at 5:35 PM, John Jasen wrote:
>> UDP traffic. dmesg reports 16 txq, 8 rxq -- which is the default for
>> Chelsio.
>>
> I don't recall offhand, but UDP might be using 2-tuple hashing by
> d
The issue does not seem to be specific to Chelsio cards. The same tests
with Mellanox cards using the mlx4 drivers exhibit similar behaviors and
results.
On 03/12/2017 06:13 PM, John Jasen wrote:
> I think I am able to confirm Mr. Caraballo's
dropped, and 5 million passed.
On Mon, Mar 13, 2017 at 1:31 PM, Navdeep Parhar wrote:
> On Mon, Mar 13, 2017 at 10:13 AM, John Jasen wrote:
> > On 03/13/2017 01:03 PM, Navdeep Parhar wrote:
> >
> >> On Sun, Mar 12, 2017 at 5:35 PM, John Jasen wrote:
> >>>
On 03/17/2017 06:08 AM, Slawa Olhovchenkov wrote:
> On Thu, Mar 16, 2017 at 03:50:42PM -0400, John Jasen wrote:
>
>> As a few points of note, partial resolution, and curiosity:
>>
>> Following down leads that 11-STABLE had tryforward improvements over
>> 11-RELENG,
On 03/17/2017 03:32 PM, Navdeep Parhar wrote:
> On Fri, Mar 17, 2017 at 12:21 PM, John Jasen wrote:
>> Yes.
>> We were hopeful, initially, to be able to achieve higher packet
>> forwarding rates through either netmap-fwd or due to enhancements based
>> o
In FreeBSD 11 and prior, dprintf in stdio.h was only declared when
particular defines were set.
Those guards were removed in -CURRENT, which caused dprintf from
/usr/include/stdio.h to conflict with the declarations in netmap-fwd.
Commenting out the #define and the int dprintf statements in
netmap-f
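A sketch of that kind of local workaround (the file name and the exact
forms of the declarations are assumptions here, not the actual netmap-fwd
source):

```shell
# Comment out netmap-fwd's private dprintf macro and prototype so they no
# longer clash with dprintf(3) from <stdio.h> on -CURRENT.
sed -e 's|^#define dprintf|// #define dprintf|' \
    -e 's|^int dprintf|// int dprintf|' \
    netmap-fwd.c > netmap-fwd.patched.c
```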
On 03/24/2017 08:51 PM, Navdeep Parhar wrote:
> On 03/24/2017 16:53, Caraballo-vega, Jordan A. (GSFC-6062)[COMPUTER
> SCIENCE CORP] wrote:
>> It looks like netmap is there; however, is there a way of figuring out
>> if netmap is being used?
>
> If you're not running netmap-fwd or some other netmap
and we're stuck trying to get the interfaces online.
-- John Jasen (jja...@gmail.com)
___
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
Executive summary:
Periodically, load will spike on network interrupts on one of our
firewalls. Latency will quickly climb to the point that things are
unresponsive, sessions will timeout, and bandwidth will plummet.
We do not see increases in ethernet pause frames, drops, errors, or
anything els
On Wed, Jan 15, 2020 at 5:24 PM Navdeep Parhar wrote:
> On 1/15/20 6:55 AM, John Jasen wrote:
> > Executive summary:
> >
> > Periodically, load will spike on network interrupts on one of our
> > firewalls. Latency will quickly climb to the point that things are
>
We use FreeBSD on our firewalls, relying on Chelsio T5 and T6 series cards
for high performance networking.
Friday night, our backup firewall went offline -- apparently taking both
network cards out.
DMESG reported the following error:
t5nex0: ! PL_PERR_CAUSE 0x19404 = 0x0010, E
Pretty close to two years ago, we tripped across conditions where heavily
used FreeBSD 11.x packet-filter firewalls would slow to a crawl and load
would go crazy.
In a fit of hopefulness with our upgrade to FreeBSD 13.0, I removed the
hw.cxgbe.largest_rx_cluster settings we put in place -- only to