Hi,
I'm looking at the Supermicro X11SPH-nCTPF motherboard.
https://www.supermicro.com/en/products/motherboard/X11SPH-nCTPF
Especially because of its SFP+ LAN ports:
Dual LAN with 10G SFP+ with Intel X722 + Inphi CS4227
(the PDF manual only shows Inphi CS4227)
I did not find anything
On 10 May 2018 10:36, Grzegorz Junka wrote:
Does that mean mlx4 is no longer compiled into the default kernel?
It never has been, but I think the modules have been added recently; I
recall HPS did it :)
Can I just compile the mlx4/en/ib kernel module without having to compile
the
On 26 Apr 2018, Rick Macklem wrote:
Ryan Stone wrote:
On Tue, Apr 24, 2018 at 4:55 AM, Konstantin Belousov wrote:
+#ifndef MLX5E_MAX_RX_BYTES
+#define MLX5E_MAX_RX_BYTES MCLBYTES
+#endif
Why do you use a 2KB buffer rather than a PAGE_SIZE'd buffer?
On 24 Apr, Konstantin Belousov wrote:
Hello,
the patch below is of interest for people who use Mellanox ConnectX-4
and ConnectX-5 ethernet adapters and configure them for jumbo frames.
Hi Konstantin,
Good news !
Do you think your work could be easily ported to ConnectX-3 devices (mlx4
driver)?
On 09 Nov 2017 21:34, Lev Serebryakov wrote:
Mea culpa, it is 11-STABLE, r324811, amd64.
Sounds like you could be facing what I experienced a few weeks ago.
See this thread :
https://lists.freebsd.org/pipermail/freebsd-net/2017-August/048621.html
> On 07 Oct 2017, at 21:11, Lee Brown <l...@ratnaling.org> wrote:
>
> On Fri, Oct 6, 2017 at 1:45 PM, Ben RUBSON <ben.rub...@gmail.com> wrote:
>
>>> On 06 Oct 2017, at 22:09, Lee Brown <l...@ratnaling.org> wrote:
>>>
>>> Hi All,
>>
Sounds like the 3008 has both IT & IR firmwares, as the 2008 does.
I looked into this in the past, and flashing a Dell
SAS adapter with an LSI firmware seemed difficult, not so easy.
I did not try it myself though, so others may share their experience :)
Ben
> On 07 Oct 2017, at 22:48, Kevin Bowling
> On 06 Oct 2017, at 22:09, Lee Brown wrote:
>
> Hi All,
>
> I want to purchase a dell R330 system with a "SAS 12Gbps HBA External
> Controller", which I believe is this (pdf)
>
Hi Eugene,
cd /usr/src/sys/modules/mlx4
make
make install
make clean
cd /usr/src/sys/modules/mlxen
make
make install
make clean
Add this to /boot/loader.conf :
mlx4_load="YES"
mlxen_load="YES"
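To avoid duplicate lines when re-running this, the loader.conf additions can be made idempotent; a small sketch (it writes to a scratch file by default so it can be tried safely first, point LOADER_CONF at /boot/loader.conf on the real system):

```shell
# Sketch: add the loader entries only if they are not already present.
# Defaults to a scratch file; set LOADER_CONF=/boot/loader.conf for real use.
LOADER_CONF=${LOADER_CONF:-loader.conf.scratch}
touch "$LOADER_CONF"
for entry in 'mlx4_load="YES"' 'mlxen_load="YES"'; do
  grep -qxF "$entry" "$LOADER_CONF" || echo "$entry" >> "$LOADER_CONF"
done
```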
Ben
> On 27 Sep 2017, at 17:10, Eugene M. Zheganin wrote:
>
> Hi,
>
> I have
> On 22 Sep 2017, at 20:48, Ryan Stone wrote:
>
> Hans and I have proposed different approaches to the problem. I was
> taken off this issue at $WORK for a while, but coincidentally I just
> picked it up again in the last week or so. I'm working on evaluating
> the
> On 12 Jul 2017, at 01:02, Ryan Stone wrote:
>
> I've just put up a review that fixes mlx4_en to no longer use clusters larger
> than PAGE_SIZE in its receive path. The patch is based off of the older
> version of the driver which did the same, but keeps all of the changes
> On 28 Aug 2017, at 11:27, Julien Charbon <j...@freebsd.org> wrote:
>
> On 8/28/17 10:25 AM, Ben RUBSON wrote:
>>> On 16 Aug 2017, at 11:02, Ben RUBSON <ben.rub...@gmail.com> wrote:
>>>
>>>> On 15 Aug 2017, at 23:33, Julien Charbon <j...@f
> On 16 Aug 2017, at 11:02, Ben RUBSON <ben.rub...@gmail.com> wrote:
>
>> On 15 Aug 2017, at 23:33, Julien Charbon <j...@freebsd.org> wrote:
>>
>> On 8/11/17 11:32 AM, Ben RUBSON wrote:
>>>> On 08 Aug 2017, at 13:33, Julien Charbon <j...@free
> On 15 Aug 2017, at 23:33, Julien Charbon <j...@freebsd.org> wrote:
>
> On 8/11/17 11:32 AM, Ben RUBSON wrote:
>>> On 08 Aug 2017, at 13:33, Julien Charbon <j...@freebsd.org> wrote:
>>>
>>> On 8/8/17 10:31 AM, Hans Petter Selasky wrote:
> On 08 Aug 2017, at 13:33, Julien Charbon wrote:
>
> Hi,
>
> On 8/8/17 10:31 AM, Hans Petter Selasky wrote:
>>
>>
>> Suggested fix attached.
>
> I agree with your conclusion. Just for the record, more precisely this
> regression seems to have been introduced with:
> (...)
> On 08 Aug 2017, at 10:31, Hans Petter Selasky <h...@selasky.org> wrote:
>
> On 08/08/17 10:06, Ben RUBSON wrote:
>>> On 08 Aug 2017, at 10:02, Hans Petter Selasky <h...@selasky.org> wrote:
>>>
>>> On 08/08/17 10:00, Ben RUBSON wro
> On 08 Aug 2017, at 09:54, Hans Petter Selasky <h...@selasky.org> wrote:
>
> On 08/08/17 09:43, Ben RUBSON wrote:
>> OK.
>> I'm quite (well, absolutely) new to kgdb, any clue on how I should proceed?
>> Thank you !
>> Ben
>
> print twq_2msl
> On 08 Aug 2017, at 09:38, Hans Petter Selasky <h...@selasky.org> wrote:
>
> On 08/08/17 09:37, Ben RUBSON wrote:
>>> On 08 Aug 2017, at 09:33, Hans Petter Selasky <h...@selasky.org> wrote:
>>>
>>> On 08/08/17 09:04, Ben RUBSON wrot
> On 08 Aug 2017, at 09:33, Hans Petter Selasky <h...@selasky.org> wrote:
>
> On 08/08/17 09:04, Ben RUBSON wrote:
>> "print V_twq_2msl" returns the following :
>> No symbol "V_twq_2msl" in current context.
>
> Are you using VI
> On 08 Aug 2017, at 08:51, Hans Petter Selasky <h...@selasky.org> wrote:
>
> On 08/08/17 01:52, Ben RUBSON wrote:
>>> On 07 Aug 2017, at 19:57, Hans Petter Selasky <h...@selasky.org> wrote:
>>>
>>> On 08/07/17 19:19, Ben RUBSON wrote:
> On 07 Aug 2017, at 19:57, Hans Petter Selasky <h...@selasky.org> wrote:
>
> On 08/07/17 19:19, Ben RUBSON wrote:
>>> On 07 Aug 2017, at 18:19, Matt Joras <mjo...@freebsd.org> wrote:
>>>
>>> On 08/07/2017 09:11, Hans Petter Selasky wrote:
> On 07 Aug 2017, at 18:19, Matt Joras wrote:
>
> On 08/07/2017 09:11, Hans Petter Selasky wrote:
>> Hi,
>>
>> Try to enter "kgdb" and run:
>>
>> thread apply all bt
>>
>> Look for the callout function in question.
>>
>> --HPS
>>
> If you don't have a way to attach kgdb
> On 04 Aug 2017, at 19:42, Ben RUBSON <ben.rub...@gmail.com> wrote:
>
> Feel free to ask me whatever you need to investigate on this !
> I let this (production :/) server in this state to have a chance to get
> interesting traces.
Server no more in production, I moved s
> On 04 Aug 2017, at 19:45, Hans Petter Selasky <h...@selasky.org> wrote:
>
> On 08/04/17 19:42, Ben RUBSON wrote:
>>> On 04 Aug 2017, at 19:31, Hans Petter Selasky <h...@selasky.org> wrote:
>>>
>>> On 08/04/17 19:13, Ben RUBSON wrote:
> On 04 Aug 2017, at 19:02, Hans Petter Selasky <h...@selasky.org> wrote:
>
> On 08/04/17 18:59, Ben RUBSON wrote:
>> Hello,
>> Not sure this is the right list, but as it seems related to a mlx4en
>> device...
>> # vmstat -i 1
>> (...)
>>
Hello,
Not sure this is the right list, but as it seems related to a mlx4en device...
# vmstat -i 1
(...)
interrupt            total   rate
cpu23:timer           1198   1127
# top -P ALL
(...)
CPU 23: 0.0% user, 0.0% nice, 0.0% system, 100% interrupt,
> On 26 Jun 2017, at 15:13, Andrey V. Elsukov wrote:
>
> I think it is not mlxen specific problem, we have the same symptoms with
> ixgbe(4) driver too. To avoid the problem we have patches that
> disable the use of 9k mbufs, and instead use only 4k mbufs.
Another
> On 26 Jun 2017, at 15:36, Andrey V. Elsukov <bu7c...@yandex.ru> wrote:
>
> On 26.06.2017 16:29, Ben RUBSON wrote:
>>
>>> On 26 Jun 2017, at 15:25, Andrey V. Elsukov <bu7c...@yandex.ru> wrote:
>>>
>>> On 26.06.2017 16:27, Ben RUBSON
> On 26 Jun 2017, at 15:25, Andrey V. Elsukov <bu7c...@yandex.ru> wrote:
>
> On 26.06.2017 16:27, Ben RUBSON wrote:
>>
>>> On 26 Jun 2017, at 15:13, Andrey V. Elsukov <bu7c...@yandex.ru> wrote:
>>>
>>> I think it is not mlxen specific probl
> On 26 Jun 2017, at 15:13, Andrey V. Elsukov wrote:
>
> I think it is not mlxen specific problem, we have the same symptoms with
> ixgbe(4) driver too. To avoid the problem we have patches that
> disable the use of 9k mbufs, and instead use only 4k mbufs.
Interesting
> On 25 Jun 2017, at 17:32, Ryan Stone wrote:
>
> Having looked at the original email more closely, I see that you showed an
> mlxen interface with a 9020 MTU. Seeing allocation failures of 9k mbuf
> clusters increase while you are far below the zone's limit means that
> On 25 Jun 2017, at 17:14, Ryan Stone wrote:
>
> Is this setup using the mlx4_en driver? If so, recent versions of that
> driver have a regression when using MTUs greater than the page size (4096 on
> i386/amd64). The bug will cause the card to drop packets when the system
> On 30 Dec 2016, at 22:55, Ben RUBSON <ben.rub...@gmail.com> wrote:
>
> Hello,
>
> 2 FreeBSD 11.0-p3 servers, one iSCSI initiator, one target.
> Both with Mellanox ConnectX-3 40G.
>
> Since a few days, sometimes, under undetermined circumstances, as soon as
Hi,
Have you tried disabling NUMA in your BIOS settings?
I had a performance issue on a 2-CPU (24-core) server: I was not able to run a
40G NIC at its max throughput.
We investigated a lot, disabling NUMA in the BIOS was the solution, as NUMA is
not fully supported yet (as of stable/11).
Ben
> On 28 Feb
> On 04 Jan 2017, at 14:47, Ben RUBSON <ben.rub...@gmail.com> wrote:
>
>> On 03 Jan 2017, at 07:27, Meny Yossefi <me...@mellanox.com> wrote:
>>
>>> From: owner-freebsd-net@freebsd.org On Behalf Of Ben RUBSON
>>> Sent: Monday, January 2,
>
> On Jan 15, 2017, at 08:22, Ben RUBSON <ben.rub...@gmail.com> wrote:
>
>>> On 15 Jan 2017, at 13:33, wo...@4amlunch.net wrote:
>>>
>>> How good is FreeBSD's mellanox support?
>>
>> From my own experience with ConnectX-3 adapters, very good
> On 15 Jan 2017, at 13:33, wo...@4amlunch.net wrote:
>
> How good is FreeBSD's mellanox support?
From my own experience with ConnectX-3 adapters, very good !
Ben
> On 03 Jan 2017, at 17:21, Julien Cigar wrote:
>
> Is it not the same issue as PR 211990? Can you try turning off jumbo
> frames?
Hi Julien,
With bug 211990 (https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211990),
the main issue was that iSCSI disks did not
Hi Meny,
Thank you very much for your feedback.
I think you are right, this could be an mbuf issue.
Here are some more numbers :
# vmstat -z | grep -v "0, 0$"
ITEM           SIZE  LIMIT   USED  FREE       REQ  FAIL  SLEEP
4 Bucket:        32,     0,  2673,
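To keep an eye on just the failure counters, a short awk filter over the `vmstat -z` output works; a sketch (the column order ITEM/SIZE/LIMIT/USED/FREE/REQ/FAIL/SLEEP is assumed, and the heredoc holds sample numbers standing in for live output):

```shell
# Sketch: pull the FAIL column for mbuf zones out of `vmstat -z` output.
# On a live system, replace the heredoc with:  vmstat -z | fail_count
fail_count() {
  # Fields split on ':' and ',': $1 = zone name, $7 = FAIL count.
  awk -F'[:,]' '/mbuf/ { gsub(/ /, "", $1); gsub(/ /, "", $7); print $1 "=" $7 }'
}
fail_count <<'EOF'
mbuf_jumbo_page:   4096, 301645,  21684,  1344, 39482980,    0,   0
mbuf_jumbo_9k:     9216, 603290,   7070,   402, 36172877, 3505,   0
EOF
```

A steadily growing value for `mbuf_jumbo_9k` while USED stays far below LIMIT is the symptom discussed in this thread.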
Hello,
2 FreeBSD 11.0-p3 servers, one iSCSI initiator, one target.
Both with Mellanox ConnectX-3 40G.
For a few days now, sometimes, under undetermined circumstances, as soon as there
is some (very low) iSCSI traffic, some of the disks get disconnected :
kernel: WARNING: 192.168.2.2 (iqn..):
> On 29 Sep 2016, at 18:43, Hans Petter Selasky wrote:
>
> FYI:
> https://svnweb.freebsd.org/changeset/base/306453
> https://svnweb.freebsd.org/changeset/base/306454
Perfect, many thanks for your explanation and for the 2 commits !
Ben
> On 29 Sep 2016, at 11:09, Hans Petter Selasky wrote:
>
> Hi,
>
> Can you revert the previous patch and try the new attached patch on 10.x and
> 11.x. It should fix the issue.
HPS,
Good news, it fixes the issue, thank you very much !
What was the root-cause and how did you
> On 28 Sep 2016, at 18:00, Hans Petter Selasky <h...@selasky.org> wrote:
>
> On 09/28/16 17:48, Ben RUBSON wrote:
>>
>>> On 28 Sep 2016, at 16:01, Hans Petter Selasky <h...@selasky.org> wrote:
>>>
>>> On 09/23/16 19:59, Ben RUBSON wrote:
> On 28 Sep 2016, at 16:01, Hans Petter Selasky <h...@selasky.org> wrote:
>
> On 09/23/16 19:59, Ben RUBSON wrote:
>> netstat -b -I mlxen1
>
> Hi Ben,
>
> Does the attached patch make any difference?
>
> --HPS
>
Hi HPS,
Many thanks for your support
> On 23 Sep 2016, at 19:59, Ben RUBSON <ben.rub...@gmail.com> wrote:
>
> Hello,
>
> I found a strange issue where input traffic is counted twice,
> sometimes more.
As a very ugly & temporary workaround, I did the following,
so that statistics are made on th
> On 23 Sep 2016, at 20:44, Eugene Grosbein <eu...@grosbein.net> wrote:
>
> On 24.09.2016 0:59, Ben RUBSON wrote:
>> Hello,
>>
>> I found a strange issue where input traffic is counted twice,
>> sometimes more.
>
> Generally, this points to accounting
Hello,
I found a strange issue where input traffic is counted twice,
sometimes more.
How to reproduce :
dst# netstat -b -I mlxen1
Name   Mtu   Network   Ipkts       Ibytes          Opkts       Obytes
mlxen  9000            223135371   2297323715986   242534891   1594979072449
mlxen -
> On 17 Aug 2016, at 17:38, Adrian Chadd wrote:
>
> [snip]
>
> ok, so this is what I was seeing when I was working on this stuff last.
>
> The big abusers are:
>
> * so_snd lock, for TX'ing producer/consumer socket data
> * tcp stack pcb locking (which rss tries to
> On 15 Aug 2016, at 16:49, Ben RUBSON <ben.rub...@gmail.com> wrote:
>
>> On 12 Aug 2016, at 00:52, Adrian Chadd <adrian.ch...@gmail.com> wrote:
>>
>> Which ones of these hit the line rate comfortably?
>
> So Adrian, I ran tests again using FreeBSD
> On 16 Aug 2016, at 21:36, Adrian Chadd <adrian.ch...@gmail.com> wrote:
>
> On 16 August 2016 at 02:58, Ben RUBSON <ben.rub...@gmail.com> wrote:
>>
>>> On 16 Aug 2016, at 03:45, Adrian Chadd <adrian.ch...@gmail.com> wrote:
>>>
> On 16 Aug 2016, at 03:45, Adrian Chadd wrote:
>
> Hi,
>
> ok, can you try 5) but also running with the interrupt threads pinned to CPU
> 1?
What do you mean by interrupt threads?
Perhaps you mean the NIC interrupts?
In this case see 6) and 7) where NIC IRQs are
> On 12 Aug 2016, at 00:52, Adrian Chadd wrote:
>
> Which ones of these hit the line rate comfortably?
So Adrian, I ran tests again using FreeBSD 11-RC1.
I put iperf throughput in result files (so that we can classify them), as well
as top -P ALL and pcm-memory.x.
> On 11 Aug 2016, at 19:51, Ben RUBSON <ben.rub...@gmail.com> wrote:
>
>
>> On 11 Aug 2016, at 18:36, Adrian Chadd <adrian.ch...@gmail.com> wrote:
>>
>> Hi!
>
> Hi Adrian,
>
>> mlx4_core0: mem
>> 0xfbe0-0xfbef,0xfb
> On 11 Aug 2016, at 00:11, Adrian Chadd wrote:
>
> hi,
>
> ok, lets start by getting the NUMA bits into the kernel so you can
> mess with things.
>
> add this to the kernel
>
> options MAXMEMDOM=8
> (which hopefully is enough)
> options VM_NUMA_ALLOC
> options
> On 10 Aug 2016, at 21:47, Adrian Chadd wrote:
>
> hi,
>
> yeah, I'd like you to do some further testing with NUMA. Are you able
> to run freebsd-11 or -HEAD on these boxes?
Hi Adrian,
Yes I currently have 11 BETA3 running on them.
I could also run BETA4.
Ben
> On 04 Aug 2016, at 11:40, Ben RUBSON <ben.rub...@gmail.com> wrote:
>
>
>> On 02 Aug 2016, at 22:11, Ben RUBSON <ben.rub...@gmail.com> wrote:
>>
>>> On 02 Aug 2016, at 21:35, Hans Petter Selasky <h...@selasky.org> wrote:
>>>
> On 05 Aug 2016, at 10:30, Hans Petter Selasky <h...@selasky.org> wrote:
>
> On 08/04/16 23:49, Ben RUBSON wrote:
>>>
>>> On 04 Aug 2016, at 20:15, Ryan Stone <ryst...@gmail.com> wrote:
>>>
>>> On Thu, Aug 4, 2016 at 11:33 AM, Ben RUBS
>
> On 04 Aug 2016, at 20:15, Ryan Stone <ryst...@gmail.com> wrote:
>
> On Thu, Aug 4, 2016 at 11:33 AM, Ben RUBSON <ben.rub...@gmail.com> wrote:
> But even without RSS, I should be able to go up to 2x40Gbps, don't you think
> so?
> Has nobody done this already?
> On 04 Aug 2016, at 17:33, Hans Petter Selasky <h...@selasky.org> wrote:
>
> On 08/04/16 17:24, Ben RUBSON wrote:
>>
>>> On 04 Aug 2016, at 11:40, Ben RUBSON <ben.rub...@gmail.com> wrote:
>>>
>>>> On 02 Aug 2016, at 22:11, Ben RUBSON
> On 02 Aug 2016, at 22:11, Ben RUBSON <ben.rub...@gmail.com> wrote:
>
>> On 02 Aug 2016, at 21:35, Hans Petter Selasky <h...@selasky.org> wrote:
>>
>> The CX-3 driver doesn't bind the worker threads to specific CPU cores by
>> default, so if your CP
> On 03 Aug 2016, at 20:02, Hans Petter Selasky wrote:
>
> The mlx4 send and receive queues have each their set of taskqueues. Look in
> output from "ps auxww".
I can't find them. I even unloaded/reloaded the driver in order to catch the
differences, but I did not find any
> On 02 Aug 2016, at 21:35, Hans Petter Selasky wrote:
>
> The CX-3 driver doesn't bind the worker threads to specific CPU cores by
> default, so if your CPU has more than one so-called NUMA domain, the
> bottleneck may end up being the high-speed link between the CPU cores
> On 03 Aug 2016, at 04:32, Eugene Grosbein wrote:
>
> If you have gateway_enable="YES" (sysctl net.inet.ip.forwarding=1)
> then try to disable this forwarding setting and rerun your tests to compare
> results.
Thank you Eugene for this, but net.inet.ip.forwarding is
> On 02 Aug 2016, at 21:35, Hans Petter Selasky wrote:
>
> Hi,
Thank you for your answer Hans Petter !
> The CX-3 driver doesn't bind the worker threads to specific CPU cores by
> default, so if your CPU has more than one so-called numa, you'll end up that
> the
Hello,
I'm trying to reach the 40Gb/s max throughput between 2 hosts running a
ConnectX-3 Mellanox network adapter.
FreeBSD 10.3 just installed, latest updates applied.
Network adapters running the latest firmwares / drivers.
No workload at all, just iPerf as the benchmark tool.
### Step 1 :
I