[Bug 270964] iflib/ice(4): invalid sized packet sent via netmap triggers MDD

2023-04-23 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=270964

Mark Linimon  changed:

   What|Removed |Added

   Keywords||IntelNetworking
   Assignee|b...@freebsd.org|n...@freebsd.org

-- 
You are receiving this mail because:
You are the assignee for the bug.


[Bug 240023] netmap lb pointer out of bounds on ixgbe

2023-02-03 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240023

Piotr Kubaj  changed:

   What|Removed |Added

 Status|New |In Progress
   Assignee|n...@freebsd.org |free...@intel.com
 CC||pku...@freebsd.org

--- Comment #7 from Piotr Kubaj  ---
Does this issue happen on the currently supported FreeBSD versions? Note that
FreeBSD 11 is not supported anymore.



[Bug 260427] netmap causes packet drops

2021-12-23 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=260427

e...@norma.perm.ru changed:

   What|Removed |Added

 Resolution|--- |Overcome By Events
 Status|New |Closed

--- Comment #1 from e...@norma.perm.ru ---
Closed as misdiagnosed.



[Bug 260427] netmap causes packet drops

2021-12-15 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=260427

Mark Linimon  changed:

   What|Removed |Added

   Keywords||regression
   Assignee|b...@freebsd.org|n...@freebsd.org
Summary|[regression]: netmap causes |netmap causes packet drops
   |packet drops|



Re: e1000 & igb if_vlan netmap header stripping issue after e1000-igb driver updates.

2021-11-28 Thread Vincenzo Maffione
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=260068


e1000 & igb if_vlan netmap header stripping issue after e1000-igb driver updates.

2021-11-20 Thread Özkan KIRIK
Hello,

I'm using stable/12 (aba2dc46dfa5, Oct 24 2021). I'm hitting some
problems with if_vlan + a parent interface in netmap mode. It was
working before the driver update. Maybe something is missing in the
netmap implementation.

The way to reproduce:
[HostA] <> [HostB]

HostA
- ifconfig em1.110 create 10.10.10.2/24 up
- ping 10.10.10.1
- tcpdump -eni em1
17:05:11.393411 00:50:56:26:69:ea > 00:0c:29:84:5d:88, ethertype
802.1Q (0x8100), length 102: vlan 110, p 0, ethertype IPv4, 10.10.10.1
> 10.10.10.2: ICMP echo reply, id 32844, seq 53, length 64

HostB
- ifconfig em1.110 create 10.10.10.1/24 up
- ifconfig em1 promisc -tso -lro -rxcsum -txcsum -tso6 -rxcsum -txcsum
-tso6 -rxcsum6 -txcsum6 -vlanhwtag -vlanhwcsum -vlanhwtso
- ./bridge -i em1 -i em1^ &
# tcpdump -eni em1
17:05:11.391215 00:0c:29:84:5d:88 > 00:50:56:26:69:ea, ethertype IPv4
(0x0800), length 98: 10.10.10.2 > 10.10.10.1: ICMP echo request, id
32844, seq 53, length 64

I am pinging from HostA to HostB through if_vlan. When the netmap bridge
is not running, everything is okay and we can see the original packet in
tcpdump. But when the netmap bridge is started, the packet's VLAN header
is lost, as you can see above. The netmap bridge app is the original
tools/tools/netmap/bridge.c application.
HostA and HostB are connected back to back with a patch cable; there is
no switch between them.

I tried this test on real em and igb hardware and on a VMware e1000 (em)
NIC. The problem is easy to reproduce, but there is no such problem on
ix and ixl cards.

Could you please check and fix this?
Best Regards,
Özkan KIRIK



[Bug 230465] ixl: not working in netmap mode

2021-06-21 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230465

Mark Linimon  changed:

   What|Removed |Added

   Assignee|b...@freebsd.org|n...@freebsd.org



[Bug 230465] ixl: not working in netmap mode

2021-05-26 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230465

--- Comment #47 from Vincenzo Maffione  ---
What is the state of the TX ring (head, cur, tail) when stalling?



[Bug 230465] ixl: not working in netmap mode

2021-05-23 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230465

--- Comment #46 from strongs...@nanoteq.com ---
(In reply to strongswan from comment #45)
I did some more testing, and even with hw.ixl.enable_head_writeback = 0
I still get into a situation where no packets are transmitted. There is,
however, a much longer interval before the issue occurs.



[Bug 230465] ixl: not working in netmap mode

2021-05-18 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230465

--- Comment #44 from Vincenzo Maffione  ---
What if you set
hw.ixl.enable_head_writeback = 0
in /boot/loader.conf and reboot?
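For reference, this is a boot-time loader tunable, so it belongs in /boot/loader.conf and takes effect after a reboot; a sketch of the entry (quoting style as conventional in loader.conf(5)):

```
# /boot/loader.conf
hw.ixl.enable_head_writeback="0"
```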

___
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


[Bug 230465] ixl: not working in netmap mode

2021-05-18 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230465

s...@zxy.spb.ru changed:

   What|Removed |Added

 CC||s...@zxy.spb.ru

--- Comment #43 from s...@zxy.spb.ru ---
(In reply to Vincenzo Maffione from comment #42)

Looks like netmap doesn't work:

# /usr/obj/usr/src/amd64.amd64/tools/tools/netmap/pkt-gen -i ixl1 -f tx
321.990539 main [2921] interface is ixl1
321.990568 main [3044] using default burst size: 512
321.990573 main [3052] running on 1 cpus (have 24)
321.990640 extract_ip_range [476] range is 10.0.0.1:1234 to 10.0.0.1:1234
321.990645 extract_ip_range [476] range is 10.1.0.1:1234 to 10.1.0.1:1234
Sending on netmap:ixl1: 5 queues, 1 threads and 1 cpus.
10.0.0.1 -> 10.1.0.1 (00:00:00:00:00:00 -> ff:ff:ff:ff:ff:ff)
322.096770 main [3255] Sending 512 packets every  0.0 s
322.096813 start_threads [2580] Wait 2 secs for phy reset
324.99 start_threads [2582] Ready...
324.222365 sender_body [1599] start, fd 3 main_fd 3
324.222392 sender_body [1657] frags 1 frag_size 60
324.234391 sender_body [1695] drop copy
325.285776 main_thread [2671] 2.794 Mpps (2.971 Mpkts 1.341 Gbps in 1063411
usec) 15.05 avg_batch 0 min_space
326.348859 main_thread [2671] 0.000 pps (0.000 pkts 0.000 bps in 1063084 usec)
0.00 avg_batch 9 min_space
326.472386 sender_body [1682] poll error on queue 0: timeout
327.411859 main_thread [2671] 0.000 pps (0.000 pkts 0.000 bps in 1063000 usec)
0.00 avg_batch 9 min_space
328.473456 sender_body [1682] poll error on queue 0: timeout
328.474874 main_thread [2671] 0.000 pps (0.000 pkts 0.000 bps in 1063015 usec)
0.00 avg_batch 9 min_space
329.537820 main_thread [2671] 0.000 pps (0.000 pkts 0.000 bps in 1062945 usec)
0.00 avg_batch 9 min_space
330.474386 sender_body [1682] poll error on queue 0: timeout
330.600771 main_thread [2671] 0.000 pps (0.000 pkts 0.000 bps in 1062951 usec)
0.00 avg_batch 9 min_space
331.663860 main_thread [2671] 0.000 pps (0.000 pkts 0.000 bps in 1063090 usec)
0.00 avg_batch 9 min_space
332.475381 sender_body [1682] poll error on queue 0: timeout
332.726861 main_thread [2671] 0.000 pps (0.000 pkts 0.000 bps in 1063001 usec)
0.00 avg_batch 9 min_space
^C333.671467 sigint_h [573] received control-C on thread 0x800a12000
333.671475 main_thread [2671] 0.000 pps (0.000 pkts 0.000 bps in 944614 usec)
0.00 avg_batch 9 min_space
334.476434 sender_body [1737] flush tail 576 head 576 on thread 0x800a12700
334.734834 main_thread [2671] 0.000 pps (0.000 pkts 0.000 bps in 1063359 usec)
0.00 avg_batch 9 min_space
Sent 2971392 packets 178283520 bytes 197414 events 60 bytes each in 10.25
seconds.
Speed: 289.777 Kpps Bandwidth: 139.093 Mbps (raw 139.093 Mbps). Average batch:
15.05 pkts


Additionally, in my application I see logical errors from the kernel:

I send 3 packets on ring 0; c/h/t is 3/3/2047.
I do NIOCTXSYNC; c/h/t is 3/3/0.
Without sending any more packets, I do NIOCTXSYNC again; c/h/t is now 3/3/3!
i.e. it looks like the TX ring is full and stalled. All transmission
stalls after this.

This is on 13-stable.



[Bug 255671] ixl(4) netmap pkt-gen stops transmitting

2021-05-10 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=255671

--- Comment #2 from Brian Poole  ---
Hello Ozkan,

Thank you very much for your comment! I have not seen any failures in tx/rx or
tx/tx configurations since setting hw.ixl.enable_head_writeback=0. Now that I
know what to search for, I found multiple posts suggesting setting that sysctl
to zero for better stability. I agree it would be preferable to not require
manual adjustments to ixl.



[Bug 255671] ixl(4) netmap pkt-gen stops transmitting

2021-05-10 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=255671

Mark Linimon  changed:

   What|Removed |Added

   Keywords||IntelNetworking



[Bug 255671] ixl(4) netmap pkt-gen stops transmitting

2021-05-08 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=255671

Ozkan KIRIK  changed:

   What|Removed |Added

 CC||ozkan.ki...@gmail.com

--- Comment #1 from Ozkan KIRIK  ---
Hello,

I hit the same problem before and discussed it with Vincenzo Maffione.
He told me that the ixl driver has issues, but setting
hw.ixl.enable_head_writeback=0
in loader.conf helps. Setting this loader tunable solved my problem.

I hope it helps you as well.

It would be good if this issue could be fixed within the ixl driver.

Regards,
Ozkan



[Bug 255671] ixl(4) netmap pkt-gen stops transmitting

2021-05-08 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=255671

Lutz Donnerhacke  changed:

   What|Removed |Added

   Assignee|b...@freebsd.org|n...@freebsd.org



Re: ixl netmap TX queue remains full

2021-03-31 Thread Vincenzo Maffione
Hi Özkan,
  I'm glad that worked.
Nevertheless, there must be an issue lurking around in the ixl driver code,
affecting the case enable_head_writeback==1.
It may be related to the fact that https://reviews.freebsd.org/D26896
causes issues, even though it looks like a legitimate change.

Cheers,
  Vincenzo



Re: ixl netmap TX queue remains full

2021-03-30 Thread Özkan KIRIK
Hello Vincenzo,

Before your email, hw.ixl.enable_head_writeback was 1. Following your
suggestion, I set hw.ixl.enable_head_writeback = 0, and now it works properly.

Thank you so much

Cheers
Özkan



Re: ixl netmap TX queue remains full

2021-03-30 Thread Vincenzo Maffione
Hi,
  Could this be related to
https://reviews.freebsd.org/D26896?

Moreover, what happens if you switch the enable_head_writeback sysctl?

Cheers,
  Vincenzo



ixl netmap TX queue remains full

2021-03-29 Thread Özkan KIRIK
Hello,

I hit problems with the ixl driver's netmap support; I have no problems
with ixgbe. The problem was reproduced on FreeBSD 12.2-p5 and FreeBSD
13.0-RC3.

With ixl in netmap mode, it works at low throughput (about 2 Gbps) for
20-30 seconds, and then the TX queue remains full. poll() with POLLOUT
and even ioctl(fd, NIOCTXSYNC) do not help, so the NIC stops working.

The same netmap software has no problems with ixgbe.

pciconf -lv output:
ixl0@pci0:183:0:0: class=0x02 card=0x37d215d9 chip=0x37d28086 rev=0x04
hdr=0x00
vendor = 'Intel Corporation'
device = 'Ethernet Connection X722 for 10GBASE-T'
class  = network
subclass   = ethernet
ixl1@pci0:183:0:1: class=0x02 card=0x37d215d9 chip=0x37d28086 rev=0x04
hdr=0x00
vendor = 'Intel Corporation'
device = 'Ethernet Connection X722 for 10GBASE-T'
class  = network
subclass   = ethernet
ixl2@pci0:183:0:2: class=0x02 card=0x37d015d9 chip=0x37d08086 rev=0x04
hdr=0x00
vendor = 'Intel Corporation'
device = 'Ethernet Connection X722 for 10GbE SFP+'
class  = network
subclass   = ethernet
ixl3@pci0:183:0:3: class=0x02 card=0x37d015d9 chip=0x37d08086 rev=0x04
hdr=0x00
vendor = 'Intel Corporation'
device = 'Ethernet Connection X722 for 10GbE SFP+'
class  = network
subclass   = ethernet

Best regards


[Bug 230465] ixl: not working in netmap mode

2021-02-24 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230465

--- Comment #42 from Vincenzo Maffione  ---
(In reply to Charles Goncalves from comment #41)
I don't have a test environment either.
But since ixl uses iflib on 12.x and 13.x, I expect this issue has gone away.



[Bug 230465] ixl: not working in netmap mode

2021-02-23 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230465

--- Comment #41 from Charles Goncalves  ---
(In reply to Vincenzo Maffione from comment #40)
Hello Vincenzo!

Is this issue present on FreeBSD 12.2 or 13.0? I don't have a test
environment right now, but I will upgrade a production router from 12.1
to 12.2 and then to 13.0 in the coming months, and then I can test it.

This router has an ixl NIC (chip=0x15838086, Ethernet Controller XL710
for 40GbE QSFP+).



[Bug 230465] ixl: not working in netmap mode

2021-02-23 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230465

Charles Goncalves  changed:

   What|Removed |Added

Version|11.2-STABLE |12.1-STABLE



Re: Netmap Library not getting installed on custom kernel installation

2020-12-17 Thread Olivier Cochard-Labbé
On Wed, Dec 16, 2020 at 9:53 PM Vincenzo Maffione 
wrote:

>
> On a side note, the netmap tools (pkt-gen, bridge, lb, etc.) should really
> be a port. Another TODO item.
>
>
There is already one port for an old version of pkt-gen:
https://svnweb.freebsd.org/ports/head/net/pkt-gen/

And here is a custom port patch that upgrades this port to a not-so-old
version, including a quick fix for a range bug (a cleaner version is
fresh from today's official netmap GitHub), and adds a new option to
skip the software IP and UDP checksums by default (because they consume
a lot of resources on 40G or 100G NICs, and Chelsio NICs can do hardware
checksums in netmap mode):
https://github.com/ocochard/BSDRP/blob/master/BSDRP/patches/ports.pkt-gen.patch

Regards,

Olivier


Re: Netmap Library not getting installed on custom kernel installation

2020-12-16 Thread Rajesh Kumar
Hi,

I got around this issue by manually copying the necessary files into /usr/obj.

It looks like libnetmap (src/lib/libnetmap) is not built and installed by
default. Manually building it and copying "libnetmap.h" and
"libnetmap.so" (not just the header file) to the appropriate directories
in /usr/obj gets past the issue. But I am not sure why libnetmap is not
built and installed even though "device netmap" is set in the config file.

Manually copying may not be the right approach. Can anyone suggest a
cleaner way of getting this done?

Thanks,
Rajesh.



Netmap Library not getting installed on custom kernel installation

2020-12-16 Thread Rajesh Kumar
Hi,

I am trying to compile the netmap tools (pkt-gen, bridge, etc.) and am
getting the error below:

/root//freebsd/tools/tools/netmap/pkt-gen.c:47:10: fatal error:
'libnetmap.h' file not found
#include <libnetmap.h>
1 error generated.
*** Error code 1

On debugging, I don't see the libnetmap.h file installed in the /usr/obj/
directory, whereas on another similar machine the file is present in
/usr/obj and compilation of the netmap tools goes fine. As a test, I
copied libnetmap.h from the source tree, but that leads to a linker
error, so it seems the libnetmap library is not installed properly.

I installed a custom kernel from the FreeBSD-CURRENT branch with just the
debug options disabled. After rebooting into the custom kernel, I tried
to compile the netmap tools (with some changes) and ran into the above
error.

How do I get the netmap tools compiled in this scenario? Am I missing
something?


Re: Netmap bridge not working with 10G Ethernet ports

2020-11-25 Thread Rajesh Kumar
Hi Vincenzo,


On Tue, Nov 24, 2020 at 8:54 PM Vincenzo Maffione 
wrote:

>
> Yeah, it's weird because axgbe also uses iflib(4). If the driver exposes
> NIC head/tail pointers (sysctl) it may be useful to check what happens
> there.
> It may be that the NIC is dropping these packets for some reason.
>

It looks like the driver's "ifdi_promisc_set" routine is not getting
triggered properly. Forcibly setting promiscuous mode from the driver
solves the packet drop issue, and now I see the ARP reply packet.

axgbe has split-header support, which causes trouble starting with ping
packets due to an incompatibility with iflib/netmap and the associated
utilities. I made some changes to the driver, iflib, and the netmap
utilities, and I have the netmap bridge working with axgbe now.

I am working on a clean fix for the promiscuous mode setting and the
incompatibility issue, and will submit the changes for review once done.

Thanks for your inputs and support.

Done in r367920.
>

Thank you.

Thanks,
Rajesh.


Re: Netmap bridge not working with 10G Ethernet ports

2020-11-23 Thread Rajesh Kumar
Hi Vincenzo,

Thanks for pointing this out.

On Sat, Nov 21, 2020 at 10:40 PM Vincenzo Maffione 
wrote:

> # ifconfig ix0 promisc
> # ifconfig ix1 promisc
>
> This is an additional requirement when using netmap bridge, because that
> is not done automatically (differently from what happens with if_bridge(4)).
> If promisc is not enabled, the NIC will drop any unicast packet that is
> not directed to the NIC's address (e.g. the ARP reply in your case).
> Broadcast packets will of course pass (e.g. the ARP request). This explains
> the absence of IRQs and the head/tail pointers not being updated.
> So no bugs AFAIK.
>

Setting the interfaces to promiscuous mode makes things work properly.

I tried the same with the AMD ports and it's still not working; I
believe this is something specific to the if_axp driver. I will see what
is going wrong with packet forwarding on the AMD ports. Thanks for
pointing this out.

I figured it out the hard way, but it was actually also documented on the
> github (https://github.com/luigirizzo/netmap#receiver-does-not-receive).
> I will add it to the netmap bridge man page.
>

That would be helpful. Thanks.


> Il giorno sab 21 nov 2020 alle ore 17:06 Vincenzo Maffione <
> vmaffi...@freebsd.org> ha scritto:
>
>>
>>
>> Il giorno ven 20 nov 2020 alle ore 14:31 Rajesh Kumar 
>> ha scritto:
>>
>>> Hi Vincenzo,
>>>
>>> On Fri, Nov 20, 2020 at 3:20 AM Vincenzo Maffione 
>>> wrote:
>>>
>>>>
>>>> Ok, now it makes sense. Thanks for clarifying. I see that if_axp(4)
>>>> uses iflib(4). This means that actually if_axp(4) has native netmap
>>>> support, because iflib(4) has native netmap support.
>>>>
>>>>
>>> It means that the driver has some modifications to allow netmap to
>>>> directly program the NIC rings. These modifications are mostly the
>>>> per-driver txsync and rxsyng routines.
>>>> In case of iflib(4) drivers, these modifications are provided directly
>>>> within the iflib(4) code, and therefore any driver using iflib will have
>>>> native netmap support.
>>>>
>>>
>>> Thanks for clarifying on the Native Netmap support.
>>>
>>> Ok, this makes sense, because also ix(4) uses iflib, and therefore you
>>>> are basically hitting the same issue of if_axp(4)
>>>> At this point I must think that there is still some issue with the
>>>> interaction between iflib(4) and netmap(4).
>>>>
>>>
>>> Ok. Let me know if any more debug info needed in this part.
>>>
>>> I see. This info may be useful. Have you tried to look at interrupts
>>>> (e.g. `vmstat -i`), to see if "ix0" gets any RX interrupts (for the missing
>>>> ARP replies)?
>>>>
>>>
>>> It's interesting here. When I try with Intel NIC card. I see atleast 1
>>> interrupt raised.  But not sure whether that is for ARP reply. Because,
>>> when I try to dump the packet from "bridge"(modified) utility, I don't see
>>> any ARP reply packet getting dumped.
>>>
>>>
>>> irq59: ix0:rxq0   1   0   (only 1 interrupt on the opposite side)
>>> irq67: ix0:aq 2   0
>>>
>>> irq68: ix1:rxq0   3   0   (you can see 3 interrupts for the 3 ARP
>>> requests from System 1)
>>> irq76: ix1:aq 2   0
>>>
>>> The same experiment, when I try with AMD inbuilt ports, I don't see that
>>> 1 interrupt also raised.
>>>
>>> irq81: ax0:dev_irq16  0
>>> irq83: ax0  2541  4
>>> irq93: ax1:dev_irq27  0
>>> irq95: ax1  2371  3
>>> irq97: ax1:rxq0   3   0   (you can see 3 interrupts for the 3 ARP
>>> requests from System 1, but no interrupt is seen from "ax0:rxq0" for
>>> the ARP reply from System 2)
>>>
>>> I will do some more testing to see whether this behavior is consistent
>>> or intermittent.
>>>
>>> Also the igb(4) driver is using iflib(4). So the involved netmap code is
>>>> the same as ix(4) and if_axp(4).
>>>> This is something that I'm not able to understand right now.
>>>> It does not look like something related to offloads.
>>>>
>>>> Next week I will try to see if I can reproduce your issue with em(4),
>>>> and report back. That's still an Intel 

Re: Netmap bridge not working with 10G Ethernet ports

2020-11-21 Thread Vincenzo Maffione
Hi Rajesh,
  I think the issue here is simply that you have not enabled promiscuous
mode on your interfaces.
# ifconfig ix0 promisc
# ifconfig ix1 promisc

This is an additional requirement when using netmap bridge, because that is
not done automatically (differently from what happens with if_bridge(4)).
If promisc is not enabled, the NIC will drop any unicast packet that is not
directed to the NIC's address (e.g. the ARP reply in your case). Broadcast
packets will of course pass (e.g. the ARP request). This explains the
absence of IRQs and the head/tail pointers not being updated.
So no bugs AFAIK.
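The drop behaviour described above can be sketched with a toy model (illustrative only: real NICs also have multicast/VLAN filters, and the MAC addresses below are made up):

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

def nic_accepts(dst_mac: str, nic_mac: str, promisc: bool) -> bool:
    # Toy model of a NIC's RX MAC filter: broadcast frames always pass;
    # unicast frames pass only if addressed to this NIC, unless promisc.
    if promisc or dst_mac == BROADCAST:
        return True
    return dst_mac == nic_mac

bridge_nic = "a0:36:9f:a5:49:90"   # made-up MAC of the bridge port
end_host   = "00:11:22:33:44:55"   # made-up MAC of the host behind it

# ARP request (broadcast): passes even without promisc.
assert nic_accepts(BROADCAST, bridge_nic, promisc=False)
# ARP reply (unicast to the end host, not to the bridge port):
# dropped without promisc, visible with promisc.
assert not nic_accepts(end_host, bridge_nic, promisc=False)
assert nic_accepts(end_host, bridge_nic, promisc=True)
```

This matches the symptom in the thread: broadcast ARP requests cross the bridge, while the unicast ARP replies never reach the rings.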

I figured it out the hard way, but it is actually also documented on
GitHub (https://github.com/luigirizzo/netmap#receiver-does-not-receive).
I will add it to the netmap bridge man page.

Cheers,
  Vincenzo


Il giorno sab 21 nov 2020 alle ore 17:06 Vincenzo Maffione <
vmaffi...@freebsd.org> ha scritto:

>
>
> Il giorno ven 20 nov 2020 alle ore 14:31 Rajesh Kumar 
> ha scritto:
>
>> Hi Vincenzo,
>>
>> On Fri, Nov 20, 2020 at 3:20 AM Vincenzo Maffione 
>> wrote:
>>
>>>
>>> Ok, now it makes sense. Thanks for clarifying. I see that if_axp(4) uses
>>> iflib(4). This means that actually if_axp(4) has native netmap support,
>>> because iflib(4) has native netmap support.
>>>
>>>
>> It means that the driver has some modifications to allow netmap to
>>> directly program the NIC rings. These modifications are mostly the
>>> per-driver txsync and rxsync routines.
>>> In case of iflib(4) drivers, these modifications are provided directly
>>> within the iflib(4) code, and therefore any driver using iflib will have
>>> native netmap support.
>>>
>>
>> Thanks for clarifying on the Native Netmap support.
>>
>> Ok, this makes sense, because also ix(4) uses iflib, and therefore you
>>> are basically hitting the same issue of if_axp(4)
>>> At this point I must think that there is still some issue with the
>>> interaction between iflib(4) and netmap(4).
>>>
>>
>> Ok. Let me know if any more debug info needed in this part.
>>
>> I see. This info may be useful. Have you tried to look at interrupts
>>> (e.g. `vmstat -i`), to see if "ix0" gets any RX interrupts (for the missing
>>> ARP replies)?
>>>
>>
>> It's interesting here. When I try with the Intel NIC card, I see at least 1
>> interrupt raised.  But not sure whether that is for ARP reply. Because,
>> when I try to dump the packet from "bridge"(modified) utility, I don't see
>> any ARP reply packet getting dumped.
>>
>>
>> *irq59: ix0:rxq01  0 (only 1 interrupt on
>> the opposite side)*irq67: ix0:aq  2  0
>>
>> *irq68: ix1:rxq03  0  (you can see 3
>> interrupts for 3 ARP requests from System 1)*irq76: ix1:aq
>>2  0
>>
>> The same experiment, when I try with AMD inbuilt ports, I don't see that
>> 1 interrupt also raised.
>>
>> irq81: ax0:dev_irq16  0
>> irq83: ax0  2541  4
>> irq93: ax1:dev_irq27  0
>> irq95: ax1  2371  3
>> *irq97: ax1:rxq03  0 (you can see 3
>> interrupts for 3 ARP requests from System 1, but no interrupt is seen from
>> "ax0:rxq0" for ARP reply from System 2)*
>>
>> I will do some more testing to see whether this behavior is consistent or
>> intermittent.
>>
>> Also the igb(4) driver is using iflib(4). So the involved netmap code is
>>> the same as ix(4) and if_axp(4).
>>> This is something that I'm not able to understand right now.
>>> It does not look like something related to offloads.
>>>
>>> Next week I will try to see if I can reproduce your issue with em(4),
>>> and report back. That's still an Intel driver using iflib(4).
>>>
>>
>> The "igb(4)" driver, with which things are working now is related to
>> em(4) driver (may be for newer hardware version).  Initially we faced
>> similar issue with igb(4) driver as well. After reverting the following
>> commits, things started to work.  Thanks to Stephan Dewt (copied) for
>> pointing this.  But it still fails with ix(4) driver and if_axp(4) driver.
>>
>>
>> https://github.com/freebsd/freebsd/commit/e12efc2c9e434075d0740e2e2e9e2fca2ad5f7cf
>>
>> Thanks for providing your inputs on this issue Vincenzo.  Let me know for
>> any more

Re: Netmap bridge not working with 10G Ethernet ports

2020-11-21 Thread Vincenzo Maffione
Il giorno ven 20 nov 2020 alle ore 14:31 Rajesh Kumar 
ha scritto:

> Hi Vincenzo,
>
> On Fri, Nov 20, 2020 at 3:20 AM Vincenzo Maffione 
> wrote:
>
>>
>> Ok, now it makes sense. Thanks for clarifying. I see that if_axp(4) uses
>> iflib(4). This means that actually if_axp(4) has native netmap support,
>> because iflib(4) has native netmap support.
>>
>>
> It means that the driver has some modifications to allow netmap to
>> directly program the NIC rings. These modifications are mostly the
>> per-driver txsync and rxsync routines.
>> In case of iflib(4) drivers, these modifications are provided directly
>> within the iflib(4) code, and therefore any driver using iflib will have
>> native netmap support.
>>
>
> Thanks for clarifying on the Native Netmap support.
>
> Ok, this makes sense, because also ix(4) uses iflib, and therefore you are
>> basically hitting the same issue of if_axp(4)
>> At this point I must think that there is still some issue with the
>> interaction between iflib(4) and netmap(4).
>>
>
> Ok. Let me know if any more debug info needed in this part.
>
> I see. This info may be useful. Have you tried to look at interrupts (e.g.
>> `vmstat -i`), to see if "ix0" gets any RX interrupts (for the missing ARP
>> replies)?
>>
>
> It's interesting here. When I try with the Intel NIC card, I see at least 1
> interrupt raised.  But not sure whether that is for ARP reply. Because,
> when I try to dump the packet from "bridge"(modified) utility, I don't see
> any ARP reply packet getting dumped.
>
>
> *irq59: ix0:rxq01  0 (only 1 interrupt on
> the opposite side)*irq67: ix0:aq  2  0
>
> *irq68: ix1:rxq03  0  (you can see 3
> interrupts for 3 ARP requests from System 1)*irq76: ix1:aq
>2  0
>
> The same experiment, when I try with AMD inbuilt ports, I don't see that 1
> interrupt also raised.
>
> irq81: ax0:dev_irq16  0
> irq83: ax0  2541  4
> irq93: ax1:dev_irq27  0
> irq95: ax1  2371  3
> *irq97: ax1:rxq03  0 (you can see 3
> interrupts for 3 ARP requests from System 1, but no interrupt is seen from
> "ax0:rxq0" for ARP reply from System 2)*
>
> I will do some more testing to see whether this behavior is consistent or
> intermittent.
>
> Also the igb(4) driver is using iflib(4). So the involved netmap code is
>> the same as ix(4) and if_axp(4).
>> This is something that I'm not able to understand right now.
>> It does not look like something related to offloads.
>>
>> Next week I will try to see if I can reproduce your issue with em(4), and
>> report back. That's still an Intel driver using iflib(4).
>>
>
> The "igb(4)" driver, with which things are working now is related to em(4)
> driver (may be for newer hardware version).  Initially we faced similar
> issue with igb(4) driver as well. After reverting the following commits,
> things started to work.  Thanks to Stephan Dewt (copied) for pointing
> this.  But it still fails with ix(4) driver and if_axp(4) driver.
>
>
> https://github.com/freebsd/freebsd/commit/e12efc2c9e434075d0740e2e2e9e2fca2ad5f7cf
>
> Thanks for providing your inputs on this issue Vincenzo.  Let me know for
> any more details that you need.
>
>
I was able to reproduce your issue on FreeBSD-CURRENT running within a QEMU
VM, with two em(4) devices and the netmap bridge running between them.
I see the ARP request packet received on em0 (with associated IRQ), and
forwarded on em1. However, the ARP reply coming on em1 does not trigger an
IRQ on em1, and indeed the NIC RX head/tail pointers are not incremented as
they should (`sysctl -a | grep em.1 | grep queue_rx`) ... that is weird,
and leads me to think that the issue is more likely driver-related than
netmap/iflib-related.
In any case, would you mind filing the issue on Bugzilla, so that we can
track it properly?

Thanks,
  Vincenzo


> Thanks,
> Rajesh.
>
___
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Netmap bridge not working with 10G Ethernet ports

2020-11-20 Thread Rajesh Kumar
Hi Vincenzo,

On Fri, Nov 20, 2020 at 3:20 AM Vincenzo Maffione 
wrote:

>
> Ok, now it makes sense. Thanks for clarifying. I see that if_axp(4) uses
> iflib(4). This means that actually if_axp(4) has native netmap support,
> because iflib(4) has native netmap support.
>
>
It means that the driver has some modifications to allow netmap to directly
> program the NIC rings. These modifications are mostly the per-driver txsync
> and rxsync routines.
> In case of iflib(4) drivers, these modifications are provided directly
> within the iflib(4) code, and therefore any driver using iflib will have
> native netmap support.
>

Thanks for clarifying on the Native Netmap support.

Ok, this makes sense, because also ix(4) uses iflib, and therefore you are
> basically hitting the same issue of if_axp(4)
> At this point I must think that there is still some issue with the
> interaction between iflib(4) and netmap(4).
>

Ok. Let me know if any more debug info needed in this part.

I see. This info may be useful. Have you tried to look at interrupts (e.g.
> `vmstat -i`), to see if "ix0" gets any RX interrupts (for the missing ARP
> replies)?
>

It's interesting here. When I try with the Intel NIC card, I see at least
one interrupt raised, but I'm not sure whether that is for the ARP reply,
because when I try to dump the packet from the (modified) "bridge"
utility, I don't see any ARP reply packet getting dumped.


*irq59: ix0:rxq01  0 (only 1 interrupt on
the opposite side)*irq67: ix0:aq  2  0

*irq68: ix1:rxq03  0  (you can see 3
interrupts for 3 ARP requests from System 1)*irq76: ix1:aq
 2  0

The same experiment, when I try with AMD inbuilt ports, I don't see that 1
interrupt also raised.

irq81: ax0:dev_irq16  0
irq83: ax0  2541  4
irq93: ax1:dev_irq27  0
irq95: ax1  2371  3
*irq97: ax1:rxq03  0 (you can see 3
interrupts for 3 ARP requests from System 1, but no interrupt is seen from
"ax0:rxq0" for ARP reply from System 2)*

I will do some more testing to see whether this behavior is consistent or
intermittent.

Also the igb(4) driver is using iflib(4). So the involved netmap code is
> the same as ix(4) and if_axp(4).
> This is something that I'm not able to understand right now.
> It does not look like something related to offloads.
>
> Next week I will try to see if I can reproduce your issue with em(4), and
> report back. That's still an Intel driver using iflib(4).
>

The "igb(4)" driver, with which things are working now is related to em(4)
driver (may be for newer hardware version).  Initially we faced similar
issue with igb(4) driver as well. After reverting the following commits,
things started to work.  Thanks to Stephan Dewt (copied) for pointing
this.  But it still fails with ix(4) driver and if_axp(4) driver.

https://github.com/freebsd/freebsd/commit/e12efc2c9e434075d0740e2e2e9e2fca2ad5f7cf

Thanks for providing your inputs on this issue Vincenzo.  Let me know for
any more details that you need.

Thanks,
Rajesh.


Re: Netmap bridge not working with 10G Ethernet ports

2020-11-19 Thread Vincenzo Maffione
Il giorno gio 19 nov 2020 alle ore 12:28 Rajesh Kumar 
ha scritto:

> Hi Vincenzo,
>
> Thanks for your reply.
>
> On Thu, Nov 19, 2020 at 3:16 AM Vincenzo Maffione 
> wrote:
>
>>
>> This looks like if_axe(4) driver, and therefore there's no native netmap
>> support, which means you are falling back on
>> the emulated netmap adapter. Are these USB dongles? If so, how can they
>> be 10G?
>>
>
> The Driver I am working with is "if_axp" (sys/dev/axgbe).  This is AMD
> 10Gigabit Ethernet Driver. This is recently committed upstream. Yes, it
> doesn't have a Native netmap support, but uses the netmap stack which is
> existing already.  These are inbuilt SFP ports with our test board and not
> USB dongles.
>

Ok, now it makes sense. Thanks for clarifying. I see that if_axp(4) uses
iflib(4). This means that actually if_axp(4) has native netmap support,
because iflib(4) has native netmap support.


> Does Native netmap mean the hardware capability which needs to be
> programmed appropriately from driver side?  Any generic documentation
> regarding the same?
>

It means that the driver has some modifications to allow netmap to directly
program the NIC rings. These modifications are mostly the per-driver txsync
and rxsync routines.
In case of iflib(4) drivers, these modifications are provided directly
within the iflib(4) code, and therefore any driver using iflib will have
native netmap support.
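For intuition about what those txsync/rxsync routines maintain, here is a toy Python model of netmap's TX ring pointer discipline (a sketch, not the real kernel API; the 8-slot ring and the initial pointer values are assumptions): the application fills slots and advances head, while the driver advances tail as the NIC completes transmissions.

```python
class ToyTxRing:
    # Toy model of a netmap TX ring: the app fills slots and advances
    # head/cur; txsync hands the new slots to the NIC; the driver
    # advances tail as the hardware reports completions.
    def __init__(self, num_slots: int):
        self.n = num_slots
        self.head = self.cur = 0
        self.tail = num_slots - 1   # initially (almost) all slots free

    def space(self) -> int:
        # Slots the app may still fill (like nm_ring_space())
        return (self.tail - self.cur) % self.n

    def app_fill(self, k: int) -> None:
        # Application writes k packets and advances head/cur
        assert k <= self.space()
        self.head = self.cur = (self.head + k) % self.n

    def driver_complete(self, k: int) -> None:
        # txsync path: hardware finished k slots, tail advances
        self.tail = (self.tail + k) % self.n

ring = ToyTxRing(8)
assert ring.space() == 7
ring.app_fill(3)          # app queues 3 packets
assert ring.space() == 4
ring.driver_complete(3)   # NIC transmits them
assert ring.space() == 7
```

In an iflib driver these pointer updates are performed by the shared iflib netmap code, which is why any iflib-based driver gets native support for free.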


>
>> In this kind of configuration it is mandatory to disable all the NIC
>> offloads, because netmap does not program the NIC
>> to honor them, e.g.:
>>
>> # ifconfig ax0 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6
>> # ifconfig ax1 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6
>>
>
> Earlier, I haven't tried disabling the Offload capabilities.  But I tried
> now, but it still behaves the same way.  ARP replies doesn't seem to reach
> the bridge (or dropped) to be forwarded.  I will collect the details for
> AMD driver. Tried the same test with another 10G card (Intel "ix" driver)
> also exhibits similar behavior.  Details below.
>

Ok, this makes sense, because also ix(4) uses iflib, and therefore you are
basically hitting the same issue of if_axp(4)
At this point I must think that there is still some issue with the
interaction between iflib(4) and netmap(4).


>
>
>> a) I tried with another vendor 10G NIC card. It behaves the same way. So
>>> this issue doesn't seem to be generic and not hardware specific.
>>>
>>
>> Which driver are those NICs using? That makes the difference. I guess
>> it's still a driver with no native netmap support, hence
>> you are using the same emulated adapter
>>
>
> I am using the "ix" driver (Intel 10G NIC adapter).  I guess this driver
> also doesn't support Native Netmap.  Please correct me if I am wrong.  I
> tried disabling the offload capabilities with this device/driver and tested
> and still observed the netmap bridging fails.
>

As I stated above, ix(4) has netmap support, like any iflib(4) driver.


> root@fbsd_cur# sysctl dev.ix.0 | grep tx_packets
> dev.ix.0.queue7.tx_packets: 0
> dev.ix.0.queue6.tx_packets: 0
> dev.ix.0.queue5.tx_packets: 0
> dev.ix.0.queue4.tx_packets: 0
> dev.ix.0.queue3.tx_packets: 0
> dev.ix.0.queue2.tx_packets: 0
> dev.ix.0.queue1.tx_packets: 0
> *dev.ix.0.queue0.tx_packets: 3*
> root@fbsd_cur# sysctl dev.ix.0 | grep rx_packets
> dev.ix.0.queue7.rx_packets: 0
> dev.ix.0.queue6.rx_packets: 0
> dev.ix.0.queue5.rx_packets: 0
> dev.ix.0.queue4.rx_packets: 0
> dev.ix.0.queue3.rx_packets: 0
> dev.ix.0.queue2.rx_packets: 0
> dev.ix.0.queue1.rx_packets: 0
> dev.ix.0.queue0.rx_packets: 0
> root@fbsd_cur # sysctl dev.ix.1 | grep tx_packets
> dev.ix.1.queue7.tx_packets: 0
> dev.ix.1.queue6.tx_packets: 0
> dev.ix.1.queue5.tx_packets: 0
> dev.ix.1.queue4.tx_packets: 0
> dev.ix.1.queue3.tx_packets: 0
> dev.ix.1.queue2.tx_packets: 0
> dev.ix.1.queue1.tx_packets: 0
> dev.ix.1.queue0.tx_packets: 0
> root@fbsd_cur # sysctl dev.ix.1 | grep rx_packets
> dev.ix.1.queue7.rx_packets: 0
> dev.ix.1.queue6.rx_packets: 0
> dev.ix.1.queue5.rx_packets: 0
> dev.ix.1.queue4.rx_packets: 0
> dev.ix.1.queue3.rx_packets: 0
> dev.ix.1.queue2.rx_packets: 0
> dev.ix.1.queue1.rx_packets: 0
>
> *dev.ix.1.queue0.rx_packets: 3*
>
> You can see "ix1" received 3 packets (ARP requests) from system 1 and
> transmitted 3 packets to system 2 via "ix0". But ARP reply from system 2 is
> not captured or forwarded properly.*
>

I see. This info may be useful. Have you tried to look at interrupts (e.g.
`vmstat -i`), to see if "ix0" gets any RX interrupts (for the missing ARP
replies)?



Re: Netmap bridge not working with 10G Ethernet ports

2020-11-19 Thread Rajesh Kumar
Hi Vincenzo,

Thanks for your reply.

On Thu, Nov 19, 2020 at 3:16 AM Vincenzo Maffione 
wrote:

>
> This looks like if_axe(4) driver, and therefore there's no native netmap
> support, which means you are falling back on
> the emulated netmap adapter. Are these USB dongles? If so, how can they be
> 10G?
>

The driver I am working with is "if_axp" (sys/dev/axgbe). This is the AMD
10-Gigabit Ethernet driver, recently committed upstream. Yes, it doesn't
have native netmap support of its own, but uses the existing netmap
stack. These are built-in SFP ports on our test board, not USB dongles.

Does native netmap mean a hardware capability that needs to be programmed
appropriately from the driver side? Is there any generic documentation
about this?


> In this kind of configuration it is mandatory to disable all the NIC
> offloads, because netmap does not program the NIC
> to honor them, e.g.:
>
> # ifconfig ax0 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6
> # ifconfig ax1 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6
>

Earlier, I hadn't tried disabling the offload capabilities. I tried it
now, but it still behaves the same way: ARP replies don't seem to reach
the bridge (or are dropped) before being forwarded. I will collect the
details for the AMD driver. The same test with another 10G card (Intel
"ix" driver) exhibits similar behavior. Details below.


> a) I tried with another vendor 10G NIC card. It behaves the same way. So
>> this issue doesn't seem to be generic and not hardware specific.
>>
>
> Which driver are those NICs using? That makes the difference. I guess it's
> still a driver with no native netmap support, hence
> you are using the same emulated adapter
>

I am using the "ix" driver (Intel 10G NIC adapter).  I guess this driver
also doesn't support Native Netmap.  Please correct me if I am wrong.  I
tried disabling the offload capabilities with this device/driver, and
still observed that the netmap bridging fails.

root@fbsd_cur# sysctl dev.ix.0 | grep tx_packets
dev.ix.0.queue7.tx_packets: 0
dev.ix.0.queue6.tx_packets: 0
dev.ix.0.queue5.tx_packets: 0
dev.ix.0.queue4.tx_packets: 0
dev.ix.0.queue3.tx_packets: 0
dev.ix.0.queue2.tx_packets: 0
dev.ix.0.queue1.tx_packets: 0
*dev.ix.0.queue0.tx_packets: 3*
root@fbsd_cur# sysctl dev.ix.0 | grep rx_packets
dev.ix.0.queue7.rx_packets: 0
dev.ix.0.queue6.rx_packets: 0
dev.ix.0.queue5.rx_packets: 0
dev.ix.0.queue4.rx_packets: 0
dev.ix.0.queue3.rx_packets: 0
dev.ix.0.queue2.rx_packets: 0
dev.ix.0.queue1.rx_packets: 0
dev.ix.0.queue0.rx_packets: 0
root@fbsd_cur # sysctl dev.ix.1 | grep tx_packets
dev.ix.1.queue7.tx_packets: 0
dev.ix.1.queue6.tx_packets: 0
dev.ix.1.queue5.tx_packets: 0
dev.ix.1.queue4.tx_packets: 0
dev.ix.1.queue3.tx_packets: 0
dev.ix.1.queue2.tx_packets: 0
dev.ix.1.queue1.tx_packets: 0
dev.ix.1.queue0.tx_packets: 0
root@fbsd_cur # sysctl dev.ix.1 | grep rx_packets
dev.ix.1.queue7.rx_packets: 0
dev.ix.1.queue6.rx_packets: 0
dev.ix.1.queue5.rx_packets: 0
dev.ix.1.queue4.rx_packets: 0
dev.ix.1.queue3.rx_packets: 0
dev.ix.1.queue2.rx_packets: 0
dev.ix.1.queue1.rx_packets: 0

*dev.ix.1.queue0.rx_packets: 3*

You can see "ix1" received 3 packets (ARP requests) from system 1 and
transmitted 3 packets to system 2 via "ix0". But ARP reply from system 2 is
not captured or forwarded properly.

You can see the checksum features disabled (except VLAN_HWCSUM) on both
interfaces.  And you can see both interfaces active and Link up.

root@fbsd_cur # ifconfig -a
ix0: flags=8862 metric 0 mtu 1500

options=48538b8
ether a0:36:9f:a5:49:90
media: Ethernet autoselect (100baseTX )
status: active
nd6 options=29

ix1: flags=8862 metric 0 mtu 1500

options=48538b8
ether a0:36:9f:a5:49:92
media: Ethernet autoselect (1000baseT )
status: active
nd6 options=29

>
> b) Trying with another vendor 1G NIC card, things are working.  So not
>> sure, what makes a difference here.  The ports in System 1 and System 2
>> are
>> USB attached Ethernet capable of maximum speed of 1G.  So does connecting
>> 1G to 10G bridge ports is having any impact?
>>
>
> I don't think so. On each p2p link the NICs will negotiate 1G speed.
> In any case, what driver was this one?
>

This is the "igb" driver, an Intel 1G NIC card.

Thanks,
Rajesh.


Re: Netmap bridge not working with 10G Ethernet ports

2020-11-18 Thread Vincenzo Maffione
Hi,

Il giorno mer 18 nov 2020 alle ore 08:13 Rajesh Kumar 
ha scritto:

> Hi,
>
> I am testing a 10G Network driver with Netmap "bridge" utility, where it
> doesn't seem to work. Here is my setup details.
>
> *System under Test:*  Running FreeBSD CURRENT.  Has two inbuilt 10G NIC
> ports.
> *System 1:* Running Ubuntu, whose network port is connected to Port1 of
> System Under Test
> *System 2:* Running FreeBSD CURRENT, whose network port is connected to
> Port 0 of System Under Test.
>
> Bridged the Port0 and Port1 of System Under Test using the Netmap "bridge"
> utility. Able to see interfaces coming up active and Link UP.
> # bridge -c -v -i netmap:ax0 -i netmap:ax1
>
> This looks like if_axe(4) driver, and therefore there's no native netmap
support, which means you are falling back on
the emulated netmap adapter. Are these USB dongles? If so, how can they be
10G?


> Then tried pinging from System 1 to System 2. It fails.
>
> *Observations:*
> 1. ARP request from System 1 goes to bridge port 1 (netmap_rxsync) and then
> forwarded to port 0 (netmap_txsync)
> 2. ARP request is received in System 2 (via bridge port 0) and ARP reply is
> being sent from System 2.
> 3. ARP reply from System 2 seems to be not reaching bridge port 0 to get
> forwarded to bridge 1 and hence to System 1.
> 4. Above 3 steps happen 3 times for ARP resolution cycle and then fails.
> Hence the ping fails.
>
> On Debugging, when the ARP reply is being sent from System 2, I don't see
> any interrupt triggered on the bridge port 0 in system under test.
>
> In this kind of configuration it is mandatory to disable all the NIC
offloads, because netmap does not program the NIC
to honor them, e.g.:

# ifconfig ax0 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6
# ifconfig ax1 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6
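As a quick sanity check that the offloads really got cleared, one could grep the capability names out of ifconfig's options line. This is a sketch assuming ifconfig's usual `options=HEX<FLAG,FLAG,...>` output format; the sample string below is made up:

```python
import re

# FreeBSD ifconfig capability names for the checksum/TSO/LRO offloads
OFFLOAD_FLAGS = {"TXCSUM", "RXCSUM", "TSO4", "TSO6", "LRO",
                 "TXCSUM_IPV6", "RXCSUM_IPV6"}

def offloads_enabled(ifconfig_line: str) -> list[str]:
    # Pull the flag names out of "options=48538b8<FLAG1,FLAG2,...>"
    m = re.search(r"options=[0-9a-fA-F]+<([^>]*)>", ifconfig_line)
    flags = m.group(1).split(",") if m and m.group(1) else []
    return sorted(f for f in flags if f in OFFLOAD_FLAGS)

sample = "options=48538b8<VLAN_MTU,VLAN_HWTAGGING,TXCSUM,LRO,VLAN_HWCSUM>"
print(offloads_enabled(sample))  # offloads still to be disabled
```

Any name it returns is an offload that still needs a `-flag` on the ifconfig command line before running the netmap bridge.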


> Netstat in system under test, doesn't show any receive or drop counters
> incremented. But as I understand netstat capture the stats above the netmap
> stack. Hence not reflecting the counts.
>

Correct.


>
> *Note:*
> a) I tried with another vendor 10G NIC card. It behaves the same way. So
> this issue doesn't seem to be generic and not hardware specific.
>

Which driver are those NICs using? That makes the difference. I guess it's
still a driver with no native netmap support, hence
you are using the same emulated adapter.


> b) Trying with another vendor 1G NIC card, things are working.  So not
> sure, what makes a difference here.  The ports in System 1 and System 2 are
> USB attached Ethernet capable of maximum speed of 1G.  So does connecting
> 1G to 10G bridge ports is having any impact?
>

I don't think so. On each p2p link the NICs will negotiate 1G speed.
In any case, what driver was this one?


> c) We have verified the same 10G driver with pkt-gen utility and things are
> working. Facing issue only when using "bridge" utility.
>

That may be because pkt-gen does not care about checksums, whereas the
TCP/IP stack does.
Hence the need to disable offloads (see above).
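To make "the stack cares about checksums" concrete, the check a receiving stack performs is the RFC 1071 Internet checksum. A sketch (the sample IPv4 header is a textbook example, not taken from this thread):

```python
def internet_checksum(data: bytes) -> int:
    # RFC 1071: ones'-complement sum of 16-bit big-endian words
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

# Sample IPv4 header with its checksum field (bytes 10-11) zeroed
hdr = bytes.fromhex("45000073000040004011" "0000" "c0a80001c0a800c7")
csum = internet_checksum(hdr)
assert csum == 0xB861
# The receiver sums the header *with* the checksum in place: 0 means OK.
filled = hdr[:10] + csum.to_bytes(2, "big") + hdr[12:]
assert internet_checksum(filled) == 0
```

If the sender's stack left this field for the NIC to fill (txcsum offload) but netmap never programmed the NIC to do so, the receiver computes a nonzero result and silently drops the packet, while pkt-gen traffic is unaffected.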

Cheers,
  Vincenzo


> So, wondering how the ARP reply packet is getting lost here. Any ideas to
> debug?
>
> Thanks,
> Rajesh.


Netmap bridge not working with 10G Ethernet ports

2020-11-17 Thread Rajesh Kumar
Hi,

I am testing a 10G network driver with the netmap "bridge" utility, where
it doesn't seem to work. Here are my setup details.

*System under Test:*  Running FreeBSD CURRENT.  Has two inbuilt 10G NIC
ports.
*System 1:* Running Ubuntu, whose network port is connected to Port1 of
System Under Test
*System 2:* Running FreeBSD CURRENT, whose network port is connected to
Port 0 of System Under Test.

Bridged the Port0 and Port1 of System Under Test using the Netmap "bridge"
utility. Able to see interfaces coming up active and Link UP.
# bridge -c -v -i netmap:ax0 -i netmap:ax1

Then tried pinging from System 1 to System 2. It fails.

*Observations:*
1. The ARP request from System 1 reaches bridge port 1 (netmap_rxsync)
and is then forwarded to port 0 (netmap_txsync).
2. The ARP request is received in System 2 (via bridge port 0) and an ARP
reply is sent from System 2.
3. The ARP reply from System 2 does not seem to reach bridge port 0, so
it is never forwarded to port 1 and hence to System 1.
4. The above 3 steps repeat 3 times in the ARP resolution cycle and then
fail. Hence the ping fails.

While debugging, when the ARP reply is being sent from System 2, I don't
see any interrupt triggered on bridge port 0 of the system under test.

netstat on the system under test doesn't show any receive or drop
counters incrementing. But as I understand it, netstat captures stats
above the netmap stack, hence it doesn't reflect these counts.

*Note:*
a) I tried with another vendor's 10G NIC card. It behaves the same way,
so this issue seems to be generic rather than hardware specific.
b) Trying with another vendor's 1G NIC card, things work, so I'm not sure
what makes the difference here. The ports in System 1 and System 2 are
USB-attached Ethernet, capable of a maximum speed of 1G. Does connecting
1G ports to 10G bridge ports have any impact?
c) We have verified the same 10G driver with the pkt-gen utility and
things work. We face the issue only when using the "bridge" utility.

So I'm wondering how the ARP reply packet is getting lost here. Any ideas
for debugging?

Thanks,
Rajesh.


[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-11-11 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #47 from commit-h...@freebsd.org ---
A commit references this bug:

Author: vmaffione
Date: Wed Nov 11 21:27:17 UTC 2020
New revision: 367599
URL: https://svnweb.freebsd.org/changeset/base/367599

Log:
  MFC r367093, r367117

  iflib: add per-tx-queue netmap timer

  The way netmap TX is handled in iflib when TX interrupts are not
  used (IFC_NETMAP_TX_IRQ not set) has some issues:
- The netmap_tx_irq() function gets called by iflib_timer(), which
  gets scheduled with tick granularity (hz). This is not frequent
  enough for 10Gbps NICs and beyond (e.g., ixgbe or ixl). The end
  result is that the transmitting netmap application is not woken
  up fast enough to saturate the link with small packets.
- The iflib_timer() functions also calls isc_txd_credits_update()
  to ask for more TX completion updates. However, this violates
  the netmap requirement that only txsync can access the TX queue
  for datapath operations. Only netmap_tx_irq() may be called out
  of the txsync context.

  This change introduces per-tx-queue netmap timers, using microsecond
  granularity to ensure that netmap_tx_irq() can be called often enough
  to allow for maximum packet rate. The timer routine simply calls
  netmap_tx_irq() to wake up the netmap application. The latter will
  wake up and call txsync to collect TX completion updates.

  This change brings back line rate speed with small packets for ixgbe.
  For the time being, timer expiration is hardcoded to 90 microseconds,
  in order to avoid introducing a new sysctl.
  We may eventually implement an adaptive expiration period or use another
  deferred work mechanism in place of timers.

  Also, fix the timers usage to make sure that each queue is serviced
  by a different CPU.

  PR: 248652
  Reported by:s...@efficientip.com

Changes:
_U  stable/12/
  stable/12/sys/net/iflib.c
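Back-of-the-envelope numbers behind this timer change (a sketch: the 1024-slot ring and hz=1000 are assumed values, and the bound is loose because the application also calls txsync on its own):

```python
RING_SLOTS = 1024            # assumed TX ring size
LINE_RATE_10G_64B = 14.88e6  # 10GbE max packet rate, 64-byte frames

def max_pps(wakeup_period_s: float, slots: int = RING_SLOTS) -> float:
    # Upper bound if the app refills the whole ring once per wakeup
    return slots / wakeup_period_s

hz_bound = max_pps(1e-3)   # iflib_timer(): tick granularity, hz = 1000
us_bound = max_pps(90e-6)  # per-queue netmap timer: 90 microseconds

print(f"hz-granularity bound:   {hz_bound / 1e6:.2f} Mpps")
print(f"90us-granularity bound: {us_bound / 1e6:.2f} Mpps")
assert hz_bound < LINE_RATE_10G_64B  # tick granularity cannot saturate
```

With tick-granularity wakeups the refill rate is capped around 1 Mpps, an order of magnitude below 10GbE small-packet line rate, which is consistent with the slowdown reported in this PR.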

-- 
You are receiving this mail because:
You are on the CC list for the bug.


[Bug 247647] if_vmx(4): Page fault when opening netmap port (IFLIB/DMA)

2020-10-29 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=247647

Zhenlei Huang  changed:

   What|Removed |Added

 Status|In Progress |Closed
 Resolution|--- |FIXED

--- Comment #27 from Zhenlei Huang  ---
Retesting on FreeBSD 12.2-RELEASE, this bug cannot be reproduced. Closing
as fixed.



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-28 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #46 from commit-h...@freebsd.org ---
A commit references this bug:

Author: vmaffione
Date: Wed Oct 28 21:06:18 UTC 2020
New revision: 367117
URL: https://svnweb.freebsd.org/changeset/base/367117

Log:
  iflib: fix typo bug introduced by r367093

  Code was supposed to call callout_reset_sbt_on() rather than
  callout_reset_sbt(). This resulted in passing a "cpu" value
  to a "flag" argument. A recipe for subtle errors.

  PR:   248652
  Reported by:  s...@efficientip.com
  MFC with: r367093

Changes:
  head/sys/net/iflib.c



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-28 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #45 from Vincenzo Maffione  ---
(In reply to Sylvain Galliano from comment #43)
Ugh.
Thanks for reporting.

I indeed introduced a subtle typo bug, using callout_reset_sbt() rather than
callout_reset_sbt_on() (as intended). Therefore I was passing the "cpu" value
to the "flags" argument, resulting in a disaster. In your test this probably
triggered the C_DIRECT_EXEC flag of callout(9), so that the timer was being
executed in hardware interrupt context.

I uploaded the patch that is now consistent with the src tree (that I'm going
to fix right away).
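The hazard described here — two functions differing by one integer parameter, so a "cpu" value silently lands in "flags" — can be reproduced with toy stand-ins (hypothetical names and flag bit, not the real callout(9) API):

```python
# Toy stand-ins for callout_reset_sbt() / callout_reset_sbt_on().
# DIRECT_EXEC stands in for a flag bit like C_DIRECT_EXEC; the real
# kernel values differ.
DIRECT_EXEC = 0x2

def schedule_timer(flags: int) -> dict:
    # Variant without a cpu parameter: last argument is "flags"
    return {"flags": flags, "direct_exec": bool(flags & DIRECT_EXEC)}

def schedule_timer_on(cpu: int, flags: int) -> dict:
    # Variant with a cpu parameter inserted before "flags"
    return {"cpu": cpu, "flags": flags,
            "direct_exec": bool(flags & DIRECT_EXEC)}

cpu, flags = 2, 0
ok = schedule_timer_on(cpu, flags)   # intended: run the timer on CPU 2
assert not ok["direct_exec"]
oops = schedule_timer(cpu)           # typo: "cpu" lands in "flags"
assert oops["direct_exec"]           # a flag bit turned on by accident
```

Because both parameters are plain integers, nothing flags the mistake at the call site; only the changed runtime behavior reveals it — here a direct-execution bit unexpectedly set, matching how the original bug showed up as timers firing in interrupt context.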



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-28 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

Vincenzo Maffione  changed:

   What|Removed |Added

 Attachment #219121|0   |1
is obsolete||

--- Comment #44 from Vincenzo Maffione  ---
Created attachment 219179
  --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=219179&action=edit
Cleaned up netmap tx timer (bugfixes)



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-28 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #43 from Sylvain Galliano  ---
I ran the same tests on VMware + vmxnet NIC + the latest patch, and I got a panic:

spin lock 0xf80003079cc0 (turnstile lock) held by 0xfe0009607e00 (tid 16) too long
panic: spin lock held too long
cpuid = 1
time = 1603884508
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfe0008480680
vpanic() at vpanic+0x182/frame 0xfe00084806d0
panic() at panic+0x43/frame 0xfe0008480730
_mtx_lock_indefinite_check() at _mtx_lock_indefinite_check+0x64/frame
0xfe0008480740
_mtx_lock_spin_cookie() at _mtx_lock_spin_cookie+0xd5/frame 0xfe00084807b0
turnstile_trywait() at turnstile_trywait+0xe3/frame 0xfe00084807e0
__mtx_lock_sleep() at __mtx_lock_sleep+0x119/frame 0xfe0008480870
doselwakeup() at doselwakeup+0x179/frame 0xfe00084808c0
nm_os_selwakeup() at nm_os_selwakeup+0x13/frame 0xfe00084808e0
netmap_notify() at netmap_notify+0x3d/frame 0xfe0008480900
softclock_call_cc() at softclock_call_cc+0x13d/frame 0xfe00084809a0
callout_process() at callout_process+0x1c0/frame 0xfe0008480a10
handleevents() at handleevents+0x188/frame 0xfe0008480a50
timercb() at timercb+0x24e/frame 0xfe0008480aa0
lapic_handle_timer() at lapic_handle_timer+0x9b/frame 0xfe0008480ad0
Xtimerint() at Xtimerint+0xb1/frame 0xfe0008480ad0
--- interrupt, rip = 0x80f5bd46, rsp = 0xfe0008480ba0, rbp = 0xfe0008480ba0 ---
acpi_cpu_c1() at acpi_cpu_c1+0x6/frame 0xfe0008480ba0
acpi_cpu_idle() at acpi_cpu_idle+0x2eb/frame 0xfe0008480bf0
cpu_idle_acpi() at cpu_idle_acpi+0x3e/frame 0xfe0008480c10
cpu_idle() at cpu_idle+0x9f/frame 0xfe0008480c30
sched_idletd() at sched_idletd+0x2e4/frame 0xfe0008480cf0
fork_exit() at fork_exit+0x7e/frame 0xfe0008480d30
fork_trampoline() at fork_trampoline+0xe/frame 0xfe0008480d30
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---

Using the first patch you sent (draft patch to test the netmap tx timer), there
was no issue this time.

The only major difference I can see between the 2 patches (apart from the sysctl) is:
+   txq->ift_timer.c_cpu = cpu;
and
+   txq->ift_netmap_timer.c_cpu = cpu;



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-27 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #42 from commit-h...@freebsd.org ---
A commit references this bug:

Author: vmaffione
Date: Tue Oct 27 21:53:33 UTC 2020
New revision: 367093
URL: https://svnweb.freebsd.org/changeset/base/367093

Log:
  iflib: add per-tx-queue netmap timer

  The way netmap TX is handled in iflib when TX interrupts are not
  used (IFC_NETMAP_TX_IRQ not set) has some issues:
- The netmap_tx_irq() function gets called by iflib_timer(), which
  gets scheduled with tick granularity (hz). This is not frequent
  enough for 10Gbps NICs and beyond (e.g., ixgbe or ixl). The end
  result is that the transmitting netmap application is not woken
  up fast enough to saturate the link with small packets.
- The iflib_timer() function also calls isc_txd_credits_update()
  to ask for more TX completion updates. However, this violates
  the netmap requirement that only txsync can access the TX queue
  for datapath operations. Only netmap_tx_irq() may be called out
  of the txsync context.

  This change introduces per-tx-queue netmap timers, using microsecond
  granularity to ensure that netmap_tx_irq() can be called often enough
  to allow for maximum packet rate. The timer routine simply calls
  netmap_tx_irq() to wake up the netmap application. The latter will
  wake up and call txsync to collect TX completion updates.

  This change brings back line rate speed with small packets for ixgbe.
  For the time being, timer expiration is hardcoded to 90 microseconds,
  in order to avoid introducing a new sysctl.
  We may eventually implement an adaptive expiration period or use another
  deferred work mechanism in place of timers.

  Also, fix the timers usage to make sure that each queue is serviced
  by a different CPU.

  PR:   248652
  Reported by:  s...@efficientip.com
  MFC after:2 weeks

Changes:
  head/sys/net/iflib.c



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-27 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

Vincenzo Maffione  changed:

   What|Removed |Added

 Status|Open|In Progress

--- Comment #41 from Vincenzo Maffione  ---
(In reply to Sylvain Galliano from comment #40)
Thank you for confirming.
In the meantime I'll commit this change.

Maybe we should open a separate issue for the ixl regression? Now we know that
it is caused by the RS flag being set on all the TX descriptors.



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-27 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #40 from Sylvain Galliano  ---
(In reply to Vincenzo Maffione from comment #39)

results are all good for ix (X520) NIC (+14M pps, same as FreeBSD 11)

No changes in ixl (same results as comment #16)



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-26 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #39 from Vincenzo Maffione  ---
(In reply to Sylvain Galliano from comment #37)
Ok thanks. It was worth a try. I guess we'll need some help from Intel here.

In the meantime, I would like to commit only the netmap tx timer change.
I attached a cleaned-up patch, with a hardcoded value for the netmap timer.
I would rather avoid adding a new sysctl for something that may change again soon.

In any case, the patch is meant to greatly improve the current situation for both
ix and ixl.
Could you please run your tests again on ix and ixl to check that you get
numbers consistent with the ones you reported in comment #16?



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-26 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #38 from Vincenzo Maffione  ---
Created attachment 219121
  --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=219121&action=edit
Cleaned up netmap tx timer patch (no sysctl)



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-26 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #37 from Sylvain Galliano  ---
(In reply to Vincenzo Maffione from comment #36)

I have tested the last patch (netmap tx timer w/queue intr enable + honor
IPI_TX_INTR in ixl_txd_encap): same results.



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-25 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #36 from Vincenzo Maffione  ---
(In reply to Vincenzo Maffione from comment #35)
I would ask for advice from the Intel guys here...
I'm trying to compare stable/11 vs current, regarding how TX interrupts are
handled. It looks like in stable/11 MSI-x handlers are shared for the TX and RX
queue, while in current TX interrupts are not used.
Also, in stable/11 the interrupt handler seems to do a disable_queue and then
enable_queue, while on current I only see the enable_queue step
(IFDI_TX_QUEUE_INTR_ENABLE).

Therefore, in the last patch I also add the enable_queue step in the netmap
timer routine. It may be worth giving it a try to see if this fixes the ixl issue.



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-25 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #35 from Vincenzo Maffione  ---
Created attachment 219062
  --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=219062&action=edit
netmap tx timer w/queue intr enable + honor IPI_TX_INTR in ixl_txd_encap

Extension of the last one (218932), to also call the IFDI tx queue interrupt
enable, similarly to what the iflib_timer() code already does.



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-23 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #34 from Vincenzo Maffione  ---
(In reply to Sylvain Galliano from comment #33)
Ok, thanks. At this point it's clear that there are two independent issues that
slow down netmap-iflib on ix/ixl. The first is the lack of a per-tx-queue
netmap timer (or taskqueue). The second is the lack of descriptor writeback
moderation in ixl.
We can start by merging the timer patch, and then work on the separate ixl
issue.



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-22 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #33 from Sylvain Galliano  ---
(In reply to Vincenzo Maffione from comment #32)

Here are the results:

ixl-only patch, 6 queues, pkt-gen WITHOUT -R:
710.623764 main_thread [2642] 38.621 Mpps (38.698 Mpkts 18.538 Gbps in 1002000
usec) 512.00 avg_batch 9 min_space

ixl-only patch, 1 queue, pkt-gen WITHOUT -R:

670.168017 main_thread [2642] 0.000 pps (0.000 pkts 0.000 bps in 1009185 usec)
0.00 avg_batch 9 min_space
671.181833 main_thread [2642] 0.000 pps (0.000 pkts 0.000 bps in 1013816 usec)
0.00 avg_batch 9 min_space
672.171832 sender_body [1662] poll error on queue 0: timeout
672.171838 sender_body [1665] txring 513 513 513
672.191833 main_thread [2642] 0.000 pps (0.000 pkts 0.000 bps in 101 usec)
0.00 avg_batch 9 min_space

ixl-only patch, 1 queue, pkt-gen WITH -R:

-R 1000:
813.372070 main_thread [2642] 1.001 Kpps (1.002 Kpkts 480.718 Kbps in 1000503
usec) 3.00 avg_batch 9 min_space
-R 2000:
860.807010 main_thread [2642] 2.006 Kpps (2.010 Kpkts 962.692 Kbps in 1002190
usec) 6.00 avg_batch 9 min_space
...
(all intermediate -R value worked)
...
-R 1700:
057.160242 main_thread [2642] 17.001 Mpps (18.072 Mpkts 8.160 Gbps in 1063000
usec) 512.00 avg_batch 9 min_space
-R 1800:
030.167994 sender_body [1662] poll error on queue 0: timeout
030.168001 sender_body [1665] txring 513 513 513



ixl + timer patches, 1 queue, pkt-gen WITH -R:
sysctl nm_tx_tmr_us=5

-R 1000:
261.886507 main_thread [2642] 1.001 Kpps (1.065 Kpkts 480.679 Kbps in 1063496
usec) 3.00 avg_batch 9 min_space
-R 2000:
279.365024 main_thread [2642] 2.000 Kpps (2.034 Kpkts 960.219 Kbps in 1016768
usec) 6.00 avg_batch 9 min_space
...
(all intermediate -R value worked)
...
-R 1700
388.372451 main_thread [2642] 17.000 Mpps (18.079 Mpkts 8.160 Gbps in 1063431
usec) 512.00 avg_batch 9 min_space
-R 1800
894.421917 main_thread [2642] 18.000 Mpps (18.036 Mpkts 8.640 Gbps in 1002001
usec) 512.00 avg_batch 9 min_space
and sometimes an error
-R 1900
991.012912 sender_body [1662] poll error on queue 0: timeout
991.012920 sender_body [1665] txring 513 513 513

another run:
968.011919 sender_body [1662] poll error on queue 0: timeout
968.011926 sender_body [1665] txring 235 235 235

and another one:
112.008840 sender_body [1662] poll error on queue 0: timeout
112.008848 sender_body [1665] txring 95 95 95



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-21 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #32 from Vincenzo Maffione  ---
(In reply to Sylvain Galliano from comment #29)
Thanks again for your tests.

I'm inclined to think that the pkt-gen hang issue that you see is not directly
caused by the ixl patch.
Would you please try to test what happens if you only apply the ixl patch
(discarding all the changes related to the netmap timer)?
In the end, the change to ixl is orthogonal to the netmap timer issue.

Also, it would be useful to understand whether the hang problem comes from some
sort of race condition or not. For this purpose, you may try to use the -R
argument of pkt-gen (this time with the timer+ixl patch) to specify a maximum
rate in packets per second (pps). E.g. you could start from 1000 pps, check
that it does not hang, double the rate and repeat the process until you find a
critical rate that causes the hang. Unless this is not a race condition and the
hang happens at any rate.
When the hang happens, it may help to see the ring state, e.g. with the
following patch to pkt-gen. I expect to see head, cur and tail having the same
value. 

diff --git a/apps/pkt-gen/pkt-gen.c b/apps/pkt-gen/pkt-gen.c
index ef876f4f..19497fe9 100644
--- a/apps/pkt-gen/pkt-gen.c
+++ b/apps/pkt-gen/pkt-gen.c
@@ -1675,6 +1675,10 @@ sender_body(void *data)
 				break;
 			D("poll error on queue %d: %s", targ->me,
 			    rv ? strerror(errno) : "timeout");
+			for (i = targ->nmd->first_tx_ring; i <= targ->nmd->last_tx_ring; i++) {
+				txring = NETMAP_TXRING(nifp, i);
+				D("txring %u %u %u", txring->head, txring->cur, txring->tail);
+			}
 			// goto quit;
 		}
 		if (pfd.revents & POLLERR) {



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-21 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #31 from Vincenzo Maffione  ---
(In reply to Krzysztof Galazka from comment #30)
Hi Krzysztof,
  I agree, and created a separate review for this possible change:
https://reviews.freebsd.org/D26896
It would be nice if you guys could run some tests to validate the change
against normal TCP/IP stack usage (e.g. non-netmap).

Speaking about the non-iflib driver, I guess it is acceptable for the ixl_xmit
routine to always set the report flag on the EOP packet.
However, this is not acceptable for netmap, and indeed the non-iflib netmap ixl
driver is only setting it twice per ring (see
https://github.com/freebsd/freebsd/blob/stable/11/sys/dev/netmap/if_ixl_netmap.h#L221-L223).
This is, in my opinion, what explains the huge difference in performance
between non-iflib and iflib for ixl.

If my understanding is correct, and according to our past experience with
netmap, the report flag will cause the NIC to initiate a DMA transaction to
either set the DD bit in the descriptor, or perform a memory write to update
the shadow TDH.
This is particularly expensive in netmap when done for each descriptor,
especially because netmap uses single-descriptor packets.
Interrupt moderation can also help a lot to mitigate the CPU overhead, but as
far as I see it does not limit the writeback DMA transactions, and therefore it
does not help in the netmap use-case. Moreover, my understanding is that iflib
is not using hardware interrupts on ixl (nor ix), but rather is using
"softirqs", so I guess that interrupt moderation does not play a role here. I
may be wrong on this last point.

I suspect the 1-queue pkt-gen hang problem may be due to something else, rather
than the report flag change. As a matter of fact, the same logic was working
fine on the non-iflib driver.



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-21 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

Krzysztof Galazka  changed:

   What|Removed |Added

 CC||krzysztof.gala...@intel.com

--- Comment #30 from Krzysztof Galazka  ---
(In reply to Vincenzo Maffione from comment #28)

Hi Vincenzo,

Good catch! Thanks a lot! The non-iflib version of ixl also sets a request
status flag on all 'End of packet' descriptors. I'm guessing that the
difference in performance is related to a dynamic interrupt moderation, which
is disabled by default in iflib version of the driver. I'm a bit concerned
though about the 1 queue case. I think it would be good to put this fix in a
separate review and let our validation team run some tests. Would you like me
to do it or do you prefer to do it yourself?



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-20 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #29 from Sylvain Galliano  ---
(In reply to Vincenzo Maffione from comment #28)

Yes, this is much better:

6 queues, nm_tx_tmr_us=5:
983.492185 main_thread [2639] 37.907 Mpps (37.945 Mpkts 18.196 Gbps in 1001000
usec) 512.00 avg_batch 9 min_space
cpu usage: 100%

but with this patch, something is wrong when using 1 queue:
110.079117 main_thread [2639] 0.000 pps (0.000 pkts 0.000 bps in 1003920 usec)
0.00 avg_batch 9 min_space
111.080184 main_thread [2639] 0.000 pps (0.000 pkts 0.000 bps in 1001066 usec)
0.00 avg_batch 9 min_space
111.714181 sender_body [1663] poll error on queue 0: timeout
112.089179 main_thread [2639] 0.000 pps (0.000 pkts 0.000 bps in 1008996 usec)
0.00 avg_batch 9 min_space
113.116178 main_thread [2639] 0.000 pps (0.000 pkts 0.000 bps in 1026999 usec)
0.00 avg_batch 9 min_space

(I've double-checked by reverting to the previous patch: no issue)

same errors when using 6 queues and pkt-gen with 6 threads (-p 6)



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-20 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #28 from Vincenzo Maffione  ---
(In reply to Sylvain Galliano from comment #26)
Thanks.

I just figured out that there may be a major flaw introduced by the porting of
ixl to iflib. This flaw should cause too many writebacks from the NIC to report
completed transmissions, even if iflib asks for a writeback only once in a while.

Can you please run again your tests on ixl with the latest patch?



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-20 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

Vincenzo Maffione  changed:

   What|Removed |Added

 Attachment #218866|0   |1
is obsolete||

--- Comment #27 from Vincenzo Maffione  ---
Created attachment 218932
  --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=218932&action=edit
netmap tx timer + honor IPI_TX_INTR in ixl txd_encap

Adds an unrelated fix on top of the first patch (218723).
The new fix should remove a major regression introduced by the ixl porting to
iflib.



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-20 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #26 from Sylvain Galliano  ---
(In reply to Vincenzo Maffione from comment #25)

You are correct: using 1 queue on both the ix and ixl NICs + tuning the new
sysctl, we get the same result as FreeBSD 11.

Regarding avg_batch values on ixl, they are very low, not in the range you
expected:

6 queues, iflib.nm_tx_tmr_us=5
070.664142 main_thread [2639] 13.588 Mpps (13.602 Mpkts 6.522 Gbps in 1001000
usec) 20.87 avg_batch 9 min_space
cpu usage: 100%

even with 1 queue I got a low value for avg_batch:
283.855379 main_thread [2639] 12.147 Mpps (12.757 Mpkts 5.831 Gbps in 1050245
usec) 13.97 avg_batch 9 min_space
cpu usage: 100%

ix (X520) NIC have a good avg_batch (whatever the number of queue):
404.130120 main_thread [2639] 14.880 Mpps (14.895 Mpkts 7.143 Gbps in 1000999
usec) 436.04 avg_batch 9 min_space

I don't know if this can help you, but I did another test specific to the ixl
NIC: setting hw.ixl.enable_head_writeback=0

avg_batch is higher, pps a little better and we are not CPU bound this time:

6 queues:
603.651106 main_thread [2639] 17.384 Mpps (17.402 Mpkts 8.345 Gbps in 1001003
usec) 308.10 avg_batch 9 min_space
cpu usage: 71%

1 queue:
730.590104 main_thread [2639] 15.084 Mpps (15.416 Mpkts 7.241 Gbps in 1022004
usec) 442.64 avg_batch 9 min_space
cpu usage: 57%


Same test but using more threads on pkt-gen (-p 6), 6 queues:
995.887010 main_thread [2639] 17.327 Mpps (17.339 Mpkts 8.317 Gbps in 1000693
usec) 286.35 avg_batch 54 min_space
cpu usage: 197%

top -H:
  PID USERNAMEPRI NICE   SIZERES STATEC   TIMEWCPU COMMAND
70876 root-920   348M16M select   3   0:23  35.46%
pkt-gen{pkt-gen}
70876 root-920   348M16M CPU4 4   0:23  33.58%
pkt-gen{pkt-gen}
70876 root-920   348M16M RUN  6   0:23  33.44%
pkt-gen{pkt-gen}
70876 root-920   348M16M select   2   0:23  31.69%
pkt-gen{pkt-gen}
70876 root 450   348M16M CPU9 9   0:23  31.64%
pkt-gen{pkt-gen}
70876 root-920   348M16M select   6   0:23  31.24%
pkt-gen{pkt-gen}



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-19 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #25 from Vincenzo Maffione  ---
Sorry, my bad.
I read the code the wrong way, so the second patch is indeed useless. Please
forget about that. The patch does not ensure timely TX slot recovery (as
pointed out in comment #23).

So it seems that the situation where we are losing against 11-stable is ixl
with 6 queues (or, more generally, with more than 1 queue). The other
combinations (ix, or ixl/1q) are on par. Is this correct?

Now, focusing on the ixl/6q case, and using the first patch I provided, do you
see a significant difference in average batch (as reported by pkt-gen) and
pkt-gen CPU utilization?
The avg_batch metric tells us how many packets we were able to send for each
txsync syscall. So the higher the better (at least up to 100/200).



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-19 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #24 from Sylvain Galliano  ---
(In reply to Vincenzo Maffione from comment #23)

After using the 'Netmap tx timer + timely credits update' patch, I didn't notice
any difference in the results.
Should I run some specific tests to confirm the changes between the 2 patches?



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-18 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #23 from Vincenzo Maffione  ---
(In reply to Sylvain Galliano from comment #21)
Thanks.
The CPU utilization at least tells us that we are not CPU bound.
Could you please perform some tests with the second patch?
It's basically the same as the first one, with a couple of changes that should
ensure a more timely recovery of consumed TX slots (and a more timely timer
firing).



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-18 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #22 from Vincenzo Maffione  ---
Created attachment 218866
  --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=218866&action=edit
Netmap tx timer + timely credits update

A small extension of the previous patch, which adds the timely update of tx
credits and the timer start.



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-18 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #21 from Sylvain Galliano  ---
(In reply to Vincenzo Maffione from comment #19)

result using 1 queue:

ixl1: PCI Express Bus: Speed 8.0GT/s Width x8
ixl1: netmap queues/slots: TX 1/1024, RX 1/1024

sysctl dev.ixl.0.iflib.nm_tx_tmr_us=50

194.333930 main_thread [2638] 11.896 Mpps (11.902 Mpkts 5.710 Gbps in 1000497
usec) 341.33 avg_batch 9 min_space

pkt-gen cpu usage: 35%


sysctl dev.ixl.0.iflib.nm_tx_tmr_us=5

235.070929 main_thread [2638] 14.754 Mpps (15.543 Mpkts 7.082 Gbps in 1053521
usec) 390.14 avg_batch 9 min_space

pkt-gen cpu usage: 56%



sysctl dev.ixl.0.iflib.nm_tx_tmr_us=1

266.392925 main_thread [2638] 14.748 Mpps (14.762 Mpkts 7.079 Gbps in 1000998
usec) 407.41 avg_batch 9 min_space

pkt-gen cpu usage: 66%



With 6 queues configured, max pps is 17 Mpps, even when using a low nm_tx_tmr_us
value (1 us).
pkt-gen cpu usage: 82%



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-18 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #20 from Vincenzo Maffione  ---
(In reply to vistalba from comment #17)
Of course you will need to apply the attached patch before "make kernel", e.g.
  $ cd /path/to/freebsd/kernel/sources
  $ patch -p1 < /path/to/attached/patch.diff



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-18 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #19 from Vincenzo Maffione  ---
(In reply to Sylvain Galliano from comment #16)

Thanks a lot.
In the XL710 case, have you tried with lower values of the timer, such as 50us
down to 5 us?
Is there any visible change?

Also, have you looked at pkt-gen CPU utilization? That's something that tells
you if you are CPU limited (unlikely) or rather still limited by the
"pseudo-interrupt rate" being too low.
For instance, how does pkt-gen CPU utilization look in the case of XL710 and 1
queue (for simplicity, so that you have just a single thread)?



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #18 from Michael Muenz  ---
(In reply to vistalba from comment #17)

- Install Vanilla FreeBSD12
- pkg install git
- cd /usr && git clone https://github.com/opnsense/tools
- cd tools && make update
- make kernel

You can also just create an image, follow the guides on
https://github.com/opnsense/tools this might be easier



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #17 from vistalba  ---
(In reply to Vincenzo Maffione from comment #14)

Is there an easy way to test this on my OPNsense VM with vmx interfaces? As far
as I know, my netmap issue on vmx is related to this timer issue as well.
I'm not so familiar with FreeBSD.



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #16 from Sylvain Galliano  ---
(In reply to Vincenzo Maffione from comment #14)

Here are the results:

X520 with 1 queue
ix0: PCI Express Bus: Speed 5.0GT/s Width x8
ix0: netmap queues/slots: TX 1/2048, RX 1/2048

***

sysctl dev.ix.0.iflib.nm_tx_tmr_us=0  (default value)

pkt-gen:
683.502433 main_thread [2639] 4.215 Mpps (4.227 Mpkts 2.023 Gbps in 1002819
usec) 465.43 avg_batch 9 min_space

***

sysctl dev.ix.0.iflib.nm_tx_tmr_us=300

pkt-gen:
750.688608 main_thread [2639] 6.496 Mpps (6.646 Mpkts 3.118 Gbps in 1023000
usec) 465.45 avg_batch 9 min_space

***

sysctl dev.ix.0.iflib.nm_tx_tmr_us=200

pkt-gen:
771.736855 main_thread [2639] 8.907 Mpps (9.112 Mpkts 4.275 Gbps in 1022999
usec) 465.45 avg_batch 9 min_space

***

sysctl dev.ix.0.iflib.nm_tx_tmr_us=100

pkt-gen:
804.554603 main_thread [2639] 14.136 Mpps (14.147 Mpkts 6.785 Gbps in 1000748
usec) 465.45 avg_batch 9 min_space
-> close to 10G line rate

***

sysctl dev.ix.0.iflib.nm_tx_tmr_us=90

pkt-gen:
872.156329 main_thread [2639] 14.880 Mpps (15.054 Mpkts 7.142 Gbps in 1011721
usec) 466.96 avg_batch 9 min_space



Now using same X520 NIC using 4 queues.

ix1: PCI Express Bus: Speed 5.0GT/s Width x8
ix1: netmap queues/slots: TX 4/2048, RX 4/2048

***

sysctl dev.ix.1.iflib.nm_tx_tmr_us=0 (default)

pkt-gen:
047.988586 main_thread [2639] 13.596 Mpps (13.623 Mpkts 6.526 Gbps in 1002002
usec) 443.03 avg_batch 9 min_space
-> close to max speed (thanks to 4 queue)

***

sysctl dev.ix.1.iflib.nm_tx_tmr_us=400

pkt-gen:
094.224581 main_thread [2639] 14.887 Mpps (14.904 Mpkts 7.146 Gbps in 1001173
usec) 440.75 avg_batch 9 min_space


Looks really good for the X520 NIC, whatever number of queues I use.



Now same tests using XL710 NIC (40G) using 1 queue:

ixl1: PCI Express Bus: Speed 8.0GT/s Width x8
ixl1: netmap queues/slots: TX 1/1024, RX 1/1024

***

sysctl dev.ixl.1.iflib.nm_tx_tmr_us=0 (default)

pkt-gen:
324.883066 main_thread [2639] 12.270 Mpps (13.044 Mpkts 5.890 Gbps in 1063000
usec) 16.53 avg_batch 9 min_space

***

sysctl dev.ixl.1.iflib.nm_tx_tmr_us=100

pkt-gen:
350.497566 main_thread [2639] 12.246 Mpps (12.258 Mpkts 5.878 Gbps in 1001003
usec) 16.48 avg_batch 9 min_space

no changes.


Now testing XL710 with 4 queues:

ixl0: PCI Express Bus: Speed 8.0GT/s Width x8
ixl0: netmap queues/slots: TX 4/1024, RX 4/1024

***

sysctl dev.ixl.0.iflib.nm_tx_tmr_us=0 (default)

pkt-gen:
614.766048 main_thread [2639] 13.671 Mpps (14.539 Mpkts 6.562 Gbps in 1063494
usec) 15.75 avg_batch 9 min_space

***

sysctl dev.ixl.0.iflib.nm_tx_tmr_us=100

pkt-gen:
640.652549 main_thread [2639] 13.672 Mpps (13.795 Mpkts 6.562 Gbps in 1009001
usec) 15.79 avg_batch 9 min_space


No changes using XL710 NIC (as a reminder, using FreeBSD 11 without iflib, I
can reach +40Mpps on XL710 using pkt-gen)



[Bug 248652] iflib: netmap pkt-gen large TX performance difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-13 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

Kubilay Kocak  changed:

   What|Removed |Added

Version|CURRENT |12.0-STABLE
   Severity|Affects Only Me |Affects Some People
Summary|netmap: pkt-gen TX huge pps |iflib: netmap pkt-gen large
   |difference between  |TX performance difference
   |11-STABLE and   |between 11-STABLE and
   |12-STABLE/CURRENT on ix &   |12-STABLE/CURRENT on ix &
   |ixl NIC |ixl NIC

--- Comment #15 from Kubilay Kocak  ---
^Triage: Switch Version to earliest affected version/branch



[Bug 248652] netmap: pkt-gen TX huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-13 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

Vincenzo Maffione  changed:

   What|Removed |Added

   Assignee|n...@freebsd.org |vmaffi...@freebsd.org

--- Comment #14 from Vincenzo Maffione  ---
Created attachment 218723
  --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=218723&action=edit
Draft patch to test the netmap tx timer

This is a draft patch that adds support for a per-tx-queue timer dedicated to
netmap.
The timer interval is still not adaptive, but controlled by a per-interface
sysctl, e.g.:

  sysctl dev.ix.0.iflib.nm_tx_tmr_us=500

It would be useful to test pkt-gen transmission on ixl/ix NICs, playing on the
tunable to hopefully see the pps increase.
Values too large should cause the pps to drop. Values too short should cause
the CPU utilization to go up (and possibly the pps to drop a little bit).

Can anyone test this?
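
As a rough sanity check for picking the tunable, note that with a per-queue
timer firing every period microseconds, at most one ring's worth of slots can
be reclaimed per tick, so pps is bounded by slots/period. The slot count below
is the one the driver prints for ix (TX 1/2048); the bound itself is a
back-of-the-envelope model for illustration, not code from the patch:

```python
# Hypothetical sanity check for the nm_tx_tmr_us tunable: with one timer tick
# every period_us microseconds, at most ring_slots packets can be reclaimed
# (and hence retransmitted) per tick.
def pps_ceiling(ring_slots, period_us):
    """Upper bound on packets/s imposed by the per-queue TX timer."""
    return ring_slots / (period_us * 1e-6)

for period in (300, 200, 100, 50, 5):
    print(period, "us ->", pps_ceiling(2048, period) / 1e6, "Mpps ceiling")
# 300 us gives a ~6.8 Mpps ceiling (close to the 6.5 Mpps measured at 300 us
# in comment #16); at 100 us the ceiling (~20.5 Mpps) already exceeds 10G
# line rate (14.88 Mpps), so shorter periods stop helping.
```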



[Bug 248652] netmap: pkt-gen TX huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-11 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #13 from Vincenzo Maffione  ---
(In reply to vistalba from comment #12)
I started to work on it, however I've no suitable hardware to test.

This means that I will need to patch qemu to modify the emulation of an
iflib-backed device with MSI-X interrupts (such as vmxnet3) in such a way that
I can reproduce the problem (e.g. by making the transmission asynchronous w.r.t
the register write that triggers it, like in real hardware).

I will for sure ask for tests on real hardware, but first I need to make some
basic experiments on my own.



[Bug 248652] netmap: pkt-gen TX huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-10-11 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

vistalba  changed:

   What|Removed |Added

 CC||regis...@kad-it.ch

--- Comment #12 from vistalba  ---
Is there any progress about this issue?
Unfortunately I'm stuck on an old OPNsense & Sensei version, because with 20.7
the performance is really bad (<300 Mbit/s). With 20.1 I can reach wire speed
(1GbE) without problems.

Let me know, if I/we can test something to help solve this.



[Bug 248652] netmap: pkt-gen TX huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-09-20 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #11 from Vincenzo Maffione  ---
(In reply to Eric Joyner from comment #10)

Not yet. I've been AFK for a couple of weeks. I should be able to work on it
this week.



[Bug 248652] netmap: pkt-gen TX huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-09-15 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

Eric Joyner  changed:

   What|Removed |Added

 CC||e...@freebsd.org

--- Comment #10 from Eric Joyner  ---
(In reply to Vincenzo Maffione from comment #9)

Any update? This might be something to get into 12.2 if we can.



[Bug 248652] netmap: pkt-gen TX huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-08-31 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #9 from Vincenzo Maffione  ---
Thanks a lot for the tests.
I think the way netmap tx is handled right now needs improvement.

As far as I can tell, in your setup TX interrupts are simply not used (ix and
ixl seem to use softirq for TX interrupt processing).
Your experiments with increasing kern.hz cause the interrupt rate of the OS
timer to increase, and therefore causing the iflib_timer() routine to be called
more often. Being called more often, the TX ring is cleaned up (TX credits
update) more often and therefore the application can submit new TX packets more
often, hence the improved pps.

However, clearly increasing kern.hz is not a viable approach.
I think we should try to use a separate timer for netmap TX credits update,
using higher resolution (e.g. callout_reset_sbt_on()), and maybe try to
dynamically adjust the timer period to become smaller when transmitting at high
rate, and lower when transmitting ad low rate.
I'll try to come up with an experimental patch in the next days.



[Bug 248652] netmap: pkt-gen TX huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-08-28 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #8 from Sylvain Galliano  ---
After looking at iflib_netmap_timer_adjust() & iflib_netmap_txsync() in
sys/net/iflib.c,
I did some tuning of kern.hz:

Still using X520 with 1 queue
ix0: PCI Express Bus: Speed 5.0GT/s Width x8
ix0: netmap queues/slots: TX 1/2048, RX 1/2048

*

/boot/loader.conf:
kern.hz=1000  (default)

pkt-gen result:
204.153802 main_thread [2639] 2.562 Mpps (2.567 Mpkts 1.230 Gbps in 1001994
usec) 465.32 avg_batch 9 min_space
205.155321 main_thread [2639] 2.561 Mpps (2.565 Mpkts 1.229 Gbps in 1001519
usec) 465.45 avg_batch 9 min_space

5500 irq/s

*

/boot/loader.conf:
kern.hz=1999

pkt-gen result:
41.375049 main_thread [2639] 5.117 Mpps (5.222 Mpkts 2.456 Gbps in 1020510
usec) 465.45 avg_batch 9 min_space
42.375546 main_thread [2639] 5.118 Mpps (5.121 Mpkts 2.457 Gbps in 1000497
usec) 465.42 avg_batch 9 min_space

11000 irq/s

2x the performance & irq/s

*

/boot/loader.conf:
kern.hz=2000

pkt-gen result:
797.608080 main_thread [2639] 2.560 Mpps (2.563 Mpkts 1.229 Gbps in 1001001
usec) 465.50 avg_batch 9 min_space
798.609079 main_thread [2639] 2.560 Mpps (2.563 Mpkts 1.229 Gbps in 1000999
usec) 465.41 avg_batch 9 min_space

5500 irq/s

Same performance & irq/s as kern.hz=1000 (due to limit at 2000 in
iflib_netmap_timer_adjust & iflib_netmap_txsync)

*

For the last test, I forced the 'ticks' parameter to '1' in callout_reset_on in
iflib_netmap_timer_adjust & iflib_netmap_txsync,
by increasing the 2000 limit to 2 in both functions,
and set an insane value for kern.hz.

/boot/loader.conf:
kern.hz=1

pkt-gen result:
345.415939 main_thread [2639] 14.880 Mpps (14.890 Mpkts 7.142 Gbps in 1000699
usec) 430.97 avg_batch 9 min_space
346.429134 main_thread [2639] 14.880 Mpps (15.076 Mpkts 7.142 Gbps in 1013196
usec) 432.17 avg_batch 9 min_space

29000 irq/s

Same performance as FreeBSD 11

Looks like the callout_reset_on() path into iflib_timer() has too high a delay.
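
The scaling in the measurements above can be sketched with a tiny model: since
pkt-gen can only refill the ring each time the timer-driven reclaim runs,
throughput is roughly the wakeup rate times the batch size. The rates and
batch sizes below are the measured values from this comment; the model itself
is an assumption for illustration, not iflib code:

```python
# Hypothetical throughput model for the numbers above: each reclaim wakeup
# lets pkt-gen transmit roughly one batch of avg_batch packets.
def estimated_pps(wakeup_rate_hz, avg_batch):
    """Packets/s if one batch is transmitted per reclaim wakeup."""
    return wakeup_rate_hz * avg_batch

print(estimated_pps(5500, 465.45))   # kern.hz=1000: ~2.56M, matching 2.562 Mpps
print(estimated_pps(11000, 465.45))  # kern.hz=1999: ~5.12M, matching 5.117 Mpps
```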



[Bug 230465] ixl: not working in netmap mode

2020-08-19 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230465

--- Comment #39 from Vincenzo Maffione  ---
(In reply to Kubilay Kocak from comment #38)

This is not a suricata issue. There was a suricata issue mentioned in this
thread, but it has been fixed upstream (suricata).

Comment #37 seems unrelated, since it mentions netmap with ixl in 12.x, where
iflib is in use. There was an iflib/netmap bug (see
https://reviews.freebsd.org/D25252) that may explain the problems briefly
mentioned in #37. But that is now in HEAD and stable/12.

This report is about a bug that apparently affects netmap TX over ixl in 11.x
(but not in 12.x and ahead).
This change
https://reviews.freebsd.org/D18984
does some cleanup but it does not fix the bug.
As you can see in the discussion, I reported the issue to the Intel developers,
but as far as I know there have been no changes on their side (in stable/11).
So I can assume that the bug is still there, and it's something that needs the
Intel developers' attention, if someone is still interested in netmap+ixl in
11.x.



[Bug 248652] netmap: pkt-gen TX huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-08-18 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #7 from Vincenzo Maffione  ---
(In reply to Kubilay Kocak from comment #6)
I would say
  ix/ixl and/or NIC driver & iflib
because it's not something related to the netmap module itself, and it is an
optimization which derives from ix/ixl netmap support code, which now is
included within iflib.



[Bug 230465] ixl: not working in netmap mode

2020-08-15 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230465

Kubilay Kocak  changed:

   What|Removed |Added

   See Also||https://github.com/OISF/sur
   ||icata/pull/3616
   Assignee|n...@freebsd.org |vmaffi...@freebsd.org

--- Comment #38 from Kubilay Kocak  ---
^Triage: assign to committer (apparently) resolving

@Vincenzo What is the actual/remaining issues here and the change delta(s), if
any, to be made in order to resolve?

Is https://reviews.freebsd.org/D18984 still relevant (it's closed) and related
to this issue?

Is this just a suricata issue?

Is comment 37 relevant, or unrelated?



[Bug 230465] ixl: not working in netmap mode

2020-08-15 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230465

Mark Linimon  changed:

   What|Removed |Added

   Assignee|b...@freebsd.org|n...@freebsd.org



[Bug 248652] netmap: pkt-gen TX huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-08-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #6 from Kubilay Kocak  ---
(In reply to Vincenzo Maffione from comment #5)

Is this more specific/scoped to:

 - netmap & iflib, or 
 - ix/ixl and/or NIC driver & iflib, or
 - iflib framework (generally)



[Bug 248652] netmap: pkt-gen TX huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-08-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #5 from Vincenzo Maffione  ---
Ok, thanks for the feedback. That means that the issue is that iflib is not
requesting enough TX descriptor writebacks. Some investigation is needed in the
iflib txsync routine.



[Bug 248652] netmap: pkt-gen TX huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-08-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #4 from Sylvain Galliano  ---
You're right, the interrupt rate limits pps on CURRENT + X520 NIC:

pkt-gen, no busy wait

11-stable:  27500 irq/s
CURRENT:5500  irq/s

pkt-gen, with busy wait on CURRENT: +3 irq/s

Regarding NIC irq tunable, the only one related to 'ix' looks good:

# sysctl hw.ix.max_interrupt_rate
hw.ix.max_interrupt_rate: 31250



[Bug 248652] netmap: pkt-gen TX huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-08-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #3 from Vincenzo Maffione  ---
It looks like you get 2.6 Mpps because you are not getting enough interrupts...
have you tried to measure the interrupt rate in the two cases (current vs 11,
no busy wait)?
Intel NICs have tunables to set interrupt coalescing, for both TX and RX. Maybe
playing with those changes the game?



[Bug 248652] netmap: pkt-gen TX huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-08-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

Kubilay Kocak  changed:

   What|Removed |Added

   Keywords||iflib
  Flags||maintainer-feedback?(freebs
   ||d...@intel.com)
 CC||free...@intel.com



[Bug 248652] netmap: pkt-gen TX huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-08-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #2 from Sylvain Galliano  ---
Hi Vincenzo, thanks for your quick reply.

I've disabled all offloads in both 11-STABLE and CURRENT and I got the same
results.

I did another test that may help you:

I've recompiled pkt-gen on current after adding:
 #define BUSYWAIT

Testing NIC Intel X520, 1 queue configured

default pkt-gen:
696.194470 main_thread [2641] 2.560 Mpps (2.570 Mpkts 1.229 Gbps in 1004000
usec) 465.45 avg_batch 9 min_space

with busywait:
733.764470 main_thread [2641] 14.881 Mpps (15.172 Mpkts 7.143 Gbps in 1019565
usec) 344.22 avg_batch 9 min_space

14Mpps, same as 11-STABLE



[Bug 248652] netmap: pkt-gen TX huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-08-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

--- Comment #1 from Vincenzo Maffione  ---
Thanks for reporting.
What I can tell you for sure is that the difference is to be attributed to the
conversion of Intel drivers (em, ix, ixl) to iflib.
This impacted netmap because netmap support for iflib drivers (intel ones, vmx,
mgb, bnxt) is provided directly within the iflib core. IOW, no explicit netmap
code stays within the drivers.

I would say some inherent performance drop is to be expected, due to the
additional indirection introduced by iflib. However, the performance drop
should not be as large as reported in your experiments.
The 2.6 Mpps you get in the first comparison makes me think that you may have
accidentally left Ethernet flow control enabled.
Moreover, the last experiment is rather confusing, since you actually see a
performance improvement... this makes me think that maybe the configuration is
not 100% aligned between the two cases.

Have you tried to disable all the offloads? In 11-stable the driver-specific
netmap code does not program the offloads, whereas in CURRENT (and 12) the
iflib callbacks actually program the offloads also in case of netmap.

  # ifconfig ix0 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6



[Bug 248652] netmap: pkt-gen TX huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-08-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

Kubilay Kocak  changed:

   What|Removed |Added

Summary|[netmap]: pkt-gen tx huge   |netmap: pkt-gen TX huge pps
   |pps difference between  |difference between
   |11-STABLE and   |11-STABLE and
   |12-STABLE/CURRENT on ix &   |12-STABLE/CURRENT on ix &
   |ixl NIC |ixl NIC



[Bug 248652] [netmap]: pkt-gen tx huge pps difference between 11-STABLE and 12-STABLE/CURRENT on ix & ixl NIC

2020-08-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248652

Kubilay Kocak  changed:

   What|Removed |Added

 CC||n...@freebsd.org,
   ||vmaffi...@freebsd.org
  Flags||maintainer-feedback?(vmaffi
   ||o...@freebsd.org),
   ||mfc-stable12?,
   ||mfc-stable11-
   Keywords||needs-qa, performance,
   ||regression
 Status|New |Open
   Assignee|b...@freebsd.org|n...@freebsd.org



[Bug 247647] if_vmx(4): Page fault when opening netmap port (IFLIB/DMA)

2020-08-10 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=247647

Kubilay Kocak  changed:

   What|Removed |Added

  Flags|mfc-stable12?   |mfc-stable12+
   See Also||https://bugs.freebsd.org/bu
   ||gzilla/show_bug.cgi?id=2484
   ||94



[Bug 247647] if_vmx(4): Page fault when opening netmap port (IFLIB/DMA)

2020-08-10 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=247647

--- Comment #26 from commit-h...@freebsd.org ---
A commit references this bug:

Author: vmaffione
Date: Mon Aug 10 17:53:10 UTC 2020
New revision: 364085
URL: https://svnweb.freebsd.org/changeset/base/364085

Log:
  MFC r363378

  iflib: initialize netmap with the correct number of descriptors

  In case the network device has a RX or TX control queue, the correct
  number of TX/RX descriptors is contained in the second entry of the
  isc_ntxd (or isc_nrxd) array, rather than in the first entry.
  This case is correctly handled by iflib_device_register() and
  iflib_pseudo_register(), but not by iflib_netmap_attach().
  If the first entry is larger than the second, this can result in a
  panic. This change fixes the bug by introducing two helper functions
  that also lead to some code simplification.

  PR: 247647
  Differential Revision:  https://reviews.freebsd.org/D25541

Changes:
_U  stable/12/
  stable/12/sys/net/iflib.c
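
The descriptor-count selection described in the log can be sketched as
follows. The field names (isc_ntxqs, isc_ntxd) mirror the commit message, but
this is an illustrative reconstruction, not the actual iflib.c helper:

```python
# Illustrative sketch of the fix described in the commit log: when a device
# has a control queue, isc_ntxd[0] describes the control ring and the data
# rings use isc_ntxd[1], so netmap must be attached with the second entry.
def main_txd_count(isc_ntxqs, isc_ntxd):
    """Number of TX descriptors netmap should be attached with."""
    return isc_ntxd[1] if isc_ntxqs == 2 else isc_ntxd[0]

print(main_txd_count(1, [1024]))        # no control queue -> 1024
print(main_txd_count(2, [4096, 1024]))  # control queue present -> 1024
```

Using the first entry in the second case (4096 here) would make netmap size
its rings larger than the hardware ring, which is how the panic could occur.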



[Bug 247647] if_vmx(4): Page fault when opening netmap port (IFLIB/DMA)

2020-08-06 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=247647

--- Comment #25 from Murat  ---
(In reply to Vincenzo Maffione from comment #24)

Thanks for the update. 

Yes, we've confirmed that this fix resolved kernel crash with vmx. 

I will try to test with ix(4) soon.



[Bug 247647] if_vmx(4): Page fault when opening netmap port (IFLIB/DMA)

2020-08-06 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=247647

--- Comment #24 from Vincenzo Maffione  ---
(In reply to Murat from comment #22)
It is possible that the same bug caused a crash also on ix(4).
Have you tested if the patch makes the bug go away?



[Bug 236584] netmap does not work with VLAN on em driver (was: ifconfig does not honor disabling vlanhwtag)

2020-08-05 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=236584

--- Comment #10 from Murat  ---
Created attachment 217047
  --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=217047&action=edit
dmesg.boot file for the test PC



[Bug 236584] netmap does not work with VLAN on em driver (was: ifconfig does not honor disabling vlanhwtag)

2020-08-05 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=236584

--- Comment #9 from Murat  ---
Created attachment 217046
  --> https://bugs.freebsd.org/bugzilla/attachment.cgi?id=217046&action=edit
em vlanhwtag patch tests



[Bug 236584] netmap does not work with VLAN on em driver (was: ifconfig does not honor disabling vlanhwtag)

2020-08-05 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=236584

--- Comment #8 from Murat  ---
Hi Vincenzo,

Sorry, my bad. Working with different source bases now, I had a look at another
directory.

I've done some tests on an HP Compaq 6305 SFF Desktop. 

Test results and HW configuration are attached.

Please feel free to ask, if you need more tests.



[Bug 247647] if_vmx(4): Page fault when opening netmap port (IFLIB/DMA)

2020-08-05 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=247647

--- Comment #23 from Murat  ---
Follow-up ticket: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=248494



[Bug 247647] if_vmx(4): Page fault when opening netmap port (IFLIB/DMA)

2020-08-05 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=247647

--- Comment #22 from Murat  ---
Vincenzo, thanks. I'll create another ticket for that. 

As for the kernel crash, is there any possibility that this bug was also affecting
ix(4)? We saw a similar crash with that one too.



[Bug 247647] if_vmx(4): Page fault when opening netmap port (IFLIB/DMA)

2020-08-05 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=247647

--- Comment #21 from Vincenzo Maffione  ---
I'm pretty sure what you see is the effect of a separate bug.

It's a regression introduced by porting netmap to iflib. It shows up with those
devices that use multiple "free lists" on the receive side (e.g., isc_nfl > 1).
vmx is one of those devices.
I still have to find the time to look into that (e.g., figure out how these
free lists actually work).

Please feel free to open a separate bug, since this one is about the kernel
crash, and it's fixed.



[Bug 236584] netmap does not work with VLAN on em driver (was: ifconfig does not honor disabling vlanhwtag)

2020-08-05 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=236584

--- Comment #7 from Vincenzo Maffione  ---
(In reply to Murat from comment #6)
I just checked the code, and I don't see the change in the stable/12 tree,
nor in HEAD.

Why did you say it has been integrated already?

If, however, you can do some tests on real em hardware, to check that the patch
does not introduce any regression independently of netmap, I will gladly merge
this into HEAD.



[Bug 247647] if_vmx(4): Page fault when opening netmap port (IFLIB/DMA)

2020-08-04 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=247647

--- Comment #20 from Zhenlei Huang  ---
Hi Murat,

Sorry, I'm not familiar with such a bridge configuration. It seems to be a
different issue that was masked by this one. CC'ing Vincenzo Maffione.


