Re: No free hugepages reported

2024-04-02 Thread Lokesh Chakka
hi,

To add more information: the server I'm using has two CPU sockets and two
NUMA nodes, one numbered Node 2 and the other Node 6.
One more observation: the following command executes successfully,
$ sudo dpdk-hugepages.py -p 1G --setup 2G -n 2
as does this one:
$ sudo dpdk-hugepages.py -p 1G --setup 2G -n 6
After executing the first command, 1G huge pages are created. After
executing the second command, the huge pages under node 2 are deleted.
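For reference, a minimal sketch of reserving 1G pages on both nodes directly
through sysfs, which may avoid the behaviour above where setting up the second
node clears the first; node numbers 2 and 6 come from the observation above and
the page counts are placeholders:

# reserve two 1G pages each on NUMA nodes 2 and 6
echo 2 | sudo tee /sys/devices/system/node/node2/hugepages/hugepages-1048576kB/nr_hugepages
echo 2 | sudo tee /sys/devices/system/node/node6/hugepages/hugepages-1048576kB/nr_hugepages
# mount a hugetlbfs instance for 1G pages if one is not already mounted
sudo mkdir -p /dev/hugepages1G
sudo mount -t hugetlbfs -o pagesize=1G nodev /dev/hugepages1G
# verify the per-node counts
cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages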

Following is the output of the dpdk-testpmd command:


EAL: Detected CPU lcores: 128
EAL: Detected NUMA nodes: 8
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No free 1048576 kB hugepages reported on node 0
EAL: No free 1048576 kB hugepages reported on node 1
EAL: No free 1048576 kB hugepages reported on node 3
EAL: No free 1048576 kB hugepages reported on node 4
EAL: No free 1048576 kB hugepages reported on node 5
EAL: No free 1048576 kB hugepages reported on node 6
EAL: No free 1048576 kB hugepages reported on node 7
EAL: VFIO support initialized
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
EAL: Using IOMMU type 1 (Type 1)
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
EAL: Probe PCI driver: net_ice (8086:1592) device: :63:00.0 (socket 3)
set_mempolicy: Invalid argument
PANIC in eth_dev_shared_data_prepare():
Cannot allocate ethdev shared data
0: /lib/x86_64-linux-gnu/librte_eal.so.23 (rte_dump_stack+0x41)
[7dbb0fe000b1]
1: /lib/x86_64-linux-gnu/librte_eal.so.23 (__rte_panic+0xc1) [7dbb0fde11c7]
2: /lib/x86_64-linux-gnu/librte_ethdev.so.23 (7dbb0fedb000+0x8b16)
[7dbb0fee3b16]
3: /lib/x86_64-linux-gnu/librte_ethdev.so.23 (rte_eth_dev_allocate+0x31)
[7dbb0feef971]
4: /usr/lib/x86_64-linux-gnu/dpdk/pmds-23.0/librte_net_ice.so.23
(7dbb0f70e000+0x67465) [7dbb0f775465]
5: /usr/lib/x86_64-linux-gnu/dpdk/pmds-23.0/librte_bus_pci.so.23
(7dbb0fc64000+0x4c76) [7dbb0fc68c76]
6: /usr/lib/x86_64-linux-gnu/dpdk/pmds-23.0/librte_bus_pci.so.23
(7dbb0fc64000+0x8af4) [7dbb0fc6caf4]
7: /lib/x86_64-linux-gnu/librte_eal.so.23 (rte_bus_probe+0x23)
[7dbb0fdeeab3]
8: /lib/x86_64-linux-gnu/librte_eal.so.23 (7dbb0fdd4000+0x123bf)
[7dbb0fde63bf]
9: dpdk-testpmd (5813e0022000+0x45150) [5813e0067150]
10: /lib/x86_64-linux-gnu/libc.so.6 (7dbb0ec0+0x28150) [7dbb0ec28150]
11: /lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main+0x89) [7dbb0ec28209]
12: dpdk-testpmd (5813e0022000+0x48e55) [5813e006ae55]
Aborted





Thanks & Regards
--
Lokesh Chakka.


On Mon, Apr 1, 2024 at 2:50 AM Lokesh Chakka 
wrote:

> hi Stephen,
>
> Thanks for the reply. Following is the observation...
>
> *
> $ dpdk-hugepages.py -s
> Node Pages Size Total
> 2    512   2Mb  1Gb
> 6    512   2Mb  1Gb
>
> Hugepages mounted on /dev/hugepages /mnt/huge
>
> $ sudo dpdk-hugepages.py -p 1G --setup 2G
> Unable to set pages (0 instead of 2 in
> /sys/devices/system/node/node4/hugepages/hugepages-1048576kB/nr_hugepages).
> *********
>
>
> Regards
> --
> Lokesh Chakka.
>
>
> On Mon, Apr 1, 2024 at 12:36 AM Stephen Hemminger <
> step...@networkplumber.org> wrote:
>
>> On Sun, 31 Mar 2024 16:28:19 +0530
>> Lokesh Chakka  wrote:
>>
>> > Hello,
>> >
>> > I've installed dpdk in Ubuntu 23.10 with the command "sudo apt -y
>> install
>> > dpdk*"
>> >
>> > added  "nodev /mnt/huge hugetlbfs pagesize=1GB 0 0" in /etc/fstab
>> > added "vm.nr_hugepages=1024" in /etc/sysctl.conf
>> >
>> > rebooted the machine and then did devbind using the following command:
>> >
>> > sudo modprobe vfio-pci && sudo dpdk-devbind.py --bind=vfio-pci 63:00.0
>> > 63:00.1
>> >
>> > Huge page info is as follows :
>> >
>> > *
>> > $ cat /proc/meminfo | grep Huge
>> > AnonHugePages:  6144 kB
>> > ShmemHugePages:0 kB
>> > FileHugePages: 0 kB
>> > HugePages_Total:1024
>> > HugePages_Free: 1023
>> > HugePages_Rsvd:0
>> > HugePages_Surp:0
>> > Hugepagesize:   2048 kB
>> > Hugetlb: 2097152 kB
>> > *
>>
>> Your hugepages are not setup correctly. The mount is for 1G pages
>> and the sysctl entry makes 2M pages.
>>
>> Did you try using the dpdk-hugepages script?
>>
>


Re: No free hugepages reported

2024-03-31 Thread Lokesh Chakka
hi Stephen,

Thanks for the reply. Following is the observation...

*
$ dpdk-hugepages.py -s
Node Pages Size Total
2    512   2Mb  1Gb
6    512   2Mb  1Gb

Hugepages mounted on /dev/hugepages /mnt/huge

$ sudo dpdk-hugepages.py -p 1G --setup 2G
Unable to set pages (0 instead of 2 in
/sys/devices/system/node/node4/hugepages/hugepages-1048576kB/nr_hugepages).
*
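The failure on node4 above often means that node has little or no local memory
(on some multi-socket systems, certain NUMA nodes have no memory attached). A
quick, hedged way to check which nodes actually have memory before reserving
pages per node:

# nodes with memory attached
cat /sys/devices/system/node/has_memory
# per-node memory sizes (needs the numactl package)
numactl -H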


Regards
--
Lokesh Chakka.


On Mon, Apr 1, 2024 at 12:36 AM Stephen Hemminger <
step...@networkplumber.org> wrote:

> On Sun, 31 Mar 2024 16:28:19 +0530
> Lokesh Chakka  wrote:
>
> > Hello,
> >
> > I've installed dpdk in Ubuntu 23.10 with the command "sudo apt -y install
> > dpdk*"
> >
> > added  "nodev /mnt/huge hugetlbfs pagesize=1GB 0 0" in /etc/fstab
> > added "vm.nr_hugepages=1024" in /etc/sysctl.conf
> >
> > rebooted the machine and then did devbind using the following command:
> >
> > sudo modprobe vfio-pci && sudo dpdk-devbind.py --bind=vfio-pci 63:00.0
> > 63:00.1
> >
> > Huge page info is as follows :
> >
> > *
> > $ cat /proc/meminfo | grep Huge
> > AnonHugePages:  6144 kB
> > ShmemHugePages:0 kB
> > FileHugePages: 0 kB
> > HugePages_Total:1024
> > HugePages_Free: 1023
> > HugePages_Rsvd:0
> > HugePages_Surp:0
> > Hugepagesize:   2048 kB
> > Hugetlb: 2097152 kB
> > *
>
> Your hugepages are not setup correctly. The mount is for 1G pages
> and the sysctl entry makes 2M pages.
>
> Did you try using the dpdk-hugepages script?
>


No free hugepages reported

2024-03-31 Thread Lokesh Chakka
Hello,

I've installed dpdk in Ubuntu 23.10 with the command "sudo apt -y install
dpdk*"

added  "nodev /mnt/huge hugetlbfs pagesize=1GB 0 0" in /etc/fstab
added "vm.nr_hugepages=1024" in /etc/sysctl.conf

rebooted the machine and then did devbind using the following command:

sudo modprobe vfio-pci && sudo dpdk-devbind.py --bind=vfio-pci 63:00.0
63:00.1

Huge page info is as follows :

*
$ cat /proc/meminfo | grep Huge
AnonHugePages:  6144 kB
ShmemHugePages:0 kB
FileHugePages: 0 kB
HugePages_Total:1024
HugePages_Free: 1023
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
Hugetlb: 2097152 kB
*
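For what it's worth, the HugePages_* counters in /proc/meminfo only cover the
default huge page size (2 MB here); a hedged way to see every page size and its
per-node distribution:

grep . /sys/kernel/mm/hugepages/hugepages-*/nr_hugepages
grep . /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages
# or simply
dpdk-hugepages.py -s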

output of "dpdk-devbind.py -s" is as follows :

*

Network devices using DPDK-compatible driver

:63:00.0 'Ethernet Controller E810-C for QSFP 1592' drv=vfio-pci
unused=ice
:63:00.1 'Ethernet Controller E810-C for QSFP 1592' drv=vfio-pci
unused=ice

*

I am seeing the following error when I try to run dpdk-test:

*
$ sudo dpdk-test
EAL: Detected CPU lcores: 128
EAL: Detected NUMA nodes: 8
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No free 2048 kB hugepages reported on node 0
EAL: No free 2048 kB hugepages reported on node 1
EAL: No free 2048 kB hugepages reported on node 3
EAL: No free 2048 kB hugepages reported on node 4
EAL: No free 2048 kB hugepages reported on node 5
EAL: No free 2048 kB hugepages reported on node 7
EAL: No free 1048576 kB hugepages reported on node 0
EAL: No free 1048576 kB hugepages reported on node 1
EAL: No free 1048576 kB hugepages reported on node 2
EAL: No free 1048576 kB hugepages reported on node 3
EAL: No free 1048576 kB hugepages reported on node 4
EAL: No free 1048576 kB hugepages reported on node 5
EAL: No free 1048576 kB hugepages reported on node 6
EAL: No free 1048576 kB hugepages reported on node 7
EAL: VFIO support initialized
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
EAL: Using IOMMU type 1 (Type 1)
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
EAL: Probe PCI driver: net_ice (8086:1592) device: :63:00.0 (socket 3)
set_mempolicy: Invalid argument
set_mempolicy: Invalid argument
PANIC in eth_dev_shared_data_prepare():
Cannot allocate ethdev shared data
0: /lib/x86_64-linux-gnu/librte_eal.so.23 (rte_dump_stack+0x41)
[788e7385d0b1]
1: /lib/x86_64-linux-gnu/librte_eal.so.23 (__rte_panic+0xc1) [788e7383e1c7]
2: /lib/x86_64-linux-gnu/librte_ethdev.so.23 (788e736f5000+0x8b16)
[788e736fdb16]
3: /lib/x86_64-linux-gnu/librte_ethdev.so.23 (rte_eth_dev_allocate+0x31)
[788e73709971]
4: /usr/lib/x86_64-linux-gnu/dpdk/pmds-23.0/librte_net_ice.so.23.0
(788e705d1000+0x67465) [788e70638465]
5: /lib/x86_64-linux-gnu/librte_bus_pci.so.23 (788e72fcc000+0x4c76)
[788e72fd0c76]
6: /lib/x86_64-linux-gnu/librte_bus_pci.so.23 (788e72fcc000+0x8af4)
[788e72fd4af4]
7: /lib/x86_64-linux-gnu/librte_eal.so.23 (rte_bus_probe+0x23)
[788e7384bab3]
8: /lib/x86_64-linux-gnu/librte_eal.so.23 (788e73831000+0x123bf)
[788e738433bf]
9: dpdk-test (59eca0915000+0x6c9e7) [59eca09819e7]
10: /lib/x86_64-linux-gnu/libc.so.6 (788e72c0+0x28150) [788e72c28150]
11: /lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main+0x89) [788e72c28209]
12: dpdk-test (59eca0915000+0x6ee85) [59eca0983e85]
Aborted
*
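In case it helps while retrying: a hedged sketch of restricting the run to the
NUMA nodes that actually have hugepages, so EAL does not try to place memory on
the empty nodes. The core list and per-node sizes are placeholders; --socket-mem
takes one value in MB per NUMA node, in node order:

sudo dpdk-test -l 0-1 --socket-mem=0,0,512,0,0,0,512,0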

Can someone please help me identify the issue?


Thanks & Regards
--
Lokesh Chakka.


Re: skeleton code failing

2022-08-25 Thread Lokesh Chakka
Hello,

As I am not able to get sufficient support from the Intel support team or
from the DPDK forum, the Intel support team asked me to raise a ticket
with Intel Premium Support. Can someone help me get registered there, as it
is asking for an Intel agent?

Thanks & Regards
--
Lokesh Chakka.


On Wed, Jul 13, 2022 at 12:56 PM Lokesh Chakka <
lvenkatakumarcha...@gmail.com> wrote:

> Dear David,
>
> following is some more stuff i did
> ==
> $ sudo dpdk-devbind.py -b vfio-pci 83:00.0
> $ sudo dpdk-devbind.py -b vfio-pci 83:00.1
> $ sudo dpdk-devbind.py -b uio_pci_generic 83:00.0 83:00.1
> Error: Driver 'uio_pci_generic' is not loaded.
> $ sudo dpdk-devbind.py -b igb_uio 83:00.0 83:00.1
> Error: Driver 'igb_uio' is not loaded.
> $ sudo dpdk-devbind.py -b vfio-pci 83:00.0 83:00.1
> Notice: :83:00.0 already bound to driver vfio-pci, skipping
> Notice: :83:00.1 already bound to driver vfio-pci, skipping
> ==
> ~/Desktop/dpdk_examples/skeleton$ gcc main.c -g `pkg-config --cflags
> libdpdk --libs libdpdk`
> lokesh@lokesh-ProLiant-DL385-Gen10:~/Desktop/dpdk_examples/skeleton$ sudo
> ./a.out
> EAL: Detected CPU lcores: 64
> EAL: Detected NUMA nodes: 4
> EAL: Detected shared linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: No available 1048576 kB hugepages reported
> EAL: VFIO support initialized
> EAL: Using IOMMU type 1 (Type 1)
> EAL: Probe PCI driver: net_bnxt (14e4:1750) device: :83:00.0 (socket 2)
> EAL: Probe PCI driver: net_bnxt (14e4:1750) device: :83:00.1 (socket 2)
> TELEMETRY: No legacy callbacks, legacy socket not created
> Port 0 MAC: bc 97 e1 ce 84 f0
> Port 1 MAC: bc 97 e1 ce 84 f1
>
> WARNING: Too many lcores enabled. Only 1 used.
> WARNING, port 0 is on remote NUMA node to polling thread.
> Performance will not be optimal.
> WARNING, port 1 is on remote NUMA node to polling thread.
> Performance will not be optimal.
>
> Core 0 forwarding packets. [Ctrl+C to quit]
> ^C
> ==
> After a few seconds, I pressed Ctrl+C.
>
> Surprisingly, the cards are not showing up even in ifconfig.
>
>
>
> Thanks & Regards
> --
> Lokesh Chakka.
>
>
> On Wed, Jul 13, 2022 at 12:43 PM Lokesh Chakka <
> lvenkatakumarcha...@gmail.com> wrote:
>
>> Dear David,
>>
>> =
>> $ lspci | grep -i broadcom
>> 83:00.0 Ethernet controller: Broadcom Inc. and subsidiaries BCM57508
>> NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet (rev 11)
>> 83:00.1 Ethernet controller: Broadcom Inc. and subsidiaries BCM57508
>> NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet (rev 11)
>> $ lspci -n -s 83:00.0
>> 83:00.0 0200: 14e4:1750 (rev 11)
>> =
>>
>> I am compiling my code like this :
>> =
>> gcc main.c `pkg-config --cflags libdpdk --libs libdpdk`
>> =
>>
>> Hence it is statically linked code.
>> If I try
>> $ dpdk-pmdinfo.py ./a.out
>>
>> But I am not seeing any output
>>
>>
>>
>> Thanks & Regards
>> --
>> Lokesh Chakka.
>>
>>
>> On Wed, Jul 13, 2022 at 12:22 PM David Marchand <
>> david.march...@redhat.com> wrote:
>>
>>> On Wed, Jul 13, 2022 at 7:35 AM Lokesh Chakka
>>>  wrote:
>>> > Would like to understand if I am missing something. I am new to this
>>> platform.
>>> > rte_eth_dev_count_avail is returning zero.
>>> > OS is Ubuntu 22.04. DPDK is latest version.
>>> > Cards are being detected by Linux. Ifconfig is showing the cards up.
>>> LED is also glowing.
>>>
>>> Indeed, DPDK provides a userspace driver for some NetXtreme nics
>>> (which is net/bnxt).
>>> This userspace driver does not rely on the bnxt Linux kernel driver.
>>> IOW, this card being detected and working with the Linux kernel does
>>> not automatically mean that this nic can work with DPDK.
>>>
>>> We need more info on your nic, first.
>>>
>>> Can you share the pci id of this nic (like running lspci -n -s
>>> $pci_address)?
>>> It should be a 14e4:.
>>>
>>> Then you can check this  against what your dpdk application supports.
>>>
>>> If it is a statically linked application, you can run:
>>> $ dpdk-pmdinfo.py /path/to/your/application

Re: DPDK 22.03 substantially slower with Intel E810-C

2022-08-04 Thread Lokesh Chakka
You are lucky. At least it is working. The E810-2CQDA2 Intel card I have
purchased shuts down immediately after devbind. Even the technical
support is pathetic: it has been one week since I raised the issue, with no
substantive response so far.

--
Lokesh


On Thu, Aug 4, 2022, 15:52 Filip Janiszewski 
wrote:

> Hello,
>
> DPDK 22.03 contains quite some changes to the ICE driver and the
> implementation for the Intel E810-C card. I'm running some tests, and
> while switching to this new version from 21.02 I see a degradation of
> performance of around 30% using 4 capture cores at a 40Gbps rate (64-byte
> frames); packets are random and evenly distributed among the cores.
>
> Leaving unchanged all the other factors and only updating the library I
> start observing substantial packet drops, using igb_uio and firmware
> version 3.00 0x80008256 1.2992.0, I do not see any drop with 21.02.
>
> Is there any novel configuration to be made during initialization of the
> card, or something like that? Is anyone else observing a performance drop
> with the new DPDK?
>
> Thanks
>
> --
> BR, Filip
> +48 666 369 823
>


Re: intel 100 Gbps Network card shutting down after enabling dpdk

2022-08-02 Thread Lokesh Chakka
Dear Harry,

The concern is not static linking versus dynamic linking. The concern is that
the card is shutting down immediately after devbind. I can see this clearly, as
the green LEDs turn off immediately after executing devbind.
However, I have tried it, and the following is the observation.
+
$ lspci | grep -i intel
c3:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C
for QSFP (rev 02)
c3:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-C
for QSFP (rev 02)
$ sudo dpdk-devbind.py -b vfio-pci c3:00.0 c3:00.1
$ cat one.c
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char *argv[])
{
unsigned nb_ports;

int ret = rte_eal_init(argc, argv);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Error with EAL initialization\n");

nb_ports = rte_eth_dev_count_avail();
fprintf( stderr, "%s %d nb_ports: %u\n", __func__, __LINE__, nb_ports );
}
$ gcc one.c `pkg-config --cflags libdpdk --libs --static libdpdk`
$ sudo ./a.out
EAL: Detected CPU lcores: 64
EAL: Detected NUMA nodes: 4
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 1048576 kB hugepages reported
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ice (8086:1592) device: :c3:00.0 (socket 3)
set_mempolicy: Invalid argument
EAL: Releasing PCI mapped resource for :c3:00.0
EAL: Calling pci_unmap_resource for :c3:00.0 at 0x410200
EAL: Calling pci_unmap_resource for :c3:00.0 at 0x410400
EAL: Requested device :c3:00.0 cannot be used
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ice (8086:1592) device: :c3:00.1 (socket 3)
set_mempolicy: Invalid argument
EAL: Releasing PCI mapped resource for :c3:00.1
EAL: Calling pci_unmap_resource for :c3:00.1 at 0x410401
EAL: Calling pci_unmap_resource for :c3:00.1 at 0x410601
EAL: Requested device :c3:00.1 cannot be used
main 20
main 23 nb_ports: 0

+
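A hedged sketch of getting more detail on why the probe backs out, using EAL
log levels; the exact ice log type name below is an assumption:

sudo ./a.out --log-level=lib.eal:debug --log-level='pmd.net.ice*:debug'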

Please let me know if any more information is required.


Thanks & Regards
--
Lokesh Chakka.


On Tue, Aug 2, 2022 at 2:49 PM Van Haaren, Harry 
wrote:

> [Top posting, as HTML email]
>
>
>
> HI Chakka,
>
>
>
> When linking against DPDK in Shared-library mode, the .so files to handle
> drivers are not automatically loaded;
>
>
> https://doc.dpdk.org/guides-20.05/linux_gsg/linux_eal_parameters.html#linux-specific-eal-parameters
>
>
>
> Notice the -d flag in particular: "-d <path to shared object or directory>", when used and pointed to the correct .so for the ice PMD
> file,
>
> DPDK will be able to initialize the NIC.
>
>
>
> As a workaround/alternative, static-linking against DPDK will avoid the
> requirement to use "-d" EAL arg to load the PCI device .so drivers.
>
>
>
> Hope that helps, -Harry
>
>
>
>
>
> *From:* Lokesh Chakka 
> *Sent:* Monday, August 1, 2022 9:20 AM
> *To:* Tkachuk, Georgii 
> *Cc:* users 
> *Subject:* Re: intel 100 Gbps Network card shutting down after enabling
> dpdk
>
>
>
> After booting the server, the following are commands I have executed.
> Immediately after doing devbind -b, cards are going down. I am able to see
> that green leds are turning off.
>
>
>
> following is the two lines of code and output of code. number of ports are
> reporting as zero
>
>
>
> 
>
>
>
> $ lspci | grep -i intel
> c3:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C
> for QSFP (rev 02)
> c3:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-C
> for QSFP (rev 02)
>
> $ sudo dpdk-devbind.py -b vfio-pci c3:00.0 c3:00.1
> $ sudo dpdk-devbind.py -s
>
> Network devices using DPDK-compatible driver
> 
> :c3:00.0 'Ethernet Controller E810-C for QSFP 1592' drv=vfio-pci
> unused=ice
> :c3:00.1 'Ethernet Controller E810-C for QSFP 1592' drv=vfio-pci
> unused=ice
>
> Network devices using kernel driver
> ===
> :05:00.0 'NetXtreme BCM5719 Gigabit Ethernet PCIe 1657' if=eno1
> drv=tg3 unused=vfio-pci
> :05:00.1 'NetXtreme BCM5719 Gigabit Ethernet PCIe 1657' if=eno2
> drv=tg3 unused=vfio-pci
> :05:00.2 'NetXtreme BCM5719 Gigabit Ethernet PCIe 1657' if=eno3
> drv=tg3 unused=vfio-pci
> :05:00.3 'NetXtreme BCM5719 Gigabit Ethernet PCIe 1657' if=eno4
> drv=tg3 unused=vfio-pci
> :83:00.0 'BCM57508 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb
> Ethernet 1750' if=ens3f0np0 drv=bnxt_en unused=vfio-pci
> :83:00.1 'BCM57508 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb
> Et

Re: intel 100 Gbps Network card shutting down after enabling dpdk

2022-08-01 Thread Lokesh Chakka
After booting the server, the following are the commands I have executed.
Immediately after doing devbind -b, the cards go down. I am able to see
that the green LEDs turn off.

Following is the small test code and its output. The number of ports is
reported as zero.



$ lspci | grep -i intel
c3:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C
for QSFP (rev 02)
c3:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-C
for QSFP (rev 02)

$ sudo dpdk-devbind.py -b vfio-pci c3:00.0 c3:00.1
$ sudo dpdk-devbind.py -s

Network devices using DPDK-compatible driver

:c3:00.0 'Ethernet Controller E810-C for QSFP 1592' drv=vfio-pci
unused=ice
:c3:00.1 'Ethernet Controller E810-C for QSFP 1592' drv=vfio-pci
unused=ice

Network devices using kernel driver
===
:05:00.0 'NetXtreme BCM5719 Gigabit Ethernet PCIe 1657' if=eno1 drv=tg3
unused=vfio-pci
:05:00.1 'NetXtreme BCM5719 Gigabit Ethernet PCIe 1657' if=eno2 drv=tg3
unused=vfio-pci
:05:00.2 'NetXtreme BCM5719 Gigabit Ethernet PCIe 1657' if=eno3 drv=tg3
unused=vfio-pci
:05:00.3 'NetXtreme BCM5719 Gigabit Ethernet PCIe 1657' if=eno4 drv=tg3
unused=vfio-pci
:83:00.0 'BCM57508 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet
1750' if=ens3f0np0 drv=bnxt_en unused=vfio-pci
:83:00.1 'BCM57508 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet
1750' if=ens3f1np1 drv=bnxt_en unused=vfio-pci

No 'Baseband' devices detected
==

Crypto devices using kernel driver
==
:02:00.1 'Zeppelin Cryptographic Coprocessor NTBCCP 1468' drv=ccp
unused=vfio-pci
:03:00.2 'Family 17h (Models 00h-0fh) Platform Security Processor 1456'
drv=ccp unused=vfio-pci
:41:00.1 'Zeppelin Cryptographic Coprocessor NTBCCP 1468' drv=ccp
unused=vfio-pci
:42:00.2 'Family 17h (Models 00h-0fh) Platform Security Processor 1456'
drv=ccp unused=vfio-pci
:81:00.1 'Zeppelin Cryptographic Coprocessor NTBCCP 1468' drv=ccp
unused=vfio-pci
:82:00.2 'Family 17h (Models 00h-0fh) Platform Security Processor 1456'
drv=ccp unused=vfio-pci
:c1:00.1 'Zeppelin Cryptographic Coprocessor NTBCCP 1468' drv=ccp
unused=vfio-pci
:c2:00.2 'Family 17h (Models 00h-0fh) Platform Security Processor 1456'
drv=ccp unused=vfio-pci

No 'DMA' devices detected
=

No 'Eventdev' devices detected
==

No 'Mempool' devices detected
=

No 'Compress' devices detected
==

No 'Misc (rawdev)' devices detected
===

No 'Regex' devices detected
===

$ cat one.c
#include <rte_eal.h>
#include <rte_ethdev.h>


int main(int argc, char *argv[])
{
struct rte_mempool *mbuf_pool;
unsigned nb_ports;
uint16_t portid;

/* Initialization of the Environment Abstraction Layer (EAL). 8< */
int ret = rte_eal_init(argc, argv);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Error with EAL initialization\n");
/* >8 End of initialization the Environment Abstraction Layer (EAL). */

argc -= ret;
argv += ret;

fprintf( stderr, "%s %d\n", __func__, __LINE__ );
/* Check that there is an even number of ports to send/receive on. */
nb_ports = rte_eth_dev_count_avail();
fprintf( stderr, "%s %d nb_ports: %u\n", __func__, __LINE__, nb_ports );
}

$ gcc one.c `pkg-config --cflags libdpdk --libs libdpdk`
$ sudo ./a.out
EAL: Detected CPU lcores: 64
EAL: Detected NUMA nodes: 4
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 1048576 kB hugepages reported
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ice (8086:1592) device: :c3:00.0 (socket 3)
set_mempolicy: Invalid argument
EAL: Releasing PCI mapped resource for :c3:00.0
EAL: Calling pci_unmap_resource for :c3:00.0 at 0x410200
EAL: Calling pci_unmap_resource for :c3:00.0 at 0x410400
EAL: Requested device :c3:00.0 cannot be used
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ice (8086:1592) device: :c3:00.1 (socket 3)
set_mempolicy: Invalid argument
EAL: Releasing PCI mapped resource for :c3:00.1
EAL: Calling pci_unmap_resource for :c3:00.1 at 0x410401
EAL: Calling pci_unmap_resource for :c3:00.1 at 0x410601
EAL: Requested device :c3:00.1 cannot be used
TELEMETRY: No legacy callbacks, legacy socket not created
main 20
main 23 nb_ports: 0

++++++++


Thanks & Regards
--
Lokesh Chakka.


On Mon, Aug 1, 2022 at 12:44 PM Tkachuk, Georgii 
wrote:

> Please check “sudo dpdk-devbind.py -s” to see if both ports are bound to
> vfio-pci
>
> And some questions:
>
>
>
> What DPDK app are you running?
>
>

intel 100 Gbps Network card shutting down after enabling dpdk

2022-08-01 Thread Lokesh Chakka
hello,

Recently I bought an Intel network card:

https://ark.intel.com/content/www/us/en/ark/products/210969/intel-ethernet-network-adapter-e8102cqda2.html

The moment I do devbind, the cards shut down.

$ lspci | grep -i intel
c3:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C
for QSFP (rev 02)
c3:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-C
for QSFP (rev 02)

$ sudo dpdk-devbind.py -b vfio-pci c3:00.0 c3:00.1

After this, the cards go down. EAL gives the following error:

EAL: Requested device :c3:00.1 cannot be used
EAL: Error - exiting with code: 1
Cause: Error: number of ports must be even
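A couple of hedged sanity checks that are often useful at this point, to
confirm the IOMMU is active and the ports really landed in a VFIO group after
the bind:

dmesg | grep -i -e iommu -e vfio
ls -l /dev/vfio/
sudo dpdk-devbind.py -s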

Did anyone face this issue, or does anyone know how to fix it?

Thanks & Regards
--
Lokesh Chakka.


Re: skeleton code failing (Lokesh Chakka)

2022-07-14 Thread Lokesh Chakka
Dear Vipin,

Thanks a lot for your valuable assistance and observations. I can't
sacrifice performance. I will look into pdump.

Best Regards
--
Lokesh Chakka.


On Thu, Jul 14, 2022 at 1:03 PM Varghese, Vipin 
wrote:

> [Public]
>
>
>
> [LC] The card I am having is
> https://www.broadcom.com/products/ethernet-connectivity/network-adapters/p2100g
> However, the cards come up after waiting for around 10 minutes.
>
> [VV] I do face a similar issue, hence I have recommended: `Observation: due to
> some hardware or firmware bug, auto-negotiation takes more time with the current DPDK
> BNXT PMD. Hence my advice is to try to wait longer for link state up in DPDK.`
> Please try reaching out to the Broadcom PMD maintainer
> `ajit.khapa...@broadcom.com`. Once the solution is identified, please share it too.
>
>
>
>
>
> [LC] If the netdevice is not available, how can I capture the packets using
> Wireshark?
>
> [VV] Please explore the DPDK tool PDUMP:
> https://doc.dpdk.org/guides/tools/pdump.html
>
>
>
> [LC] The interfaces are not visible to Wireshark. A very basic requirement
> is to send packets, capture them, and see the contents.
>
> [VV] If you want the kernel netdev visible and usable under DPDK, I
> recommend using the LIBPCAP PMD, sacrificing performance and some higher
> functionality.
>
>
>
> *From:* Lokesh Chakka 
> *Sent:* Thursday, July 14, 2022 12:56 PM
> *To:* Varghese, Vipin 
> *Cc:* users@dpdk.org; Yigit, Ferruh ; Tummala,
> Sivaprasad 
> *Subject:* Re: skeleton code failing (Lokesh Chakka)
>
>
>
> [CAUTION: External Email]
>
>
> The card I am having is
> https://www.broadcom.com/products/ethernet-connectivity/network-adapters/p2100g
>
> However, cards are up after waiting for around 10 minutes.
>
>
>
> Thanks for the valuable input. skeleton code is running till I press
> ctrl+c.
>
> Now I have a big concern.
>
> If netdevice is not available, how can I capture the packets using
> wireshark?
>
> Interfaces are not visible to the wireshark. Very basic requirement is to
> send the packets, capture them and see the contents.
>
>
>
>
> Thanks & Regards
> --
> Lokesh Chakka.
>
>
>
>
>
> On Thu, Jul 14, 2022 at 11:46 AM Varghese, Vipin 
> wrote:
>
> [AMD Official Use Only - General]
>
>
>
> Is this not the Broadcom NetXtreme card? Please refer to
> http://doc.dpdk.org/guides/nics/bnxt.html
> Once you bind with `igb_uio, vfio_pci or uio_pci_generic`, all Linux
> netdevice instances will no longer be available. This is because it is not a
> `port representor`.
>
>
>
> Observation: due to some hardware or firmware bug, auto-negotiation takes more time
> with the current DPDK BNXT PMD. Hence my advice is to try to wait longer for
> link state up in DPDK.
>
> You can verify the same with testpmd.
>
>
>
> *From:* Lokesh Chakka 
> *Sent:* Thursday, July 14, 2022 11:11 AM
> *To:* Varghese, Vipin 
> *Cc:* users@dpdk.org; Yigit, Ferruh ; Tummala,
> Sivaprasad 
> *Subject:* Re: skeleton code failing (Lokesh Chakka)
>
>
>
> [CAUTION: External Email]
>
> I have one more observation here.
>
> After "$ sudo dpdk-devbind.py -b vfio-pci 83:00.0 83:00.1"
>
>
>
> Cards are shutting down. ifconfig is not showing the cards. reinsert of
> sfp is also not bringing the cards up. I had to restart the server. Not
> sure why the cards are going down.
>
>
>
> Thanks & Regards
> --
> Lokesh Chakka.
>
>
>
>
>

Re: skeleton code failing (Lokesh Chakka)

2022-07-14 Thread Lokesh Chakka
The card I am having is
https://www.broadcom.com/products/ethernet-connectivity/network-adapters/p2100g
However, the cards come up after waiting for around 10 minutes.

Thanks for the valuable input. The skeleton code runs until I press Ctrl+C.
Now I have a big concern.
If the netdevice is not available, how can I capture the packets using
Wireshark?
The interfaces are not visible to Wireshark. A very basic requirement is to
send packets, capture them, and see the contents.
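For reference, a minimal sketch of the pdump approach recommended in the reply
quoted below: run the DPDK application as usual (as the primary process), then
attach the pdump tool as a secondary process and open the resulting pcap in
Wireshark; the port, queue and file name are placeholders:

sudo dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=/tmp/port0_rx.pcap'
# afterwards
wireshark /tmp/port0_rx.pcap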


Thanks & Regards
--
Lokesh Chakka.


On Thu, Jul 14, 2022 at 11:46 AM Varghese, Vipin 
wrote:

> [AMD Official Use Only - General]
>
>
>
> Is this not the Broadcom NetXtreme card? Please refer to
> http://doc.dpdk.org/guides/nics/bnxt.html. Once you bind with `igb_uio,
> vfio_pci or uio_pci_generic`, all Linux netdevice instances will no longer be
> available. This is because it is not a `port representor`.
>
>
>
> Observation: due to some hardware or firmware bug, auto-negotiation takes more time
> with the current DPDK BNXT PMD. Hence my advice is to try to wait longer for
> link state up in DPDK.
>
> You can verify the same with testpmd.
>
>
>
> *From:* Lokesh Chakka 
> *Sent:* Thursday, July 14, 2022 11:11 AM
> *To:* Varghese, Vipin 
> *Cc:* users@dpdk.org; Yigit, Ferruh ; Tummala,
> Sivaprasad 
> *Subject:* Re: skeleton code failing (Lokesh Chakka)
>
>
>
> [CAUTION: External Email]
>
> I have one more observation here.
>
> After "$ sudo dpdk-devbind.py -b vfio-pci 83:00.0 83:00.1"
>
>
>
> Cards are shutting down. ifconfig is not showing the cards. reinsert of
> sfp is also not bringing the cards up. I had to restart the server. Not
> sure why the cards are going down.
>
>
>
> Thanks & Regards
> --
> Lokesh Chakka.
>
>
>
>
>
> On Thu, Jul 14, 2022 at 8:50 AM Varghese, Vipin 
> wrote:
>
> [AMD Official Use Only - General]
>
> Based on the compilation command shared, it looks like you are building in
> shared-library mode: ` gcc main.c -g `pkg-config --cflags libdpdk
> --libs libdpdk`
> Hence during the EAL PCIe probe, the BNXT PMD is not loaded to identify the
> NIC.
>
> Solutions for this can be:
> 1. Build with the static library: ` gcc main.c -g `pkg-config --cflags libdpdk
> --libs --static libdpdk`
> 2. Pass the bnxt PMD shared library to the EAL args: ` sudo ./a.out -l 1
> -d librte_net_bnxt.so`
>
> Can you try any of the above ?
>
> > -Original Message-
> > From: users-requ...@dpdk.org 
> > Sent: Wednesday, July 13, 2022 3:30 PM
> > To: users@dpdk.org
> > Subject: users Digest, Vol 347, Issue 6
> >
> > [CAUTION: External Email]
> >
> > Send users mailing list submissions to
> > users@dpdk.org
> >
> > To subscribe or unsubscribe via the World Wide Web, visit
> >
> >
> > https://mails.dpdk.org/listinfo/users
> > or, via email, send a message with subject or body 'help' to
> > users-requ...@dpdk.org
> >
> > You can reach the person managing the list at
> > users-ow...@dpdk.org
> >
> > When replying, please edit your Subject line so it is more specific than
> "Re:
> > Contents of users digest..."

Re: skeleton code failing (Lokesh Chakka)

2022-07-13 Thread Lokesh Chakka
I have one more observation here.
After "$ sudo dpdk-devbind.py -b vfio-pci 83:00.0 83:00.1"

The cards are shutting down. ifconfig is not showing the cards, and reinserting
the SFP is also not bringing them back up. I had to restart the server. I am not
sure why the cards are going down.

Thanks & Regards
--
Lokesh Chakka.


On Thu, Jul 14, 2022 at 8:50 AM Varghese, Vipin 
wrote:

> [AMD Official Use Only - General]
>
> Based on the compilation command shared, it looks like you are building in
> shared-library mode: ` gcc main.c -g `pkg-config --cflags libdpdk
> --libs libdpdk`
> Hence during the EAL PCIe probe, the BNXT PMD is not loaded to identify the
> NIC.
>
> Solutions for this can be:
> 1. Build with the static library: ` gcc main.c -g `pkg-config --cflags libdpdk
> --libs --static libdpdk`
> 2. Pass the bnxt PMD shared library to the EAL args: ` sudo ./a.out -l 1
> -d librte_net_bnxt.so`
>
> Can you try any of the above ?
>
> > -Original Message-
> > From: users-requ...@dpdk.org 
> > Sent: Wednesday, July 13, 2022 3:30 PM
> > To: users@dpdk.org
> > Subject: users Digest, Vol 347, Issue 6
> >
> > [CAUTION: External Email]
> >
> > Send users mailing list submissions to
> > users@dpdk.org
> >
> > To subscribe or unsubscribe via the World Wide Web, visit
> >
> >
> > https://mails.dpdk.org/listinfo/users
> > or, via email, send a message with subject or body 'help' to
> > users-requ...@dpdk.org
> >
> > You can reach the person managing the list at
> > users-ow...@dpdk.org
> >
> > When replying, please edit your Subject line so it is more specific than
> "Re:
> > Contents of users digest..."
> >
> >
> > Today's Topics:
> >
> >1. Re: skeleton code failing (Lokesh Chakka)
> >
> >
> > --
> >
> > Message: 1
> > Date: Wed, 13 Jul 2022 12:56:37 +0530
> > From: Lokesh Chakka 
> > To: David Marchand 
> > Cc: users 
> > Subject: Re: skeleton code failing
> > Message-ID:
> >  > nm_pwq59fy9qvnh+gbg...@mail.gmail.com>
> > Content-Type: text/plain; charset="utf-8"
> >
> > Dear David,
> >
> > following is some more stuff i did
> > ==
> > $ sudo dpdk-devbind.py -b vfio-pci 83:00.0 $ sudo dpdk-devbind.py -b
> vfio-pci
> > 83:00.1 $ sudo dpdk-devbind.py -b uio_pci_generic 83:00.0 83:00.1
> > Error: Driver 'uio_pci_generic' is not loaded.
> > $ sudo dpdk-devbind.py -b igb_uio 83:00.0 83:00.1
> > Error: Driver 'igb_uio' is not loaded.
> > $ sudo dpdk-devbind.py -b vfio-pci 83:00.0 83:00.1
> > Notice: :83:00.0 already bound to driver vfio-pci, skipping
> > Notice: :83:00.1 already bound to driver vfio-pci, skipping
> > ==
> > ~/Desktop/dpdk_examples/skeleton$ gcc main.c -g `pkg-config --cflags
> libdpdk
> > --libs libdpdk` lokesh@lokesh-ProLiant-DL385-
> > Gen10:~/Desktop/dpdk_examples/skeleton$ sudo ./a.out
> > EAL: Detected CPU lcores: 64
> > EAL: Detected NUMA nodes: 4
> > EAL: Detected shared linkage of DPDK
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > EAL: Selected IOVA mode 'VA'
> > EAL: No available 1048576 kB hugepages reported
> > EAL: VFIO support initialized
> > EAL: Using IOMMU type 1 (Type 1)
> > EAL: Probe PCI driver: net_bnxt (14e4:1750) device: :83:00.0 (socket
> 2)
> > EAL: Probe PCI driver: net_bnxt (14e4:1750) device: :83:00.1 (socket
> 2)
> > TELEMETRY: No legacy callbacks, legacy socket not created Port 0 MAC: bc
> 97
> > e1 ce 84 f0 Port 1 MAC: bc 97 e1 ce 84 f1
> >
> > WARNING: Too many lcores enabled. Only 1 used.
> > WARNING, port 0 is on remote NUMA node to polling thread.
> > Performance will not be optimal.
> > WARNING, port 1 is on remote NUMA node to polling thread.
> > Performance will not be optimal.
> >
> > Core 0 forwarding packets. [Ctrl+C to quit] ^C
> > ==
> > After a few seconds, I presses ctrl+c
> >

Re: skeleton code failing

2022-07-13 Thread Lokesh Chakka
Dear David,

following is some more stuff i did
==
$ sudo dpdk-devbind.py -b vfio-pci 83:00.0
$ sudo dpdk-devbind.py -b vfio-pci 83:00.1
$ sudo dpdk-devbind.py -b uio_pci_generic 83:00.0 83:00.1
Error: Driver 'uio_pci_generic' is not loaded.
$ sudo dpdk-devbind.py -b igb_uio 83:00.0 83:00.1
Error: Driver 'igb_uio' is not loaded.
$ sudo dpdk-devbind.py -b vfio-pci 83:00.0 83:00.1
Notice: :83:00.0 already bound to driver vfio-pci, skipping
Notice: :83:00.1 already bound to driver vfio-pci, skipping
==
~/Desktop/dpdk_examples/skeleton$ gcc main.c -g `pkg-config --cflags
libdpdk --libs libdpdk`
lokesh@lokesh-ProLiant-DL385-Gen10:~/Desktop/dpdk_examples/skeleton$ sudo
./a.out
EAL: Detected CPU lcores: 64
EAL: Detected NUMA nodes: 4
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 1048576 kB hugepages reported
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_bnxt (14e4:1750) device: :83:00.0 (socket 2)
EAL: Probe PCI driver: net_bnxt (14e4:1750) device: :83:00.1 (socket 2)
TELEMETRY: No legacy callbacks, legacy socket not created
Port 0 MAC: bc 97 e1 ce 84 f0
Port 1 MAC: bc 97 e1 ce 84 f1

WARNING: Too many lcores enabled. Only 1 used.
WARNING, port 0 is on remote NUMA node to polling thread.
Performance will not be optimal.
WARNING, port 1 is on remote NUMA node to polling thread.
Performance will not be optimal.

Core 0 forwarding packets. [Ctrl+C to quit]
^C
==
After a few seconds, I pressed Ctrl+C.

Surprisingly, the cards are not showing up even in ifconfig.
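For what it's worth, a hedged sketch of returning the ports to the kernel
driver instead of rebooting, so that they show up in ifconfig again; bnxt_en is
the kernel driver name that dpdk-devbind.py -s reports elsewhere in this
thread:

sudo dpdk-devbind.py -b bnxt_en 83:00.0 83:00.1
sudo dpdk-devbind.py -s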



Thanks & Regards
--
Lokesh Chakka.


On Wed, Jul 13, 2022 at 12:43 PM Lokesh Chakka <
lvenkatakumarcha...@gmail.com> wrote:

> Dear David,
>
> =
> $ lspci | grep -i broadcom
> 83:00.0 Ethernet controller: Broadcom Inc. and subsidiaries BCM57508
> NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet (rev 11)
> 83:00.1 Ethernet controller: Broadcom Inc. and subsidiaries BCM57508
> NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet (rev 11)
> $ lspci -n -s 83:00.0
> 83:00.0 0200: 14e4:1750 (rev 11)
> =
>
> I am compiling my code like this :
> =
> gcc main.c `pkg-config --cflags libdpdk --libs libdpdk`
> =
>
> Hence it is statically linked code.
> If I try
> $ dpdk-pmdinfo.py ./a.out
>
> But I am not seeing any output
>
>
>
> Thanks & Regards
> --
> Lokesh Chakka.
>
>
> On Wed, Jul 13, 2022 at 12:22 PM David Marchand 
> wrote:
>
>> On Wed, Jul 13, 2022 at 7:35 AM Lokesh Chakka
>>  wrote:
>> > Would like to understand if I am missing something. I am new to this
>> platform.
>> > rte_eth_dev_count_avail is returning zero.
>> > OS is Ubuntu 22.04. DPDK is latest version.
>> > Cards are being detected by Linux. Ifconfig is showing the cards up.
>> LED is also glowing.
>>
>> Indeed, DPDK provides a userspace driver for some NetXtreme nics
>> (which is net/bnxt).
>> This userspace driver does not rely on the bnxt Linux kernel driver.
>> IOW, this card being detected and working with the Linux kernel does
>> not automatically mean that this nic can work with DPDK.
>>
>> We need more info on your nic, first.
>>
>> Can you share the pci id of this nic (like running lspci -n -s
>> $pci_address)?
>> It should be a 14e4:.
>>
>> Then you can check this  against what your dpdk application supports.
>>
>> If it is a statically linked application, you can run:
>> $ dpdk-pmdinfo.py /path/to/your/application
>>
>> Else, if your application is dynamically linked against DPDK driver,
>> you can run this command against the net/bnxt driver .so.22 (for 21.11
>> and later releases):
>> $ dpdk-pmdinfo.py /path/to/your/dpdk/drivers/librte_net_bnxt.so.22
>>
>> You should get a list of supported NetXtreme nics, like:
>>
>> [snipped some other drivers compiled in my application]
>> PMD NAME: net_bnxt
>> PMD HW SUPPORT:
>>  Broadcom Inc. and subsidiaries (14e4) : BCM5745X NetXtreme-E RDMA
>> Virtual Function (1606) (All Subdevices)
>>  Broadcom Inc. and subsidiaries (14e4) : BCM5745X NetXtreme-E Ethernet
>> Virtual Function (1609) (All Subdevices)
>>  Broadcom Inc. and subsidiaries (14e4) : BCM57454 NetXtreme-E
>> 10Gb/25Gb/40Gb/50Gb/100Gb Ethernet (1614) (All Subdevices)
>>  Broadcom Inc. and subsidiaries (14e4) : NetXtreme-E RDMA Virtual
>> Function (16c1) (All Subdevices)
>>  Broadcom Inc. and subsidiaries (14e4) : NetXtreme-C Ethernet Virtual
>> Function (16cb) (All Subdevices)
>> [snipped the rest]
>>
>> I hope you can find a () corresponding to your NIC.
>>
>>
>> --
>> David Marchand
>>
>>


Re: skeleton code failing

2022-07-13 Thread Lokesh Chakka
Dear David,

=
$ lspci | grep -i broadcom
83:00.0 Ethernet controller: Broadcom Inc. and subsidiaries BCM57508
NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet (rev 11)
83:00.1 Ethernet controller: Broadcom Inc. and subsidiaries BCM57508
NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb Ethernet (rev 11)
$ lspci -n -s 83:00.0
83:00.0 0200: 14e4:1750 (rev 11)
=

I am compiling my code like this :
=
gcc main.c `pkg-config --cflags libdpdk --libs libdpdk`
=

Hence it is statically linked code.
If I try
$ dpdk-pmdinfo.py ./a.out

I am not seeing any output, though.
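Since that gcc line has no --static, pkg-config emits shared-library flags, and
the EAL output elsewhere in this thread reports "Detected shared linkage of
DPDK", so the binary is dynamically linked; in that case pmdinfo has to be
pointed at the driver .so itself, as David suggests below. A hedged sketch, the
library path being an assumption for a packaged Ubuntu install:

dpdk-pmdinfo.py /usr/lib/x86_64-linux-gnu/librte_net_bnxt.so.22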



Thanks & Regards
--
Lokesh Chakka.


On Wed, Jul 13, 2022 at 12:22 PM David Marchand 
wrote:

> On Wed, Jul 13, 2022 at 7:35 AM Lokesh Chakka
>  wrote:
> > Would like to understand if I am missing something. I am new to this
> platform.
> > rte_eth_dev_count_avail is returning zero.
> > OS is Ubuntu 22.04. DPDK is latest version.
> > Cards are being detected by Linux. Ifconfig is showing the cards up. LED
> is also glowing.
>
> Indeed, DPDK provides a userspace driver for some NetXtreme nics
> (which is net/bnxt).
> This userspace driver does not rely on the bnxt Linux kernel driver.
> IOW, this card being detected and working with the Linux kernel does
> not automatically mean that this nic can work with DPDK.
>
> We need more info on your nic, first.
>
> Can you share the pci id of this nic (like running lspci -n -s
> $pci_address)?
> It should be a 14e4:.
>
> Then you can check this  against what your dpdk application supports.
>
> If it is a statically linked application, you can run:
> $ dpdk-pmdinfo.py /path/to/your/application
>
> Else, if your application is dynamically linked against DPDK driver,
> you can run this command against the net/bnxt driver .so.22 (for 21.11
> and later releases):
> $ dpdk-pmdinfo.py /path/to/your/dpdk/drivers/librte_net_bnxt.so.22
>
> You should get a list of supported NetXtreme nics, like:
>
> [snipped some other drivers compiled in my application]
> PMD NAME: net_bnxt
> PMD HW SUPPORT:
>  Broadcom Inc. and subsidiaries (14e4) : BCM5745X NetXtreme-E RDMA
> Virtual Function (1606) (All Subdevices)
>  Broadcom Inc. and subsidiaries (14e4) : BCM5745X NetXtreme-E Ethernet
> Virtual Function (1609) (All Subdevices)
>  Broadcom Inc. and subsidiaries (14e4) : BCM57454 NetXtreme-E
> 10Gb/25Gb/40Gb/50Gb/100Gb Ethernet (1614) (All Subdevices)
>  Broadcom Inc. and subsidiaries (14e4) : NetXtreme-E RDMA Virtual
> Function (16c1) (All Subdevices)
>  Broadcom Inc. and subsidiaries (14e4) : NetXtreme-C Ethernet Virtual
> Function (16cb) (All Subdevices)
> [snipped the rest]
>
> I hope you can find a () corresponding to your NIC.
>
>
> --
> David Marchand
>
>


Re: skeleton code failing

2022-07-12 Thread Lokesh Chakka
hello friends,

Is anyone else facing the same issue?
I would like to understand if I am missing something; I am new to this
platform.
rte_eth_dev_count_avail is returning zero.
The OS is Ubuntu 22.04 and DPDK is the latest version.
The cards are detected by Linux, ifconfig shows them as up, and the LED is
also glowing.

Thanks & Regards
--
Lokesh Chakka.


On Mon, Jul 11, 2022 at 11:29 AM Lokesh Chakka <
lvenkatakumarcha...@gmail.com> wrote:

> Hello,
>
> I am learning DPDK.
>
> I am using the following network cards that support dpdk.
>
> https://www.broadcom.com/products/ethernet-connectivity/network-adapters/100gb-nic-ocp/p2100g
>
> I am seeing rte_eth_dev_count_avail is failing. code is as follows:
>
> 
> nb_ports = rte_eth_dev_count_avail();
> if( nb_ports < 2 || ( nb_ports & 1 ) )
> rte_exit( EXIT_FAILURE, "Error: %u number of ports must be
> even\n", nb_ports );
> 
>
> I have one card and just because it is for learning purposes, I have
> looped back both the slots of the same cards so that I can send on one slot
> and receive on another slot.
>
> Can someone please help me how to fix the issue.
>
> Device Driver is bnxt_en
> Platform is Ubuntu 22.04
>
> Please let me know if any more information is required.
>
> Thanks & Regards
> --
> Lokesh Chakka.
>


skeleton code failing

2022-07-10 Thread Lokesh Chakka
Hello,

I am learning DPDK.

I am using the following network card, which supports DPDK:
https://www.broadcom.com/products/ethernet-connectivity/network-adapters/100gb-nic-ocp/p2100g

I am seeing that rte_eth_dev_count_avail is failing. The code is as follows:


nb_ports = rte_eth_dev_count_avail();
if( nb_ports < 2 || ( nb_ports & 1 ) )
    rte_exit( EXIT_FAILURE, "Error: %u number of ports must be even\n", nb_ports );


I have one card, and since this is just for learning purposes, I have looped
back the two ports of the same card so that I can send on one port and
receive on the other.

Can someone please help me fix this issue?

Device Driver is bnxt_en
Platform is Ubuntu 22.04

Please let me know if any more information is required.

Thanks & Regards
--
Lokesh Chakka.