> On Mar 6, 2017, at 11:00 AM, Philip Lee <[email protected]> wrote:
> 
> Hi Keith,
> 
> Do you have any insights into which driver you think may be problematic?
> 
> I haven't really gone anywhere after redoing my original install steps
> for DPDK and Pktgen.
> The only main difference I can think of is that I installed the
> Netronome board support package from a prepackaged .deb file onto this
> system.
> 
> Sorry for all the hassle. I'm completely new to Netronome, DPDK, and Pktgen.

I do not think it is a problem with the Netronome BSP unless it also installed 
the PMD driver for DPDK. I suspect the PMD driver in DPDK is not setting the 
max queue counts to at least 1, and Pktgen just happens to check the return 
code from rte_eth_dev_configure(). I have not used the Netronome card, as I do 
not have one of those NICs.

You made me look at the PMD, and it does set the rx/tx queue counts, so it 
seems OK. One thing you can try is adding --log-level=9 to the DPDK command 
line to print more information; the log messages should include how many 
rx_queues the device has.

Other than that, I think you will need to contact the Netronome PMD 
maintainer.

Netronome nfp
M: Alejandro Lucero <[email protected]>
F: drivers/net/nfp/
F: doc/guides/nics/nfp.rst

> 
> Thanks,
> 
> Philip
> 
> On Mon, Mar 6, 2017 at 11:22 AM, Wiles, Keith <[email protected]> wrote:
>> 
>>> On Mar 6, 2017, at 10:17 AM, Philip Lee <[email protected]> wrote:
>>> 
>>> Hi Keith,
>>> 
>>> Also, how do I get a list of devices that DPDK detects to find the ports to 
>>> blacklist?
>>> 
>>> I tried blacklisting just the other virtual functions of the Netronome NIC, 
>>> but the results are the same. I also tried unbinding the igb_uio drivers 
>>> from all but the virtual function I'm using. If I use the whitelist (-w), 
>>> does it force it to look at that pci device only? I tried that and it 
>>> provided the same results as well.
>> 
>> You can do one of two things: only bind the ports you want to use, or 
>> blacklist all of the bound ports that you do not want DPDK to see.
>> 
>> To see all of the Ethernet ports in the system, run 'lspci | grep Ethernet'.
>> 
>> Then you need to figure out how the PCI ID maps to the physical port you 
>> want to use. (Not normally an easy task; either read the hardware spec for 
>> the motherboard or just do some experiments.)
>> 
>>> 
>>> 
>>> Also, running pktgen on the working node gives this output with 
>>> max_rx_queues and max_tx_queues having values of 1, so it seems like it's a 
>>> problem with the system setup on this broken node.
>>> ** Default Info (5:8.0, if_index:0) **
>>>   max_vfs        :   0, min_rx_bufsize    :  68, max_rx_pktlen :  9216
>>>   max_rx_queues  :   1, max_tx_queues     :   1
>> 
>> I think this is a driver problem, as it should report at least one queue in 
>> each direction.
>>> 
>>> 
>>> Thanks,
>>> 
>>> Philip Lee
>>> 
>>> 
>>> On Mon, Mar 6, 2017 at 10:14 AM, Wiles, Keith <[email protected]> wrote:
>>> 
>>>> On Mar 5, 2017, at 8:03 PM, Philip Lee <[email protected]> wrote:
>>>> 
>>>> Hello all,
>>>> 
>>>> I had a "working" install of pktgen that would transfer data but not
>>>> provide statistics. The setup is two Netronome NICs connected
>>>> together. It was suggested there was a problem with the Netronome PMD,
>>>> so I reinstalled both the Netronome BSP and DPDK. Now I'm getting the
>>>> following error when trying to start pktgen with: ./pktgen -c 0x1f
>>>> -n 1 -- -m [1:2].0
>>>> 
>>>>>>> Packet Burst 32, RX Desc 512, TX Desc 1024, mbufs/port 8192, mbuf cache 
>>>>>>> 1024
>>>> === port to lcore mapping table (# lcores 5) ===
>>>>  lcore:     0     1     2     3     4
>>>> port   0:  D: T  1: 0  0: 1  0: 0  0: 0 =  1: 1
>>>> Total   :  0: 0  1: 0  0: 1  0: 0  0: 0
>>>>   Display and Timer on lcore 0, rx:tx counts per port/lcore
>>>> 
>>>> Configuring 4 ports, MBUF Size 1920, MBUF Cache Size 1024
>>>> Lcore:
>>>>   1, RX-Only
>>>>               RX( 1): ( 0: 0)
>>>>   2, TX-Only
>>>>               TX( 1): ( 0: 0)
>>>> Port :
>>>>   0, nb_lcores  2, private 0x8cca90, lcores:  1  2
>>>> 
>>>> ** Default Info (5:8.0, if_index:0) **
>>>>  max_vfs        :   0, min_rx_bufsize    :  68, max_rx_pktlen :     0
>>>>  max_rx_queues  :   0, max_tx_queues     :   0
>>>>  max_mac_addrs  :   1, max_hash_mac_addrs:   0, max_vmdq_pools:     0
>>>>  rx_offload_capa:   0, tx_offload_capa   :   0, reta_size     :
>>>> 128, flow_type_rss_offloads:0000000000000000
>>>>  vmdq_queue_base:   0, vmdq_queue_num    :   0, vmdq_pool_base:     0
>>>> ** RX Conf **
>>>>  pthresh        :   8, hthresh          :   8, wthresh        :     0
>>>>  Free Thresh    :  32, Drop Enable      :   0, Deferred Start :     0
>>>> ** TX Conf **
>>>>  pthresh        :  32, hthresh          :   0, wthresh        :     0
>>>>  Free Thresh    :  32, RS Thresh        :  32, Deferred Start :
>>>> 0, TXQ Flags:00000f01
>>>> 
>>>> !PANIC!: Cannot configure device: port=0, Num queues 1,1 (2)Invalid 
>>>> argument
>>>> PANIC in pktgen_config_ports():
>>>> Cannot configure device: port=0, Num queues 1,1 (2)Invalid argument6:
>>>> [./pktgen() [0x43394e]]
>>>> 5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) 
>>>> [0x7f89dd0f7f45]]
>>>> 4: [./pktgen(main+0x4d4) [0x432f54]]
>>>> 3: [./pktgen(pktgen_config_ports+0x3108) [0x45f418]]
>>>> 2: [./pktgen(__rte_panic+0xbe) [0x42f288]]
>>>> 1: [./pktgen(rte_dump_stack+0x1a) [0x49af3a]]
>>>> Aborted
>>>> 
>>>> ------------------------------------------------------------------------------------------------------------------------
>>>> 
>>>> I tried unbinding the NICs and rebinding them. I read in an older mailing
>>>> list post that setup.sh needs to be run every reboot. I executed it, and it
>>>> looks like it covers the same install steps for pktgen that I had already
>>>> done manually after the most recent reboot. The output of the status check
>>>> script is below:
>>>> ./dpdk-devbind.py --status
>>>> 
>>>> Network devices using DPDK-compatible driver
>>>> ============================================
>>>> 0000:05:08.0 'Device 6003' drv=igb_uio unused=
>>>> 0000:05:08.1 'Device 6003' drv=igb_uio unused=
>>>> 0000:05:08.2 'Device 6003' drv=igb_uio unused=
>>>> 0000:05:08.3 'Device 6003' drv=igb_uio unused=
>>>> 
>>>> Network devices using kernel driver
>>>> ===================================
>>>> 0000:01:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=eth0 drv=tg3
>>>> unused=igb_uio *Active*
>>>> 0000:01:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=eth1 drv=tg3
>>>> unused=igb_uio
>>>> 0000:02:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=eth2 drv=tg3
>>>> unused=igb_uio
>>>> 0000:02:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=eth3 drv=tg3
>>>> unused=igb_uio
>>>> 0000:05:00.0 'Device 4000' if= drv=nfp unused=igb_uio
>>>> 0000:43:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=eth4
>>>> drv=ixgbe unused=igb_uio
>>>> 0000:43:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=eth7
>>>> drv=ixgbe unused=igb_uio
>>>> 0000:44:00.0 'MT27500 Family [ConnectX-3]' if=eth5,eth6 drv=mlx4_core
>>>> unused=igb_uio
>>>> 
>>>> Does anyone have any suggestions?
>>> 
>>> Try blacklisting (-b 0000:01:00.1 -b ...) all of the ports you are not 
>>> using. The number of ports being set up is taken from the number of devices 
>>> DPDK detects.
>>> 
>>> The only thing I am worried about is that 'max_rx_queues : 0, 
>>> max_tx_queues : 0' is reporting zero queues. It may be that other example 
>>> code does not test the return code from the rte_eth_dev_configure() call. I 
>>> think max_rx_queues and max_tx_queues should each be at least 1.
>>> 
>>>> 
>>>> Thanks,
>>>> 
>>>> Philip Lee
>>> 
>>> Regards,
>>> Keith
>>> 
>>> 
>> 
>> Regards,
>> Keith
>> 

Regards,
Keith
