https://bugs.linaro.org/show_bug.cgi?id=3657
Bug ID: 3657
Summary: PktIO does not work with Mellanox Interfaces
Product: OpenDataPlane - linux-generic reference
Version: master
Hardware: All
OS: Linux
Status: UNCONFIRMED
Severity: major
Priority: High
Component: Packet IO
Assignee: maxim.uva...@linaro.org
Reporter: bill.fischo...@linaro.org
CC: lng-odp@lists.linaro.org
Target Milestone: ---

Submitted by P Gyanesh Kumar Patra

Hi,
I am coming back with the same issue here. Now that odp-dpdk has been updated to the recent DPDK version in line with the ODP code base, both the ODP and odp-dpdk repos stop working with Mellanox interfaces. Please let me know if this is a known issue. I tried the master branch and the caterpillar branch, with and without 'abi-compat'. The ODP and odp-dpdk compilation processes are kept identical. DPDK is compiled with the mlx flags and works properly. Here are the configuration details and the errors I get while trying to run the ./test/performance/odp_l2fwd use case with the ODP GitHub code base (master branch, carrying the v1.18.0.0 tag).
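As a sanity check before digging into the ODP side, it may be worth confirming that the mlx5 PMD really made it into the DPDK 17.11 build and that the Mellanox user-space libraries are resolvable. This is a hedged sketch, not from the original report; the DPDK build path matches the one used below, adjust to your tree:

```shell
# Confirm the mlx5 PMD was enabled in the DPDK 17.11 build config
# (CONFIG_RTE_LIBRTE_MLX5_PMD must be set to 'y'; path is an assumption
# based on the --with-dpdk-path used in this report).
grep CONFIG_RTE_LIBRTE_MLX5_PMD \
    /home/gyanesh/dpdk-17.11/x86_64-native-linuxapp-gcc/.config

# Confirm libibverbs/libmlx5 are visible to the dynamic linker, since the
# ODP configure line below passes LDFLAGS="-libverbs -lmlx5".
ldconfig -p | grep -E 'libibverbs|libmlx5'
```

If the first grep does not print `CONFIG_RTE_LIBRTE_MLX5_PMD=y`, the PMD was silently skipped at DPDK build time (typically because the Mellanox OFED/rdma-core headers were missing), and ODP would fail in exactly this fashion.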
<<system details>>
ubuntu@ubuntu:~# uname -a
Linux ubuntu 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

ubuntu@ubuntu:/home/gyanesh/dpdk-17.11# ./usertools/dpdk-devbind.py -s

Network devices using DPDK-compatible driver
============================================
<none>

Network devices using kernel driver
===================================
0000:81:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp129s0f0 drv=mlx5_core unused=
0000:81:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp129s0f1 drv=mlx5_core unused=

<<ODP version and compilation>>
ubuntu@ubuntu:/home/gyanesh/odp# git branch -a
  caterpillar
* master
  remotes/origin/HEAD -> origin/master
  remotes/origin/api-next
  remotes/origin/caterpillar
ubuntu@ubuntu:/home/gyanesh/odp# ./bootstrap
ubuntu@ubuntu:/home/gyanesh/odp# ./configure --prefix=/home/gyanesh/odp/build --with-dpdk-path=/home/gyanesh/dpdk-17.11/x86_64-native-linuxapp-gcc/ --enable-dpdk-zero-copy --disable-abi-compat --enable-debug-print --enable-debug --enable-helper-linux --disable-doxygen-doc --enable-helper-debug-print --enable-test-example --enable-test-helper --enable-test-perf --enable-test-vald LDFLAGS="-libverbs -lmlx5"
ubuntu@ubuntu:/home/gyanesh/odp# make
ubuntu@ubuntu:/home/gyanesh/odp# cd test/performance

<< performance testcase >>
ubuntu@ubuntu:/home/gyanesh/odp/test/performance# export ODP_PLATFORM_PARAMS="-n 4 -m 15240"
ubuntu@ubuntu:/home/gyanesh/odp/test/performance# ./odp_l2fwd -i 0,1 -c 2
HW time counter freq: 2400000786 hz
odp_system_info.c:100:default_huge_page_size():defaut hp size is 1048576 kB
odp_system_info.c:100:default_huge_page_size():defaut hp size is 1048576 kB
_ishm.c:1468:_odp_ishm_init_global():ishm: using dir /dev/shm
_ishm.c:1484:_odp_ishm_init_global():Huge pages mount point is: /dev/hugepages
_ishmphy.c:65:_odp_ishmphy_book_va():VA Reserved: 0x7f07dd70b000, len=0x60000000
_fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825 key=0, fd=3
_fdserver.c:468:handle_request():storing {ctx=1, key=0}->fd=5
_fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825 key=1, fd=4
_fdserver.c:468:handle_request():storing {ctx=1, key=1}->fd=6
odp_pool.c:108:odp_pool_init_global(): Pool init global
odp_pool.c:109:odp_pool_init_global():  odp_buffer_hdr_t size 256
odp_pool.c:110:odp_pool_init_global():  odp_packet_hdr_t size 576
odp_pool.c:111:odp_pool_init_global():
odp_queue_basic.c:78:queue_init_global():Queue init ...
_fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825 key=2, fd=5
_fdserver.c:468:handle_request():storing {ctx=1, key=2}->fd=7
odp_queue_lf.c:272:queue_lf_init_global(): Lock-free queue init
odp_queue_lf.c:273:queue_lf_init_global():  u128 lock-free: 1
_fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825 key=3, fd=6
_fdserver.c:468:handle_request():storing {ctx=1, key=3}->fd=8
odp_queue_basic.c:103:queue_init_global():done
odp_queue_basic.c:104:queue_init_global():Queue init global
odp_queue_basic.c:106:queue_init_global():  struct queue_entry_s size 256
odp_queue_basic.c:108:queue_init_global():  queue_entry_t size 256
odp_queue_basic.c:109:queue_init_global():  Using scheduler 'basic'
odp_schedule_basic.c:305:schedule_init_global():Schedule init ...
_fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825 key=4, fd=7
_fdserver.c:468:handle_request():storing {ctx=1, key=4}->fd=9
odp_schedule_basic.c:365:schedule_init_global():done
_fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825 key=5, fd=8
_fdserver.c:468:handle_request():storing {ctx=1, key=5}->fd=10
PKTIO: initialized loop interface.
PKTIO: initialized dpdk pktio, use export ODP_PKTIO_DISABLE_DPDK=1 to disable.
PKTIO: initialized pcap interface.
PKTIO: initialized ipc interface.
PKTIO: initialized null interface.
PKTIO: initialized socket mmap, use export ODP_PKTIO_DISABLE_SOCKET_MMAP=1 to disable.
PKTIO: initialized socket mmsg,use export ODP_PKTIO_DISABLE_SOCKET_MMSG=1 to disable.
odp_timer.c:1253:odp_timer_init_global():Using lock-less timer implementation
_fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825 key=6, fd=9
_fdserver.c:468:handle_request():storing {ctx=1, key=6}->fd=11
_fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825 key=7, fd=10
_fdserver.c:468:handle_request():storing {ctx=1, key=7}->fd=12
_fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825 key=8, fd=11
_fdserver.c:468:handle_request():storing {ctx=1, key=8}->fd=13
_fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825 key=9, fd=12
_fdserver.c:468:handle_request():storing {ctx=1, key=9}->fd=14
_fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825 key=10, fd=13
_fdserver.c:468:handle_request():storing {ctx=1, key=10}->fd=15
_fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825 key=11, fd=14
_fdserver.c:468:handle_request():storing {ctx=1, key=11}->fd=16
_fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825 key=12, fd=15
_fdserver.c:468:handle_request():storing {ctx=1, key=12}->fd=17
_fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825 key=13, fd=16
_fdserver.c:468:handle_request():storing {ctx=1, key=13}->fd=18

ODP system info
---------------
ODP API version: 1.18.0
ODP impl name:   "odp-linux"
CPU model:       Intel(R) Xeon(R) CPU E5-2680 v4
CPU freq (hz):   3300000000
Cache line size: 64
CPU count:       56

CPU features supported:
SSE3 PCLMULQDQ DTES64 MONITOR DS_CPL VMX SMX EIST TM2 SSSE3 FMA CMPXCHG16B XTPR PDCM PCID DCA SSE4_1 SSE4_2 X2APIC MOVBE POPCNT TSC_DEADLINE AES XSAVE OSXSAVE AVX F16C RDRAND FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE DIGTEMP TRBOBST ARAT PLN ECMD PTM MPERF_APERF_MSR ENERGY_EFF FSGSBASE HLE AVX2 BMI2 ERMS INVPCID RTM LAHF_SAHF SYSCALL XD
1GB_PG RDTSCP EM64T INVTSC

CPU features NOT supported:
CNXT_ID PSN ACNT2 BMI1 SMEP AVX512F LZCNT

Running ODP appl: "odp_l2fwd"
-----------------
IF-count:           2
Using IFs:          0 1
Mode:               PKTIN_DIRECT, PKTOUT_DIRECT
num worker threads: 2
first CPU:          54
cpu mask:           0xC0000000000000
_fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825 key=14, fd=17
_fdserver.c:468:handle_request():storing {ctx=1, key=14}->fd=19
_fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825 key=15, fd=18
_fdserver.c:468:handle_request():storing {ctx=1, key=15}->fd=20

Pool info
---------
  pool            1
  name            packet pool
  pool type       packet
  pool shm        16
  user area shm   0
  num             16384
  align           64
  headroom        128
  seg len         8064
  max data len    65536
  tailroom        0
  block size      8832
  uarea size      0
  shm size        145321728
  base addr       0x7f0440000000
  uarea shm size  0
  uarea base addr (nil)

pktio/dpdk.c:1145:dpdk_pktio_init():arg[0]: odpdpdk
pktio/dpdk.c:1145:dpdk_pktio_init():arg[1]: -c
pktio/dpdk.c:1145:dpdk_pktio_init():arg[2]: 0x1
pktio/dpdk.c:1145:dpdk_pktio_init():arg[3]: -m
pktio/dpdk.c:1145:dpdk_pktio_init():arg[4]: 512
EAL: Detected 56 lcore(s)
EAL: Probing VFIO support...
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1528 net_ixgbe
EAL: PCI device 0000:05:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1528 net_ixgbe
EAL: PCI device 0000:81:00.0 on NUMA socket 1
EAL:   probe driver: 15b3:1013 net_mlx5
PMD: net_mlx5: PCI information matches, using device "mlx5_0" (SR-IOV: false)
PMD: net_mlx5: 1 port(s) detected
PMD: net_mlx5: MPS is disabled
PMD: net_mlx5: port 1 MAC address is 7c:fe:90:31:0d:3a
EAL: PCI device 0000:81:00.1 on NUMA socket 1
EAL:   probe driver: 15b3:1013 net_mlx5
PMD: net_mlx5: PCI information matches, using device "mlx5_1" (SR-IOV: false)
PMD: net_mlx5: 1 port(s) detected
PMD: net_mlx5: MPS is disabled
PMD: net_mlx5: port 1 MAC address is 7c:fe:90:31:0d:3b
pktio/dpdk.c:1156:dpdk_pktio_init():rte_eal_init OK

DPDK interface (net_mlx5): 0
  num_rx_desc: 128
  num_tx_desc: 512
  rx_drop_en:  0
odp_packet_io.c:240:setup_pktio_entry():0 uses dpdk
created pktio 1, dev: 0, drv: dpdk
created 1 input and 1 output queues on (0)

DPDK interface (net_mlx5): 1
  num_rx_desc: 128
  num_tx_desc: 512
  rx_drop_en:  0
odp_packet_io.c:240:setup_pktio_entry():1 uses dpdk
created pktio 2, dev: 1, drv: dpdk
created 1 input and 1 output queues on (1)

Queue binding (indexes)
-----------------------
worker 0
  rx: pktio 0, queue 0
  tx: pktio 1, queue 0
worker 1
  rx: pktio 1, queue 0
  tx: pktio 0, queue 0

Port config
--------------------
Port 0 (0)
  rx workers 1
  tx workers 1
  rx queues  1
  tx queues  1
Port 1 (1)
  rx workers 1
  tx workers 1
  rx queues  1
  tx queues  1

threads.c:54:_odph_thread_run_start_routine():helper: ODP worker thread started as linux pthread. (pid=43825)
[01] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
PMD: net_mlx5: 0xb51c80: unable to allocate queue index 0
threads.c:54:_odph_thread_run_start_routine():helper: ODP worker thread started as linux pthread.
(pid=43825)
[02] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
pktio/dpdk.c:1538:dpdk_start():Queue setup failed: err=-12, port=0
odp_l2fwd.c:1671:main():Error: unable to start 0
ubuntu@ubuntu:/home/gyanesh/odp/test/performance#

If any other logs or details are required, I will gladly provide them here to resolve this issue.

Thanks
P Gyanesh Kumar Patra

On Fri, Nov 10, 2017 at 6:13 AM, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:

--
You are receiving this mail because:
You are on the CC list for the bug.
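For anyone triaging this: errno 12 is ENOMEM, and DPDK setup calls return negative errno values, so `err=-12` from the queue setup means the PMD could not obtain the memory it needed (consistent with the preceding "unable to allocate queue index 0" message from net_mlx5). A couple of hedged, generic Linux checks (not part of the original report) that are worth running before retrying:

```shell
# err=-12 is -ENOMEM. Check how many huge pages are actually free at run
# time; ODP_PLATFORM_PARAMS above requests memory via "-n 4 -m 15240".
grep -i hugepages /proc/meminfo

# List the huge page pools configured per page size (1 GB pages are the
# default size in the log above).
ls /sys/kernel/mm/hugepages/
```

Note that the mlx5 PMD, unlike most PMDs, allocates its queue resources through libibverbs rather than from DPDK hugepage memory, so locked-memory limits (`ulimit -l`) are also worth checking alongside the hugepage pools.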