Thanks. I've opened Bug https://bugs.linaro.org/show_bug.cgi?id=3657
to track this.

On Tue, Mar 6, 2018 at 1:36 PM, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> Hi,
> I am coming back with the same issue here. As the odp-dpdk is updated to
> the recent DPDK version inline with the ODP code base, now both ODP repo
> and odp-dpdk repos stops working with mellanox interfaces. Please let me
> know if this is a known issue. I tried to run with master branch,
> caterpillar branch with and without 'abi-compat'. The ODP and odp-dpdk code
> compilation process is kept equal. The dpdk is compiled with mlx flags and
> working properly.
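>
> Roughly, the mlx5 PMD was enabled in the DPDK tree with something like this
> sketch (stock dpdk-17.11 tree assumed; the exact commands may have
> differed):
>
>     # enable the Mellanox mlx5 PMD, then build DPDK as usual
>     sed -i 's/CONFIG_RTE_LIBRTE_MLX5_PMD=n/CONFIG_RTE_LIBRTE_MLX5_PMD=y/' \
>         config/common_base
>     make install T=x86_64-native-linuxapp-gcc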
>
> Here are the configuration details and the errors I get while trying to run
> the ./test/performance/odp_l2fwd use case with the ODP GitHub code base
> (master branch, which has the v1.18.0.0 tag).
>
> <<system details>>
> ubuntu@ubuntu:~# uname -a
> Linux ubuntu 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018
> x86_64 x86_64 x86_64 GNU/Linux
>
> ubuntu@ubuntu:/home/gyanesh/dpdk-17.11# ./usertools/dpdk-devbind.py -s
>
> Network devices using DPDK-compatible driver
> ============================================
> <none>
>
> Network devices using kernel driver
> ===================================
> 0000:81:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp129s0f0 drv=mlx5_core
> unused=
> 0000:81:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp129s0f1 drv=mlx5_core
> unused=
>
> <<ODP version and compilation>>
> ubuntu@ubuntu:/home/gyanesh/odp# git branch -a
>   caterpillar
> * master
>   remotes/origin/HEAD -> origin/master
>   remotes/origin/api-next
>   remotes/origin/caterpillar
>
> ubuntu@ubuntu:/home/gyanesh/odp# ./bootstrap
> ubuntu@ubuntu:/home/gyanesh/odp# ./configure
> --prefix=/home/gyanesh/odp/build
> --with-dpdk-path=/home/gyanesh/dpdk-17.11/x86_64-native-linuxapp-gcc/
> --enable-dpdk-zero-copy --disable-abi-compat --enable-debug-print
> --enable-debug --enable-helper-linux --disable-doxygen-doc
> --enable-helper-debug-print --enable-test-example --enable-test-helper
> --enable-test-perf --enable-test-vald LDFLAGS="-libverbs -lmlx5"
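>
> (A quick way to double-check that the mlx5 PMD was really enabled in the
> DPDK build referenced above:
>
>     grep MLX5 /home/gyanesh/dpdk-17.11/x86_64-native-linuxapp-gcc/.config
>
> should report CONFIG_RTE_LIBRTE_MLX5_PMD=y.)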
>
> ubuntu@ubuntu:/home/gyanesh/odp# make
> ubuntu@ubuntu:/home/gyanesh/odp# cd test/performance
>
> << performance testcase >>
> ubuntu@ubuntu:/home/gyanesh/odp/test/performance# export
> ODP_PLATFORM_PARAMS="-n 4 -m 15240"
> ubuntu@ubuntu:/home/gyanesh/odp/test/performance# ./odp_l2fwd -i 0,1 -c 2
> HW time counter freq: 2400000786 hz
>
> odp_system_info.c:100:default_huge_page_size():defaut hp size is 1048576 kB
> odp_system_info.c:100:default_huge_page_size():defaut hp size is 1048576 kB
> _ishm.c:1468:_odp_ishm_init_global():ishm: using dir /dev/shm
> _ishm.c:1484:_odp_ishm_init_global():Huge pages mount point is:
> /dev/hugepages
> _ishmphy.c:65:_odp_ishmphy_book_va():VA Reserved: 0x7f07dd70b000,
> len=0x60000000
> _fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825
> key=0, fd=3
> _fdserver.c:468:handle_request():storing {ctx=1, key=0}->fd=5
> _fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825
> key=1, fd=4
> _fdserver.c:468:handle_request():storing {ctx=1, key=1}->fd=6
> odp_pool.c:108:odp_pool_init_global():
> Pool init global
> odp_pool.c:109:odp_pool_init_global():  odp_buffer_hdr_t size 256
> odp_pool.c:110:odp_pool_init_global():  odp_packet_hdr_t size 576
> odp_pool.c:111:odp_pool_init_global():
> odp_queue_basic.c:78:queue_init_global():Queue init ...
> _fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825
> key=2, fd=5
> _fdserver.c:468:handle_request():storing {ctx=1, key=2}->fd=7
> odp_queue_lf.c:272:queue_lf_init_global():
> Lock-free queue init
> odp_queue_lf.c:273:queue_lf_init_global():  u128 lock-free: 1
>
> _fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825
> key=3, fd=6
> _fdserver.c:468:handle_request():storing {ctx=1, key=3}->fd=8
> odp_queue_basic.c:103:queue_init_global():done
> odp_queue_basic.c:104:queue_init_global():Queue init global
> odp_queue_basic.c:106:queue_init_global():  struct queue_entry_s size 256
> odp_queue_basic.c:108:queue_init_global():  queue_entry_t size        256
> odp_queue_basic.c:109:queue_init_global():
> Using scheduler 'basic'
> odp_schedule_basic.c:305:schedule_init_global():Schedule init ...
> _fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825
> key=4, fd=7
> _fdserver.c:468:handle_request():storing {ctx=1, key=4}->fd=9
> odp_schedule_basic.c:365:schedule_init_global():done
> _fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825
> key=5, fd=8
> _fdserver.c:468:handle_request():storing {ctx=1, key=5}->fd=10
> PKTIO: initialized loop interface.
> PKTIO: initialized dpdk pktio, use export ODP_PKTIO_DISABLE_DPDK=1 to
> disable.
> PKTIO: initialized pcap interface.
> PKTIO: initialized ipc interface.
> PKTIO: initialized null interface.
> PKTIO: initialized socket mmap, use export ODP_PKTIO_DISABLE_SOCKET_MMAP=1
> to disable.
> PKTIO: initialized socket mmsg,use export ODP_PKTIO_DISABLE_SOCKET_MMSG=1
> to disable.
> odp_timer.c:1253:odp_timer_init_global():Using lock-less timer
> implementation
> _fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825
> key=6, fd=9
> _fdserver.c:468:handle_request():storing {ctx=1, key=6}->fd=11
> _fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825
> key=7, fd=10
> _fdserver.c:468:handle_request():storing {ctx=1, key=7}->fd=12
> _fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825
> key=8, fd=11
> _fdserver.c:468:handle_request():storing {ctx=1, key=8}->fd=13
> _fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825
> key=9, fd=12
> _fdserver.c:468:handle_request():storing {ctx=1, key=9}->fd=14
> _fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825
> key=10, fd=13
> _fdserver.c:468:handle_request():storing {ctx=1, key=10}->fd=15
> _fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825
> key=11, fd=14
> _fdserver.c:468:handle_request():storing {ctx=1, key=11}->fd=16
> _fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825
> key=12, fd=15
> _fdserver.c:468:handle_request():storing {ctx=1, key=12}->fd=17
> _fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825
> key=13, fd=16
> _fdserver.c:468:handle_request():storing {ctx=1, key=13}->fd=18
>
> ODP system info
> ---------------
> ODP API version: 1.18.0
> ODP impl name:   "odp-linux"
> CPU model:       Intel(R) Xeon(R) CPU E5-2680 v4
> CPU freq (hz):   3300000000
> Cache line size: 64
> CPU count:       56
>
>
> CPU features supported:
> SSE3 PCLMULQDQ DTES64 MONITOR DS_CPL VMX SMX EIST TM2 SSSE3 FMA CMPXCHG16B
> XTPR PDCM PCID DCA SSE4_1 SSE4_2 X2APIC MOVBE POPCNT TSC_DEADLINE AES XSAVE
> OSXSAVE AVX F16C RDRAND FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR
> PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE
> DIGTEMP TRBOBST ARAT PLN ECMD PTM MPERF_APERF_MSR ENERGY_EFF FSGSBASE HLE
> AVX2 BMI2 ERMS INVPCID RTM LAHF_SAHF SYSCALL XD 1GB_PG RDTSCP EM64T INVTSC
>
> CPU features NOT supported:
> CNXT_ID PSN ACNT2 BMI1 SMEP AVX512F LZCNT
>
> Running ODP appl: "odp_l2fwd"
> -----------------
> IF-count:        2
> Using IFs:       0 1
> Mode:            PKTIN_DIRECT, PKTOUT_DIRECT
>
> num worker threads: 2
> first CPU:          54
> cpu mask:           0xC0000000000000
>
> _fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825
> key=14, fd=17
> _fdserver.c:468:handle_request():storing {ctx=1, key=14}->fd=19
> _fdserver.c:284:_odp_fdserver_register_fd():FD client register: pid=43825
> key=15, fd=18
> _fdserver.c:468:handle_request():storing {ctx=1, key=15}->fd=20
>
> Pool info
> ---------
>   pool            1
>   name            packet pool
>   pool type       packet
>   pool shm        16
>   user area shm   0
>   num             16384
>   align           64
>   headroom        128
>   seg len         8064
>   max data len    65536
>   tailroom        0
>   block size      8832
>   uarea size      0
>   shm size        145321728
>   base addr       0x7f0440000000
>   uarea shm size  0
>   uarea base addr (nil)
>
> pktio/dpdk.c:1145:dpdk_pktio_init():arg[0]: odpdpdk
> pktio/dpdk.c:1145:dpdk_pktio_init():arg[1]: -c
> pktio/dpdk.c:1145:dpdk_pktio_init():arg[2]: 0x1
> pktio/dpdk.c:1145:dpdk_pktio_init():arg[3]: -m
> pktio/dpdk.c:1145:dpdk_pktio_init():arg[4]: 512
> EAL: Detected 56 lcore(s)
> EAL: Probing VFIO support...
> EAL: PCI device 0000:05:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1528 net_ixgbe
> EAL: PCI device 0000:05:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:1528 net_ixgbe
> EAL: PCI device 0000:81:00.0 on NUMA socket 1
> EAL:   probe driver: 15b3:1013 net_mlx5
> PMD: net_mlx5: PCI information matches, using device "mlx5_0" (SR-IOV:
> false)
> PMD: net_mlx5: 1 port(s) detected
> PMD: net_mlx5: MPS is disabled
> PMD: net_mlx5: port 1 MAC address is 7c:fe:90:31:0d:3a
> EAL: PCI device 0000:81:00.1 on NUMA socket 1
> EAL:   probe driver: 15b3:1013 net_mlx5
> PMD: net_mlx5: PCI information matches, using device "mlx5_1" (SR-IOV:
> false)
> PMD: net_mlx5: 1 port(s) detected
> PMD: net_mlx5: MPS is disabled
> PMD: net_mlx5: port 1 MAC address is 7c:fe:90:31:0d:3b
> pktio/dpdk.c:1156:dpdk_pktio_init():rte_eal_init OK
> DPDK interface (net_mlx5): 0
>   num_rx_desc: 128
>   num_tx_desc: 512
>   rx_drop_en: 0
> odp_packet_io.c:240:setup_pktio_entry():0 uses dpdk
> created pktio 1, dev: 0, drv: dpdk
> created 1 input and 1 output queues on (0)
> DPDK interface (net_mlx5): 1
>   num_rx_desc: 128
>   num_tx_desc: 512
>   rx_drop_en: 0
> odp_packet_io.c:240:setup_pktio_entry():1 uses dpdk
> created pktio 2, dev: 1, drv: dpdk
> created 1 input and 1 output queues on (1)
>
> Queue binding (indexes)
> -----------------------
> worker 0
>   rx: pktio 0, queue 0
>   tx: pktio 1, queue 0
> worker 1
>   rx: pktio 1, queue 0
>   tx: pktio 0, queue 0
>
>
> Port config
> --------------------
> Port 0 (0)
>   rx workers 1
>   tx workers 1
>   rx queues 1
>   tx queues 1
> Port 1 (1)
>   rx workers 1
>   tx workers 1
>   rx queues 1
>   tx queues 1
>
> threads.c:54:_odph_thread_run_start_routine():helper: ODP worker thread
> started as linux pthread. (pid=43825)
> [01] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> PMD: net_mlx5: 0xb51c80: unable to allocate queue index 0
> threads.c:54:_odph_thread_run_start_routine():helper: ODP worker thread
> started as linux pthread. (pid=43825)
> [02] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> pktio/dpdk.c:1538:dpdk_start():Queue setup failed: err=-12, port=0
> odp_l2fwd.c:1671:main():Error: unable to start 0
> ubuntu@ubuntu:/home/gyanesh/odp/test/performance#
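>
> Note: err=-12 looks like -ENOMEM coming back from the mlx5 PMD (it also
> prints "unable to allocate queue index 0" just above). The failing call in
> pktio/dpdk.c should be the RX queue setup, roughly like this paraphrased
> sketch (not the exact ODP source):
>
>     /* inside dpdk_start(), per RX queue (paraphrased) */
>     ret = rte_eth_rx_queue_setup(port_id, i, num_rx_desc,
>                                  rte_eth_dev_socket_id(port_id),
>                                  NULL, pkt_pool);
>     if (ret < 0)
>         ODP_ERR("Queue setup failed: err=%d, port=%d\n", ret, port_id);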
>
>
> If any other logs or details are required, I will gladly provide them here
> to help resolve this issue.
>
> Thanks
>
> P Gyanesh Kumar Patra
>
> On Fri, Nov 10, 2017 at 6:13 AM, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
>
>> I was trying without DPDK, and it was not working properly. I guess I have
>> to compile ODP with DPDK support to work with Mellanox. Thank you for the
>> details.
>>
>> P Gyanesh Kumar Patra
>>
>> On Thu, Nov 9, 2017 at 12:47 PM, Elo, Matias (Nokia - FI/Espoo) <
>> matias....@nokia.com> wrote:
>>
>>> Hi Gyanesh,
>>>
>>> Pretty much the same steps should also work with ODP linux-generic. The
>>> main difference is the configure script. With linux-generic you use the
>>> '--with-dpdk-path=<dpdk_path>' option and optionally the
>>> --enable-dpdk-zero-copy flag. The supported DPDK version is v17.08.
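>>>
>>> E.g. something along these lines (the DPDK path here is just an example):
>>>
>>>     ./configure --with-dpdk-path=<dpdk_dir>/x86_64-native-linuxapp-gcc \
>>>                 --enable-dpdk-zero-copy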
>>>
>>> -Matias
>>>
>>> > On 9 Nov 2017, at 10:34, gyanesh patra <pgyanesh.pa...@gmail.com>
>>> wrote:
>>> >
>>> > Hi Maxim,
>>> > Thanks for the help. I managed to figure out the configuration error,
>>> > and it works fine for "ODP-DPDK" now. The MLX5 PMD was not included
>>> > properly.
>>> >
>>> > But regarding the "ODP" repo (not odp-dpdk), do I need to follow any
>>> > steps to be able to use MLX?
>>> >
>>> >
>>> > P Gyanesh Kumar Patra
>>> >
>>> > On Wed, Nov 8, 2017 at 7:56 PM, Maxim Uvarov <maxim.uva...@linaro.org>
>>> > wrote:
>>> >
>>> >> On 11/08/17 19:32, gyanesh patra wrote:
>>> >>> I am not sure what you mean. Can you please elaborate?
>>> >>>
>>> >>> As I mentioned before, I am able to run the DPDK examples, so the
>>> >>> drivers are available and working fine. I configured ODP & ODP-DPDK
>>> >>> with "LDFLAGS=-libverbs" and compiled them to work with Mellanox. I
>>> >>> followed the same approach while compiling DPDK too.
>>> >>>
>>> >>> Is there anything I am missing?
>>> >>>
>>> >>> P Gyanesh Kumar Patra
>>> >>
>>> >>
>>> >> In general, if CONFIG_RTE_LIBRTE_MLX5_PMD=y was specified then it has
>>> >> to work. I think we only tested with ixgbe, but in general it's common
>>> >> code.
>>> >>
>>> >> "Unable to init any I/O type." means it it called all open for all
>>> pktio
>>> >> in list here:
>>> >> ./platform/linux-generic/pktio/io_ops.c
>>> >>
>>> >> and setup_pkt_dpdk() failed for some reason.
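>>> >>
>>> >> The list is roughly like this (paraphrased; see the file itself for the
>>> >> exact names):
>>> >>
>>> >>     /* order in which pktio types are tried on open() */
>>> >>     const pktio_if_ops_t * const pktio_if_ops[] = {
>>> >>         &loopback_pktio_ops,
>>> >>         &dpdk_pktio_ops,
>>> >>         &sock_mmap_pktio_ops,
>>> >>         /* ... remaining pktio types ... */
>>> >>     };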
>>> >>
>>> >> I do not like the allocation errors in your log.
>>> >>
>>> >> Try to compile ODP with --enable-debug-print --enable-debug; it will
>>> >> make the ODP_DBG() macro work, and it will be visible why it does not
>>> >> open the pktio.
>>> >>
>>> >> Maxim
>>> >>
>>> >>
>>> >>>
>>> >>> On Wed, Nov 8, 2017 at 5:22 PM, Maxim Uvarov
>>> >>> <maxim.uva...@linaro.org> wrote:
>>> >>>
>>> >>>    Is the Mellanox PMD compiled in?
>>> >>>
>>> >>>    Maxim.
>>> >>>
>>> >>>    On 11/08/17 17:58, gyanesh patra wrote:
>>> >>>> Hi,
>>> >>>> I am trying to run the ODP & ODP-DPDK examples on our server with
>>> >>>> Mellanox 100G NICs, using the odp_l2fwd example. While running it, I
>>> >>>> am facing some issues.
>>> >>>> -> When I run the "ODP" example using the interface names given by the
>>> >>>> kernel as arguments, I am not getting enough throughput (the value is
>>> >>>> very low).
>>> >>>> -> And when I try the "ODP-DPDK" example using port IDs "0,1", it
>>> >>>> cannot create the pktio, whereas I am able to run the examples from
>>> >>>> the "DPDK" repo with port IDs "0,1" on the same Mellanox NICs. I tried
>>> >>>> running with "81:00.0,81:00.1" and also with the interface names,
>>> >>>> without any success. Adding a whitelist via ODP_PLATFORM_PARAMS
>>> >>>> doesn't help either.
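>>> >>>> (The whitelist attempt was along the lines of
>>> >>>>     export ODP_PLATFORM_PARAMS="-w 81:00.0 -w 81:00.1"
>>> >>>> i.e. DPDK's -w/--pci-whitelist EAL option.)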
>>> >>>>
>>> >>>> Am I missing any steps to use the Mellanox NICs? Or is there a
>>> >>>> different method to specify the device details to create the pktio?
>>> >>>> I am providing the output of the "odp_l2fwd" example for both the ODP
>>> >>>> and ODP-DPDK repositories here.
>>> >>>>
>>> >>>> The NICs being used:
>>> >>>>
>>> >>>> 0000:81:00.0 'MT27700 Family [ConnectX-4]' if=enp129s0f0
>>> >> drv=mlx5_core
>>> >>>> unused=
>>> >>>> 0000:81:00.1 'MT27700 Family [ConnectX-4]' if=enp129s0f1
>>> >> drv=mlx5_core
>>> >>>> unused=
>>> >>>>
>>> >>>> ODP l2fwd example run details:
>>> >>>> ------------------------------
>>> >>>> root@ubuntu:/home/ubuntu/odp/test/performance# ./odp_l2fwd -i
>>> >>>> enp129s0f0,enp129s0f1
>>> >>>> HW time counter freq: 2399999886 hz
>>> >>>>
>>> >>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>>> >> memory
>>> >>>> _ishm.c:880:_odp_ishm_reserve():No huge pages, fall back to normal
>>> >>>    pages.
>>> >>>> check: /proc/sys/vm/nr_hugepages.
>>> >>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>>> >> memory
>>> >>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>>> >> memory
>>> >>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>>> >> memory
>>> >>>> PKTIO: initialized loop interface.
>>> >>>> PKTIO: initialized pcap interface.
>>> >>>> PKTIO: initialized ipc interface.
>>> >>>> PKTIO: initialized socket mmap, use export
>>> >>>    ODP_PKTIO_DISABLE_SOCKET_MMAP=1
>>> >>>> to disable.
>>> >>>> PKTIO: initialized socket mmsg,use export
>>> >>>    ODP_PKTIO_DISABLE_SOCKET_MMSG=1
>>> >>>> to disable.
>>> >>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>>> >> memory
>>> >>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>>> >> memory
>>> >>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>>> >> memory
>>> >>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>>> >> memory
>>> >>>>
>>> >>>> ODP system info
>>> >>>> ---------------
>>> >>>> ODP API version: 1.15.0
>>> >>>> ODP impl name:   "odp-linux"
>>> >>>> CPU model:       Intel(R) Xeon(R) CPU E5-2680 v4
>>> >>>> CPU freq (hz):   3300000000
>>> >>>> Cache line size: 64
>>> >>>> CPU count:       56
>>> >>>>
>>> >>>>
>>> >>>> CPU features supported:
>>> >>>> SSE3 PCLMULQDQ DTES64 MONITOR DS_CPL VMX SMX EIST TM2 SSSE3 FMA
>>> >>>    CMPXCHG16B
>>> >>>> XTPR PDCM PCID DCA SSE4_1 SSE4_2 X2APIC MOVBE POPCNT TSC_DEADLINE
>>> >>>    AES XSAVE
>>> >>>> OSXSAVE AVX F16C RDRAND FPU VME DE PSE TSC MSR PAE MCE CX8 APIC
>>> >>>    SEP MTRR
>>> >>>> PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM
>>> >> PBE
>>> >>>> DIGTEMP TRBOBST ARAT PLN ECMD PTM MPERF_APERF_MSR ENERGY_EFF
>>> >>>    FSGSBASE HLE
>>> >>>> AVX2 BMI2 ERMS INVPCID RTM LAHF_SAHF SYSCALL XD 1GB_PG RDTSCP
>>> >>>    EM64T INVTSC
>>> >>>>
>>> >>>> CPU features NOT supported:
>>> >>>> CNXT_ID PSN ACNT2 BMI1 SMEP AVX512F LZCNT
>>> >>>>
>>> >>>> Running ODP appl: "odp_l2fwd"
>>> >>>> -----------------
>>> >>>> IF-count:        2
>>> >>>> Using IFs:       enp129s0f0 enp129s0f1
>>> >>>> Mode:            PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>>
>>> >>>> num worker threads: 32
>>> >>>> first CPU:          24
>>> >>>> cpu mask:           0xFFFFFFFF000000
>>> >>>>
>>> >>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>>> >> memory
>>> >>>> _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>>> >> memory
>>> >>>>
>>> >>>> Pool info
>>> >>>> ---------
>>> >>>>  pool            0
>>> >>>>  name            packet pool
>>> >>>>  pool type       packet
>>> >>>>  pool shm        11
>>> >>>>  user area shm   0
>>> >>>>  num             8192
>>> >>>>  align           64
>>> >>>>  headroom        128
>>> >>>>  seg len         8064
>>> >>>>  max data len    65536
>>> >>>>  tailroom        0
>>> >>>>  block size      8768
>>> >>>>  uarea size      0
>>> >>>>  shm size        72143104
>>> >>>>  base addr       0x7f5fc1234000
>>> >>>>  uarea shm size  0
>>> >>>>  uarea base addr (nil)
>>> >>>>
>>> >>>> pktio/socket_mmap.c:401:mmap_setup_ring():setsockopt(pkt mmap):
>>> >>>    Invalid
>>> >>>> argument
>>> >>>> pktio/socket_mmap.c:496:sock_mmap_close():mmap_unmap_sock()
>>> >>>    Invalid argument
>>> >>>> created pktio 1, dev: enp129s0f0, drv: socket
>>> >>>> Sharing 1 input queues between 16 workers
>>> >>>> Sharing 1 output queues between 16 workers
>>> >>>> created 1 input and 1 output queues on (enp129s0f0)
>>> >>>> pktio/socket_mmap.c:401:mmap_setup_ring():setsockopt(pkt mmap):
>>> >>>    Invalid
>>> >>>> argument
>>> >>>> pktio/socket_mmap.c:496:sock_mmap_close():mmap_unmap_sock()
>>> >>>    Invalid argument
>>> >>>> created pktio 2, dev: enp129s0f1, drv: socket
>>> >>>> Sharing 1 input queues between 16 workers
>>> >>>> Sharing 1 output queues between 16 workers
>>> >>>> created 1 input and 1 output queues on (enp129s0f1)
>>> >>>>
>>> >>>> Queue binding (indexes)
>>> >>>> -----------------------
>>> >>>> worker 0
>>> >>>>  rx: pktio 0, queue 0
>>> >>>>  tx: pktio 1, queue 0
>>> >>>> worker 1
>>> >>>>  rx: pktio 1, queue 0
>>> >>>>  tx: pktio 0, queue 0
>>> >>>> worker 2
>>> >>>>  rx: pktio 0, queue 0
>>> >>>>  tx: pktio 1, queue 0
>>> >>>> worker 3
>>> >>>>  rx: pktio 1, queue 0
>>> >>>>  tx: pktio 0, queue 0
>>> >>>> worker 4
>>> >>>>  rx: pktio 0, queue 0
>>> >>>>  tx: pktio 1, queue 0
>>> >>>> worker 5
>>> >>>>  rx: pktio 1, queue 0
>>> >>>>  tx: pktio 0, queue 0
>>> >>>> worker 6
>>> >>>>  rx: pktio 0, queue 0
>>> >>>>  tx: pktio 1, queue 0
>>> >>>> worker 7
>>> >>>>  rx: pktio 1, queue 0
>>> >>>>  tx: pktio 0, queue 0
>>> >>>> worker 8
>>> >>>>  rx: pktio 0, queue 0
>>> >>>>  tx: pktio 1, queue 0
>>> >>>> worker 9
>>> >>>>  rx: pktio 1, queue 0
>>> >>>>  tx: pktio 0, queue 0
>>> >>>> worker 10
>>> >>>>  rx: pktio 0, queue 0
>>> >>>>  tx: pktio 1, queue 0
>>> >>>> worker 11
>>> >>>>  rx: pktio 1, queue 0
>>> >>>>  tx: pktio 0, queue 0
>>> >>>> worker 12
>>> >>>>  rx: pktio 0, queue 0
>>> >>>>  tx: pktio 1, queue 0
>>> >>>> worker 13
>>> >>>>  rx: pktio 1, queue 0
>>> >>>>  tx: pktio 0, queue 0
>>> >>>> worker 14
>>> >>>>  rx: pktio 0, queue 0
>>> >>>>  tx: pktio 1, queue 0
>>> >>>> worker 15
>>> >>>>  rx: pktio 1, queue 0
>>> >>>>  tx: pktio 0, queue 0
>>> >>>> worker 16
>>> >>>>  rx: pktio 0, queue 0
>>> >>>>  tx: pktio 1, queue 0
>>> >>>> worker 17
>>> >>>>  rx: pktio 1, queue 0
>>> >>>>  tx: pktio 0, queue 0
>>> >>>> worker 18
>>> >>>>  rx: pktio 0, queue 0
>>> >>>>  tx: pktio 1, queue 0
>>> >>>> worker 19
>>> >>>>  rx: pktio 1, queue 0
>>> >>>>  tx: pktio 0, queue 0
>>> >>>> worker 20
>>> >>>>  rx: pktio 0, queue 0
>>> >>>>  tx: pktio 1, queue 0
>>> >>>> worker 21
>>> >>>>  rx: pktio 1, queue 0
>>> >>>>  tx: pktio 0, queue 0
>>> >>>> worker 22
>>> >>>>  rx: pktio 0, queue 0
>>> >>>>  tx: pktio 1, queue 0
>>> >>>> worker 23
>>> >>>>  rx: pktio 1, queue 0
>>> >>>>  tx: pktio 0, queue 0
>>> >>>> worker 24
>>> >>>>  rx: pktio 0, queue 0
>>> >>>>  tx: pktio 1, queue 0
>>> >>>> worker 25
>>> >>>>  rx: pktio 1, queue 0
>>> >>>>  tx: pktio 0, queue 0
>>> >>>> worker 26
>>> >>>>  rx: pktio 0, queue 0
>>> >>>>  tx: pktio 1, queue 0
>>> >>>> worker 27
>>> >>>>  rx: pktio 1, queue 0
>>> >>>>  tx: pktio 0, queue 0
>>> >>>> worker 28
>>> >>>>  rx: pktio 0, queue 0
>>> >>>>  tx: pktio 1, queue 0
>>> >>>> worker 29
>>> >>>>  rx: pktio 1, queue 0
>>> >>>>  tx: pktio 0, queue 0
>>> >>>> worker 30
>>> >>>>  rx: pktio 0, queue 0
>>> >>>>  tx: pktio 1, queue 0
>>> >>>> worker 31
>>> >>>>  rx: pktio 1, queue 0
>>> >>>>  tx: pktio 0, queue 0
>>> >>>>
>>> >>>>
>>> >>>> Port config
>>> >>>> --------------------
>>> >>>> Port 0 (enp129s0f0)
>>> >>>>  rx workers 16
>>> >>>>  tx workers 16
>>> >>>>  rx queues 1
>>> >>>>  tx queues 1
>>> >>>> Port 1 (enp129s0f1)
>>> >>>>  rx workers 16
>>> >>>>  tx workers 16
>>> >>>>  rx queues 1
>>> >>>>  tx queues 1
>>> >>>>
>>> >>>> [01] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [02] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [03] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [04] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [05] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [06] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [07] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [08] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [09] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [10] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [11] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [12] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [13] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [14] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [15] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [16] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [17] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [18] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [19] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [20] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [21] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [22] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [23] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [24] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [25] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [26] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [27] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [28] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [29] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [30] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [31] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>> [32] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>>
>>> >>>> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
>>> >>>> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
>>> >>>> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
>>> >>>> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
>>> >>>> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
>>> >>>> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
>>> >>>> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
>>> >>>> 96 pps, 96 max pps,  0 rx drops, 0 tx drops
>>> >>>> 0 pps, 96 max pps,  0 rx drops, 0 tx drops
>>> >>>> 64 pps, 96 max pps,  0 rx drops, 0 tx drops
>>> >>>> 0 pps, 96 max pps,  0 rx drops, 0 tx drops
>>> >>>> 0 pps, 96 max pps,  0 rx drops, 0 tx drops
>>> >>>> 32 pps, 96 max pps,  0 rx drops, 0 tx drops
>>> >>>> 0 pps, 96 max pps,  0 rx drops, 0 tx drops
>>> >>>> 416 pps, 416 max pps,  0 rx drops, 0 tx drops
>>> >>>>
>>> >>>>
>>> >>>> ODP-DPDK example run details:
>>> >>>> -----------------------------
>>> >>>> root@ubuntu:/home/ubuntu/odp-dpdk/test/common_plat/performance#
>>> >>>    ./odp_l2fwd
>>> >>>> -i 0,1
>>> >>>> EAL: Detected 56 lcore(s)
>>> >>>> EAL: Probing VFIO support...
>>> >>>> EAL: PCI device 0000:05:00.0 on NUMA socket 0
>>> >>>> EAL:   probe driver: 8086:1528 net_ixgbe
>>> >>>> EAL: PCI device 0000:05:00.1 on NUMA socket 0
>>> >>>> EAL:   probe driver: 8086:1528 net_ixgbe
>>> >>>> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap
>>> >> failed:Cannot
>>> >>>> allocate memory
>>> >>>> ../linux-generic/_ishm.c:866:_odp_ishm_reserve():No huge pages,
>>> >>>    fall back
>>> >>>> to normal pages. check: /proc/sys/vm/nr_hugepages.
>>> >>>> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap
>>> >> failed:Cannot
>>> >>>> allocate memory
>>> >>>> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap
>>> >> failed:Cannot
>>> >>>> allocate memory
>>> >>>> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap
>>> >> failed:Cannot
>>> >>>> allocate memory
>>> >>>> PKTIO: initialized loop interface.
>>> >>>> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap
>>> >> failed:Cannot
>>> >>>> allocate memory
>>> >>>> No crypto devices available
>>> >>>> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap
>>> >> failed:Cannot
>>> >>>> allocate memory
>>> >>>> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap
>>> >> failed:Cannot
>>> >>>> allocate memory
>>> >>>> ../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap
>>> >> failed:Cannot
>>> >>>> allocate memory
>>> >>>>
>>> >>>> ODP system info
>>> >>>> ---------------
>>> >>>> ODP API version: 1.15.0
>>> >>>> ODP impl name:   odp-dpdk
>>> >>>> CPU model:       Intel(R) Xeon(R) CPU E5-2680 v4
>>> >>>> CPU freq (hz):   2400000000
>>> >>>> Cache line size: 64
>>> >>>> CPU count:       56
>>> >>>>
>>> >>>> Running ODP appl: "odp_l2fwd"
>>> >>>> -----------------
>>> >>>> IF-count:        2
>>> >>>> Using IFs:       0 1
>>> >>>> Mode:            PKTIN_DIRECT, PKTOUT_DIRECT
>>> >>>>
>>> >>>> num worker threads: 32
>>> >>>> first CPU:          24
>>> >>>> cpu mask:           0xFFFFFFFF000000
>>> >>>>
>>> >>>> mempool <packet pool>@0x7f1c7fe7de40
>>> >>>>  flags=10
>>> >>>>  pool=0x7f1c7e8ddcc0
>>> >>>>  phys_addr=0x17ffe7de40
>>> >>>>  nb_mem_chunks=1
>>> >>>>  size=8192
>>> >>>>  populated_size=8192
>>> >>>>  header_size=64
>>> >>>>  elt_size=2624
>>> >>>>  trailer_size=64
>>> >>>>  total_obj_size=2752
>>> >>>>  private_data_size=64
>>> >>>>  avg bytes/object=2752.000000
>>> >>>>  internal cache infos:
>>> >>>>    cache_size=512
>>> >>>>    cache_count[0]=0
>>> >>>>    cache_count[1]=0
>>> >>>>    cache_count[2]=0
>>> >>>>    cache_count[3]=0
>>> >>>>    cache_count[4]=0
>>> >>>>    cache_count[5]=0
>>> >>>>    cache_count[6]=0
>>> >>>>    cache_count[7]=0
>>> >>>>    cache_count[8]=0
>>> >>>>    cache_count[9]=0
>>> >>>>    cache_count[10]=0
>>> >>>>    cache_count[11]=0
>>> >>>>    cache_count[12]=0
>>> >>>>    cache_count[13]=0
>>> >>>>    cache_count[14]=0
>>> >>>>    cache_count[15]=0
>>> >>>>    cache_count[16]=0
>>> >>>>    cache_count[17]=0
>>> >>>>    cache_count[18]=0
>>> >>>>    cache_count[19]=0
>>> >>>>    cache_count[20]=0
>>> >>>>    cache_count[21]=0
>>> >>>>    cache_count[22]=0
>>> >>>>    cache_count[23]=0
>>> >>>>    cache_count[24]=0
>>> >>>>    cache_count[25]=0
>>> >>>>    cache_count[26]=0
>>> >>>>    cache_count[27]=0
>>> >>>>    cache_count[28]=0
>>> >>>>    cache_count[29]=0
>>> >>>>    cache_count[30]=0
>>> >>>>    cache_count[31]=0
>>> >>>>    cache_count[32]=0
>>> >>>>    cache_count[33]=0
>>> >>>>    cache_count[34]=0
>>> >>>>    cache_count[35]=0
>>> >>>>    cache_count[36]=0
>>> >>>>    cache_count[37]=0
>>> >>>>    cache_count[38]=0
>>> >>>>    cache_count[39]=0
>>> >>>>    cache_count[40]=0
>>> >>>>    cache_count[41]=0
>>> >>>>    cache_count[42]=0
>>> >>>>    cache_count[43]=0
>>> >>>>    cache_count[44]=0
>>> >>>>    cache_count[45]=0
>>> >>>>    cache_count[46]=0
>>> >>>>    cache_count[47]=0
>>> >>>>    cache_count[48]=0
>>> >>>>    cache_count[49]=0
>>> >>>>    cache_count[50]=0
>>> >>>>    cache_count[51]=0
>>> >>>>    cache_count[52]=0
>>> >>>>    cache_count[53]=0
>>> >>>>    cache_count[54]=0
>>> >>>>    cache_count[55]=0
>>> >>>>    cache_count[56]=0
>>> >>>>    cache_count[57]=0
>>> >>>>    cache_count[58]=0
>>> >>>>    cache_count[59]=0
>>> >>>>    cache_count[60]=0
>>> >>>>    cache_count[61]=0
>>> >>>>    cache_count[62]=0
>>> >>>>    cache_count[63]=0
>>> >>>>    cache_count[64]=0
>>> >>>>    cache_count[65]=0
>>> >>>>    cache_count[66]=0
>>> >>>>    cache_count[67]=0
>>> >>>>    cache_count[68]=0
>>> >>>>    cache_count[69]=0
>>> >>>>    cache_count[70]=0
>>> >>>>    cache_count[71]=0
>>> >>>>    cache_count[72]=0
>>> >>>>    cache_count[73]=0
>>> >>>>    cache_count[74]=0
>>> >>>>    cache_count[75]=0
>>> >>>>    cache_count[76]=0
>>> >>>>    cache_count[77]=0
>>> >>>>    cache_count[78]=0
>>> >>>>    cache_count[79]=0
>>> >>>>    cache_count[80]=0
>>> >>>>    cache_count[81]=0
>>> >>>>    cache_count[82]=0
>>> >>>>    cache_count[83]=0
>>> >>>>    cache_count[84]=0
>>> >>>>    cache_count[85]=0
>>> >>>>    cache_count[86]=0
>>> >>>>    cache_count[87]=0
>>> >>>>    cache_count[88]=0
>>> >>>>    cache_count[89]=0
>>> >>>>    cache_count[90]=0
>>> >>>>    cache_count[91]=0
>>> >>>>    cache_count[92]=0
>>> >>>>    cache_count[93]=0
>>> >>>>    cache_count[94]=0
>>> >>>>    cache_count[95]=0
>>> >>>>    cache_count[96]=0
>>> >>>>    cache_count[97]=0
>>> >>>>    cache_count[98]=0
>>> >>>>    cache_count[99]=0
>>> >>>>    cache_count[100]=0
>>> >>>>    cache_count[101]=0
>>> >>>>    cache_count[102]=0
>>> >>>>    cache_count[103]=0
>>> >>>>    cache_count[104]=0
>>> >>>>    cache_count[105]=0
>>> >>>>    cache_count[106]=0
>>> >>>>    cache_count[107]=0
>>> >>>>    cache_count[108]=0
>>> >>>>    cache_count[109]=0
>>> >>>>    cache_count[110]=0
>>> >>>>    cache_count[111]=0
>>> >>>>    cache_count[112]=0
>>> >>>>    cache_count[113]=0
>>> >>>>    cache_count[114]=0
>>> >>>>    cache_count[115]=0
>>> >>>>    cache_count[116]=0
>>> >>>>    cache_count[117]=0
>>> >>>>    cache_count[118]=0
>>> >>>>    cache_count[119]=0
>>> >>>>    cache_count[120]=0
>>> >>>>    cache_count[121]=0
>>> >>>>    cache_count[122]=0
>>> >>>>    cache_count[123]=0
>>> >>>>    cache_count[124]=0
>>> >>>>    cache_count[125]=0
>>> >>>>    cache_count[126]=0
>>> >>>>    cache_count[127]=0
>>> >>>>    total_cache_count=0
>>> >>>>  common_pool_count=8192
>>> >>>>  no statistics available
>>> >>>> ../linux-generic/odp_packet_io.c:226:setup_pktio_entry():Unable to
>>> >>>    init any
>>> >>>> I/O type.
>>> >>>> odp_l2fwd.c:642:create_pktio():Error: failed to open 0
>>> >>>>
>>> >>>>
>>> >>>> Thanks & Regards,
>>> >>>> P Gyanesh Kumar Patra
>>> >>>>
>>> >>>
>>> >>>
>>> >>
>>> >>
>>>
>>>
>>
