On 9/16/2019 8:40 AM, Rajesh Kumar wrote:
Hi,

Sorry, I didn't complete my previous mail.


Hi Rajesh, apologies for the delay in responding; I've been out of the office for the past few weeks.

The errors I was getting are:
1)
root@basepdump-67b4b44448-lt8wf:/# pdump
EAL: Detected 2 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_12_19ffcc06b54
EAL: Probing VFIO support...
EAL: Cannot initialize tailq: RTE_EVENT_RING
Tailq 0: qname:<UIO_RESOURCE_LIST>, tqh_first:(nil), tqh_last:0x7fda1b17d47c
Tailq 1: qname:<VFIO_RESOURCE_LIST>, tqh_first:(nil), tqh_last:0x7fda1b17d4ac
Tailq 2: qname:<RTE_RING>, tqh_first:0x108064900, tqh_last:0x108064900
Tailq 3: qname:<RTE_HASH>, tqh_first:(nil), tqh_last:0x7fda1b17d50c
.............................
EAL: FATAL: Cannot init tail queues for objects
EAL: Cannot init tail queues for objects
PANIC in main():
Cannot init EAL
5: [pdump(+0x2e2a) [0x557832863e2a]]
4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7fe128dd809b]]
3: [pdump(+0x233a) [0x55783286333a]]
2: [/usr/lib/x86_64-linux-gnu/librte_eal.so.18.11(__rte_panic+0xbd) [0x7fe1292b0ca5]]
1: [/usr/lib/x86_64-linux-gnu/librte_eal.so.18.11(rte_dump_stack+0x2e) [0x7fe1292c65be]]
Aborted (core dumped)


2)
root@basepdump-67b4b44448-lt8wf:/# pdump
EAL: Detected 2 lcore(s)
EAL: Detected 1 NUMA nodes
PANIC in rte_eal_config_reattach():
Cannot mmap memory for rte_config at [(nil)], got [0x7ffff...] - please use '--base-virtaddr' option
6: [./dpdk-pdump(start+0x2a) [0x5555559c7aa]]
5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7fe128dd809b]]
4: [./dpdk-pdump(main+0xe2) [0x555555597dd2]]
3: [./dpdk-pdump(rte_eal_init+0xc06) [0x555555678416]]
..........
Aborted (core dumped)
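The second panic itself points at the EAL's --base-virtaddr option, which pins the EAL's shared-memory mappings at a fixed virtual address so a secondary process can reattach at the same location as the primary. A hedged sketch of how that might be applied here (the address 0x200000000 and the pcap path are arbitrary examples, not values from this setup):

```shell
# Example only: pin the primary's (ovs-vswitchd's) EAL mappings to a fixed
# virtual address via OVS's dpdk-extra knob, then restart vswitchd.
ovs-vsctl --no-wait set Open_vSwitch . \
    other_config:dpdk-extra="--base-virtaddr=0x200000000"

# Example only: attach dpdk-pdump as a secondary. EAL options go before
# the '--' separator; the pdump options follow it.
dpdk-pdump --base-virtaddr=0x200000000 -- \
    --pdump 'port=0,queue=*,rx-dev=/tmp/capture-rx.pcap'
```

Whether --base-virtaddr alone is sufficient across pod boundaries is exactly what's in question here; it only helps if the runtime files and hugepages are already shared.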


From the logs above, it looks like the secondary process is unable to access the primary's configuration across the pods. I'm not sure whether this is possible myself, as I haven't tried this setup with pdump before.

Can I ask whether you are explicitly sharing the process configuration between the pods? Are you also sharing hugepages between the pods, and if so, what steps were taken to ensure this?
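For the sharing question above, one way it is sometimes done is to mount the same host paths into both pods so the secondary sees the primary's runtime files and hugepage backing. A hypothetical sketch only; the volume and container names are examples, not taken from this setup:

```yaml
# Hypothetical pod-spec fragment; both the OVS pod and the pdump pod
# would need equivalent mounts of the same host paths.
spec:
  containers:
  - name: pdump                    # example container name
    volumeMounts:
    - name: hugepages
      mountPath: /dev/hugepages    # hugepage backing files created by the primary
    - name: dpdk-run
      mountPath: /var/run/dpdk     # EAL runtime: config, mp_socket, fbarray files
  volumes:
  - name: hugepages
    hostPath:
      path: /dev/hugepages
  - name: dpdk-run
    hostPath:
      path: /var/run/dpdk
```

The mp_socket path in the first log (/var/run/dpdk/rte/...) suggests the default "rte" file prefix is in use, so both pods would need to agree on that directory.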


I've attached the same errors as well.

I need help figuring out where I'm going wrong.


We'll also try to recreate this in our lab setup, as in theory this should work.

Regards
Ian




Thanks,
Rajesh kumar S R


------------------------------------------------------------------------
*From:* ovs-discuss-boun...@openvswitch.org <ovs-discuss-boun...@openvswitch.org> on behalf of Rajesh Kumar <rajesh.ku...@certesnetworks.com>
*Sent:* Monday, September 16, 2019 1:00:56 PM
*To:* ovs-discuss@openvswitch.org
*Subject:* [ovs-discuss] OVS - PDUMP: Pdump initialization failure in different container

In our kubernetes setup, we are running OVS in a pod with dpdk enabled.

We are using DPDK 18.11.2.

I wanted to use dpdk-pdump as a packet-capture tool and am trying to run pdump in a separate pod.

As pdump is a secondary process, it maps to the hugepages allocated by the primary process (ovs-vswitchd).
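Since the secondary can only attach if it sees the same hugepage backing files and EAL runtime files as the primary, a quick sanity check (paths assume the common defaults and the "rte" file prefix visible in the logs; adjust to your environment) could be run in each pod:

```shell
# Check that hugetlbfs is mounted and visible in this pod.
mount | grep -i huge

# The primary's hugepage backing files (e.g. rtemap_*) should be listed here.
ls /dev/hugepages

# The EAL runtime directory: config, mp_socket, fbarray files.
ls /var/run/dpdk/rte
```

If either pod shows an empty or different view of these paths, the secondary cannot map the primary's memory regardless of any EAL options.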

I'm getting these two errors while starting pdump as a secondary process in a separate pod.



Without the container setup, I was able to bring up pdump with OVS.





_______________________________________________
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
