Thank you, Dave!

Your reply sounds good.



On Wed, Jun 20, 2018 at 5:00 PM, Dave Barach (dbarach) <[email protected]>
wrote:

> If either vpp or client dies with the svm mutex held, the condition you
> show can occur. Even marginally well-behaved code won’t cause this
> condition, so I’d suggest that you fix the underlying problem.
>
>
>
> To get going again: “rm /dev/shm/{global_vm,vpe-api}” to clean up the mess.
>
>
>
> Note that the packaged vpp service cleans up in this exact way:
>
>
>
> [Unit]
> Description=vector packet processing engine
> After=network.target
>
> [Service]
> Type=simple
> ExecStartPre=-/bin/rm -f /dev/shm/db /dev/shm/global_vm /dev/shm/vpe-api
> ExecStartPre=-/sbin/modprobe uio_pci_generic
> ExecStart=/usr/bin/vpp -c /etc/vpp/startup.conf
> ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm /dev/shm/vpe-api
> Restart=always
>
> [Install]
> WantedBy=multi-user.target
>
>
>
>
>
>
>
> *From:* [email protected] <[email protected]> *On Behalf Of *chetan
> bhasin
> *Sent:* Wednesday, June 20, 2018 6:29 AM
> *To:* [email protected]
> *Subject:* [vpp-dev] Mutex Deadlock
>
>
>
> Hi,
>
>
>
> I am facing a mutex deadlock between a client application and vpp. For
> example, the client app attempts to connect and invokes vl_client_connect,
> which grabs pthread_mutex_lock (&svm->mutex); the client then dies for some
> reason while still holding the lock.
>
> VPP then hangs in dead_client_scan, and on subsequent client app restarts
> we see:
>
>
>
> Thread 1 (Thread 0x2b6c4a79fa40 (LWP 5201)):
>
> #0  0x00002b6c4b82942d in __lll_lock_wait () from /lib64/libpthread.so.0
>
> #1  0x00002b6c4b824dcb in _L_lock_812 () from /lib64/libpthread.so.0
>
> #2  0x00002b6c4b824c98 in pthread_mutex_lock () from /lib64/libpthread.so.0
>
> #3  0x00002b6c5cf03956 in region_lock (rp=rp@entry=0x3002d000, tag=tag@entry=2) at /bfs-build/build-area.32/builds/LinuxNBngp_mainline-dpdk_RH7/2018-06-13-2241/third-party/vpp/vpp_1801/build-data/../src/svm/svm.c:64
>
> #4  0x00002b6c5cf0636b in svm_map_region (a=a@entry=0x7fff42d41ef0) at /bfs-build/build-area.32/builds/LinuxNBngp_mainline-dpdk_RH7/2018-06-13-2241/third-party/vpp/vpp_1801/build-data/../src/svm/svm.c:711
>
> #5  0x00002b6c5cf06b4f in svm_region_find_or_create (a=a@entry=0x7fff42d41ef0) at /bfs-build/build-area.32/builds/LinuxNBngp_mainline-dpdk_RH7/2018-06-13-2241/third-party/vpp/vpp_1801/build-data/../src/svm/svm.c:892
>
> #6  0x00002b6c5ccf0f59 in vl_map_shmem (region_name=region_name@entry=0x2b6c5d322150 <api_map.15848> "/vpe-api", is_vlib=is_vlib@entry=0) at /bfs-build/build-area.32/builds/LinuxNBngp_mainline-dpdk_RH7/2018-06-13-2241/third-party/vpp/vpp_1801/build-data/../src/vlibmemory/memory_shared.c:485
>
> #7  0x00002b6c5ccef8eb in vl_client_api_map (region_name=region_name@entry=0x2b6c5d322150 <api_map.15848> "/vpe-api") at /bfs-build/build-area.32/builds/LinuxNBngp_mainline-dpdk_RH7/2018-06-13-2241/third-party/vpp/vpp_1801/build-data/../src/vlibmemory/memory_client.c:390
>
> #8  0x00002b6c5d11ff42 in vapi_connect (ctx=0x10ab8780, name=0xef71698 "opwv_ats_client-0x2792000", chroot_prefix=0x0, max_outstanding_requests=<optimized out>, response_queue_size=15000, mode=VAPI_MODE_NONBLOCKING) at /bfs-build/build-area.32/builds/LinuxNBngp_mainline-dpdk_RH7/2018-06-13-2241/third-party/vpp/vpp_1801/build-data/../src/vpp-api/vapi/vapi.c:319
>
>
>
> Any ideas/suggestions?
>
>
>
>
>
> Thanks,
>
> Chetan Bhasin
>
> 
>

View/Reply Online (#9654): https://lists.fd.io/g/vpp-dev/message/9654