merlimat commented on issue #127:
URL: https://github.com/apache/pulsar-client-python/issues/127#issuecomment-1572373182
I was able to reproduce this on the 3.1.0 client.
```
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
  * frame #0: 0x000000018577ebc8 libsystem_kernel.dylib`__psynch_mutexwait + 8
    frame #1: 0x00000001857b90c4 libsystem_pthread.dylib`_pthread_mutex_firstfit_lock_wait + 84
    frame #2: 0x00000001857b6a5c libsystem_pthread.dylib`_pthread_mutex_firstfit_lock_slow + 248
    frame #3: 0x00000001074225fc _pulsar.cpython-311-darwin.so`unsigned long boost::asio::detail::kqueue_reactor::cancel_timer<boost::asio::time_traits<boost::posix_time::ptime> >(boost::asio::detail::timer_queue<boost::asio::time_traits<boost::posix_time::ptime> >&, boost::asio::detail::timer_queue<boost::asio::time_traits<boost::posix_time::ptime> >::per_timer_data&, unsigned long) + 56
    frame #4: 0x0000000107569680 _pulsar.cpython-311-darwin.so`pulsar::ProducerImpl::shutdown() + 260
    frame #5: 0x000000010755daa4 _pulsar.cpython-311-darwin.so`pulsar::ProducerImpl::~ProducerImpl() + 96
    frame #6: 0x0000000107414878 _pulsar.cpython-311-darwin.so`pybind11::class_<pulsar::Producer>::dealloc(pybind11::detail::value_and_holder&) + 132
    frame #7: 0x00000001073b2c6c _pulsar.cpython-311-darwin.so`pybind11::detail::clear_instance(_object*) + 396
    frame #8: 0x00000001073b28dc _pulsar.cpython-311-darwin.so`pybind11_object_dealloc + 20
```
My impression is that this is not related to the Python wrapper, but rather to the state of the locks in Boost Asio after the fork.

In the case above, it simply got stuck while trying to cancel a timer, even though no other thread was holding the lock on the timer queue. It's possible that the timer was being triggered at the moment of the fork, which left the mutex locked in the child process.
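To illustrate the failure mode, here's a minimal sketch in plain Python, with no Pulsar involved; the `holder` thread and the timings are invented for the demo. If any thread holds a mutex at the instant of `fork()`, the child inherits a copy of the locked mutex but not the thread that owns it, so the next acquire in the child blocks forever, which is the same shape as the hang in `cancel_timer()` above:

```python
import os
import signal
import threading
import time

lock = threading.Lock()

def holder():
    # Stand-in for Asio's reactor thread holding the timer-queue
    # mutex at the instant of the fork.
    with lock:
        time.sleep(5)

threading.Thread(target=holder, daemon=True).start()
time.sleep(0.5)  # make sure the lock is taken before we fork

pid = os.fork()
if pid == 0:
    # Child: memory was copied with the mutex in the locked state,
    # but the owning thread was not copied. This acquire can never
    # succeed, so the child hangs, like the stack trace above.
    lock.acquire()
    os._exit(0)  # never reached

time.sleep(2)
wpid, _ = os.waitpid(pid, os.WNOHANG)
print("child still stuck" if wpid == 0 else "child exited")
os.kill(pid, signal.SIGKILL)
os.waitpid(pid, 0)
```

The usual mitigation for this class of bug is a `pthread_atfork()` handler (or `os.register_at_fork()` at the Python level) that acquires the affected mutexes before the fork and releases them in both parent and child, so the child never inherits a locked mutex.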