BewareMyPower commented on issue #127:
URL:
https://github.com/apache/pulsar-client-python/issues/127#issuecomment-1571946717
Unfortunately, this script could not reproduce the deadlock in my local environment. I started a Pulsar 2.11.1 standalone and installed the Python client 3.1.0 on Ubuntu 22.04 with Python 3.8. Then I just modified the topic name to `my-topic` and ran `while true; do python3 repro.py; done`. The loop has been running for about 30 minutes and still works fine.
> 2023-06-01T19:36:56,301+0800 [pulsar-io-19-11] INFO
org.apache.pulsar.broker.service.ServerCnx - [/127.0.0.1:46530] Created new
producer:
Producer{topic=PersistentTopic{topic=persistent://public/default/my-topic},
client=/127.0.0.1:46530, producerName=standalone-0-2414, producerId=0}
As you can see, the suffix of the automatically generated producer name is 2414, so the script has been executed 2000+ times.
From the stack trace you provided, it's stuck in `ClientConnection::closeSocket` while clearing the connections in the pool. Normally, it should just fail with `ConnectError`. Could you provide the latest logs from when the deadlock happened in your environment so I can see the difference?
Although I think this issue likely still exists in the latest code, could you also help verify the 3.2.0 candidate 1?
https://dist.apache.org/repos/dist/release/pulsar/pulsar-client-python-3.1.0/
I believe it's because the C++ destructors do many unnecessary things. We should keep the destructors simple. Here is another issue related to the destructors:
https://github.com/apache/pulsar-client-python/issues/103
The simplest solution might be to add an option that skips all `shutdown` calls in the destructors of these C++ classes. But it would be better if I could reproduce the issue locally first, so I can verify that the fix actually works.
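The opt-out idea can be sketched in plain Python (a toy illustration of the pattern only, not the real Pulsar C++ classes; the `Connection` class and the `skip_shutdown_on_del` option are made-up names):

```python
import threading


class Connection:
    """Toy stand-in for a client connection, illustrating the pattern:
    keep the destructor trivial and make the heavyweight shutdown path
    opt-in, so finalization never takes locks or touches the network."""

    def __init__(self, skip_shutdown_on_del: bool = False):
        self._lock = threading.Lock()
        self._closed = False
        self._skip_shutdown_on_del = skip_shutdown_on_del
        self.shutdown_calls = 0  # for observing behavior in this demo

    def shutdown(self):
        # Heavyweight path: in a real client this would flush buffers,
        # close sockets, and join I/O threads under the lock -- exactly
        # the kind of work that can deadlock during finalization.
        with self._lock:
            if not self._closed:
                self.shutdown_calls += 1
                self._closed = True

    def __del__(self):
        # Destructor stays simple: only run the heavy path when the
        # caller did not opt out; otherwise just drop references.
        if not self._skip_shutdown_on_del:
            self.shutdown()


# With the option set, dropping the object does no locking or I/O work;
# callers who want a clean shutdown call shutdown() explicitly instead.
conn = Connection(skip_shutdown_on_del=True)
del conn
```

The point is that an explicit `shutdown()` remains available for the normal close path, while the destructor becomes safe to run at any time, including interpreter teardown.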