lhotari commented on issue #23451: URL: https://github.com/apache/pulsar/issues/23451#issuecomment-2411758844
> I believe the issue is that when the broker responds to a `PRODUCER` command, it calls `ServerCnx::handleProducer`, which calls `BrokerService::getOrCreateTopic`, which calls `BrokerService::getTopic`, which calls `BrokerService::fetchPartitionedTopicMetadataAsync(TopicName topicName)`, which calls `BrokerService::fetchPartitionedTopicMetadataAsync(TopicName topicName, boolean refreshCacheAndGet)` with `refreshCacheAndGet` set to `false`. This means that `NamespaceResources::getPartitionedTopicMetadataAsync` is called with `refresh` always `false`, so `getAsync` is called on `NamespaceResources` rather than `refreshAndGetAsync`, and the `sync` call on ZooKeeper is not made before performing the read.

Good observations @dwang-qm! There have been multiple issues with topic metadata handling in the past because the consistency model is undefined. Two years ago I wrote a blog post about this; the ["Metadata consistency issues from user’s point of view"](https://codingthestreams.com/pulsar/2022/10/18/view-of-pulsar-metadata-store.html#metadata-consistency-issues-from-users-point-of-view) section describes the state of the problems at that time.

For topic creation, deletion and partition modification, operations might still not be strongly consistent, as you have observed. The current workarounds revolve around retries. Some improvements have been made over time for individual use cases, such as using the ZooKeeper `sync` support introduced in #18518.

I guess solving the metadata inconsistencies should start by defining the consistency model. It's currently eventually consistent in many cases, which surprises many users since it's not well defined.
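
To make the difference concrete, here is a minimal sketch (not Pulsar's actual `NamespaceResources` code) written against the plain ZooKeeper client API. The class name is hypothetical and the method names simply mirror the ones quoted above for illustration; the point is that the "refresh and get" path issues `sync()` so the connected ZooKeeper server catches up with the leader before the read, while the plain path can return data from a lagging server:

```java
// Illustrative sketch only: contrasts a possibly-stale read with a read preceded by sync().
import java.util.concurrent.CompletableFuture;

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.KeeperException.Code;
import org.apache.zookeeper.ZooKeeper;

public class SyncBeforeReadSketch {

    /**
     * Plain asynchronous read. The response may come from a ZooKeeper server that
     * lags behind the leader, so a partitioned-topic metadata update committed a
     * moment ago can be missing from the result.
     */
    static CompletableFuture<byte[]> getAsync(ZooKeeper zk, String path) {
        CompletableFuture<byte[]> result = new CompletableFuture<>();
        zk.getData(path, false, (rc, p, ctx, data, stat) -> {
            if (rc == Code.OK.intValue()) {
                result.complete(data);
            } else {
                result.completeExceptionally(KeeperException.create(Code.get(rc), p));
            }
        }, null);
        return result;
    }

    /**
     * "Refresh and get": issue sync() first so the server this session is connected
     * to catches up with the leader, then read. Updates committed before the sync
     * are visible to the subsequent read.
     */
    static CompletableFuture<byte[]> refreshAndGetAsync(ZooKeeper zk, String path) {
        CompletableFuture<Void> synced = new CompletableFuture<>();
        zk.sync(path, (rc, p, ctx) -> {
            if (rc == Code.OK.intValue()) {
                synced.complete(null);
            } else {
                synced.completeExceptionally(KeeperException.create(Code.get(rc), p));
            }
        }, null);
        return synced.thenCompose(ignore -> getAsync(zk, path));
    }
}
```

In the broker there is also a metadata cache in front of the store, so a real fix has to refresh that cached entry in addition to issuing the ZooKeeper `sync`, which appears to be what the `refreshAndGetAsync` path referenced above is for.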
