nodece commented on code in PR #22577:
URL: https://github.com/apache/pulsar/pull/22577#discussion_r1579088423
##########
pulsar-broker/src/test/java/org/apache/pulsar/broker/service/OneWayReplicatorTest.java:
##########
@@ -652,4 +652,63 @@ public void testUnFenceTopicToReuse() throws Exception {
admin2.topics().delete(topicName);
});
}
+
+ @Test
+    public void testNamespaceLevelReplicationRemoteConflictTopicExist() throws Exception {
+        final String topicName = BrokerTestUtil.newUniqueName("persistent://" + replicatedNamespace + "/tp");
+        // Verify: a "not found" error is returned when calling "getPartitionedTopicMetadata" on a topic that does not exist.
+ try {
+ admin1.topics().getPartitionedTopicMetadata(topicName);
+ fail("Expected a not found error");
+ } catch (Exception ex) {
+ Throwable unWrapEx = FutureUtil.unwrapCompletionException(ex);
+ assertTrue(unWrapEx.getMessage().contains("not found"));
+ }
+        // Verify: a conflict error is returned when a topic with a different number of partitions exists on the remote side.
+ admin2.topics().createPartitionedTopic(topicName, 1);
+ try {
+ admin1.topics().createPartitionedTopic(topicName, 2);
Review Comment:
I want to explain the replicator behavior.
Geo-replication is enabled on the namespace, and `topic-1` has 1 partition on the remote cluster. You can still create `topic-1` with 3 partitions on the local cluster; that creation succeeds.
When you send a message to `topic-1` from the local cluster, the replicator is created, and partitions 1 and 2 become non-partitioned topics on the remote cluster.
| local cluster | remote cluster |
| -- | -- |
| topic-1-partition-0 (partitioned) | topic-1-partition-0 (partitioned) |
| topic-1-partition-1 (partitioned) | topic-1-partition-1 (non-partitioned) |
| topic-1-partition-2 (partitioned) | topic-1-partition-2 (non-partitioned) |
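The mismatch in the table above can be sketched as a small standalone model. This is a hypothetical illustration, not Pulsar code: `remoteNonPartitionedTopics` is an invented helper that, given the local and remote partition counts, lists which local partitions would land on the remote cluster as auto-created non-partitioned topics.

```java
import java.util.ArrayList;
import java.util.List;

public class ReplicationConflictSketch {

    // Hypothetical model of the conflict: each local partition is replicated
    // individually. Partition indexes covered by the remote partitioned-topic
    // metadata map onto real partitions; the remaining indexes are auto-created
    // on the remote side as non-partitioned topics.
    static List<String> remoteNonPartitionedTopics(String topic,
                                                  int localPartitions,
                                                  int remotePartitions) {
        List<String> conflicts = new ArrayList<>();
        for (int i = remotePartitions; i < localPartitions; i++) {
            conflicts.add(topic + "-partition-" + i);
        }
        return conflicts;
    }

    public static void main(String[] args) {
        // Local has 3 partitions, remote has 1: partitions 1 and 2 conflict,
        // matching the table above.
        System.out.println(remoteNonPartitionedTopics("topic-1", 3, 1));
    }
}
```

With equal partition counts on both sides the helper returns an empty list, which is the non-conflicting case the test's successful `createPartitionedTopic` call relies on.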
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]