lianetm commented on code in PR #19885:
URL: https://github.com/apache/kafka/pull/19885#discussion_r2534735107
##########
clients/src/main/java/org/apache/kafka/clients/consumer/internals/CommitRequestManager.java:
##########
@@ -1124,22 +1123,24 @@ private void onSuccess(final long currentTimeMs,
var failedRequestRegistered = false;
for (var topic : response.topics()) {
+ // If the topic id is used, the topic name is empty in the response.
+ String topicName = topic.name().isEmpty() ? metadata.topicNames().get(topic.topicId()) : topic.name();
for (var partition : topic.partitions()) {
var tp = new TopicPartition(
- topic.name(),
+ topicName,
partition.partitionIndex()
);
var error = Errors.forCode(partition.errorCode());
- if (error != Errors.NONE) {
+ if (error != Errors.NONE || topicName == null) {
log.debug("Failed to fetch offset for partition {}:
{}", tp, error.message());
Review Comment:
this log will look odd when error=NONE and topicName=null (I expect a tp without a name and an empty error message). Should we differentiate the two cases?
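For instance, the null-name case could get its own message (just a sketch reusing the variables from the diff above; the wording is illustrative, not existing log text):
```java
if (topicName == null) {
    // Topic id from the response could not be resolved to a name in local metadata.
    log.debug("Failed to fetch offset for partition index {} of topic id {}: topic id not found in metadata",
        partition.partitionIndex(), topic.topicId());
} else if (error != Errors.NONE) {
    log.debug("Failed to fetch offset for partition {}: {}", tp, error.message());
}
```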
##########
clients/src/test/java/org/apache/kafka/clients/consumer/internals/CommitRequestManagerTest.java:
##########
@@ -751,6 +751,59 @@ public void testOffsetFetchRequestEnsureDuplicatedRequestSucceed() {
assertEmptyPendingRequests(commitRequestManager);
}
+ @Test
+ public void testOffsetFetchRequestShouldSucceedWithTopicId() {
+ CommitRequestManager commitRequestManager = create(true, 100);
+ when(coordinatorRequestManager.coordinator()).thenReturn(Optional.of(mockedNode));
+ Uuid topicId = Uuid.randomUuid();
+ when(metadata.topicIds()).thenReturn(Map.of("t1", topicId));
+ when(metadata.topicNames()).thenReturn(Map.of(topicId, "t1"));
+ Set<TopicPartition> partitions = new HashSet<>();
+ partitions.add(new TopicPartition("t1", 0));
+
+ List<CompletableFuture<Map<TopicPartition, OffsetAndMetadata>>> futures = sendAndVerifyDuplicatedOffsetFetchRequests(
+ commitRequestManager,
+ partitions,
+ 2,
+ Errors.NONE,
+ true,
+ topicId);
+ futures.forEach(f -> {
+ assertTrue(f.isDone());
+ assertFalse(f.isCompletedExceptionally());
+ });
+ // expecting the buffers to be emptied after being completed successfully
+ commitRequestManager.poll(0);
+ assertEmptyPendingRequests(commitRequestManager);
+ }
+
+ @Test
+ public void testOffsetFetchRequestShouldFailedWithTopicIdWhenMetadataUnknownResponseTopicId() {
Review Comment:
typo, shouldFailWith
##########
clients/src/test/java/org/apache/kafka/clients/consumer/internals/CommitRequestManagerTest.java:
##########
@@ -751,6 +751,59 @@ public void testOffsetFetchRequestEnsureDuplicatedRequestSucceed() {
assertEmptyPendingRequests(commitRequestManager);
}
+ @Test
+ public void testOffsetFetchRequestShouldSucceedWithTopicId() {
+ CommitRequestManager commitRequestManager = create(true, 100);
+ when(coordinatorRequestManager.coordinator()).thenReturn(Optional.of(mockedNode));
+ Uuid topicId = Uuid.randomUuid();
+ when(metadata.topicIds()).thenReturn(Map.of("t1", topicId));
+ when(metadata.topicNames()).thenReturn(Map.of(topicId, "t1"));
+ Set<TopicPartition> partitions = new HashSet<>();
+ partitions.add(new TopicPartition("t1", 0));
+
+ List<CompletableFuture<Map<TopicPartition, OffsetAndMetadata>>> futures = sendAndVerifyDuplicatedOffsetFetchRequests(
+ commitRequestManager,
+ partitions,
+ 2,
+ Errors.NONE,
+ true,
+ topicId);
+ futures.forEach(f -> {
+ assertTrue(f.isDone());
+ assertFalse(f.isCompletedExceptionally());
+ });
+ // expecting the buffers to be emptied after being completed successfully
+ commitRequestManager.poll(0);
+ assertEmptyPendingRequests(commitRequestManager);
+ }
+
+ @Test
+ public void testOffsetFetchRequestShouldFailedWithTopicIdWhenMetadataUnknownResponseTopicId() {
+ CommitRequestManager commitRequestManager = create(true, 100);
+ when(coordinatorRequestManager.coordinator()).thenReturn(Optional.of(mockedNode));
+ Uuid topicId = Uuid.randomUuid();
+ when(metadata.topicIds()).thenReturn(Map.of("t1", topicId));
+ // Mock the scenario where the topicID from the response is not in the metadata.
+ when(metadata.topicNames()).thenReturn(Map.of());
+ Set<TopicPartition> partitions = new HashSet<>();
+ partitions.add(new TopicPartition("t1", 0));
+
+ List<CompletableFuture<Map<TopicPartition, OffsetAndMetadata>>> futures = sendAndVerifyDuplicatedOffsetFetchRequests(
+ commitRequestManager,
+ partitions,
+ 2,
+ Errors.NONE,
+ true,
+ topicId);
+ futures.forEach(f -> {
+ assertTrue(f.isDone());
+ assertTrue(f.isCompletedExceptionally());
+ });
Review Comment:
should we extend this to validate that it completes with the error we expect (and all the other actions we expect on error)? We could maybe reuse the validation we already do for the error cases (those don't include topic IDs, so they don't cover this case):
https://github.com/apache/kafka/blob/169e21199791d02d195a739044734b89847266af/clients/src/test/java/org/apache/kafka/clients/consumer/internals/CommitRequestManagerTest.java#L769
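As a first step, the assertions could check the actual failure cause instead of only isCompletedExceptionally (a sketch; the expected exception type here, UnknownTopicOrPartitionException, is an assumption about what the manager propagates when the topic id can't be resolved, and it needs imports for ExecutionException and the Kafka error class):
```java
futures.forEach(f -> {
    assertTrue(f.isDone());
    // Verify the future failed with the specific error we expect, not just "exceptionally".
    ExecutionException thrown = assertThrows(ExecutionException.class, f::get);
    assertInstanceOf(UnknownTopicOrPartitionException.class, thrown.getCause());
});
```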
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]