chirag-wadhwa5 commented on code in PR #17573:
URL: https://github.com/apache/kafka/pull/17573#discussion_r1814721833
##########
core/src/test/scala/unit/kafka/server/ShareFetchAcknowledgeRequestTest.scala:
##########
@@ -113,9 +111,8 @@ class ShareFetchAcknowledgeRequestTest(cluster: ClusterInstance) extends GroupCo
     // Send the share fetch request to the non-replica and verify the error code
     val shareFetchRequest = createShareFetchRequest(groupId, metadata, MAX_PARTITION_BYTES, send, Seq.empty, Map.empty)
     val shareFetchResponse = connectAndReceive[ShareFetchResponse](shareFetchRequest, nonReplicaId)
-    val partitionData = shareFetchResponse.responseData(topicNames).get(topicIdPartition)
-    assertEquals(Errors.NOT_LEADER_OR_FOLLOWER.code, partitionData.errorCode)
-    assertEquals(leader, partitionData.currentLeader().leaderId())
+    // Top level error thrown while fetching the "LATEST" offset for the partition during share partition initialization
+    assertEquals(Errors.NOT_LEADER_OR_FOLLOWER.code, shareFetchResponse.data().errorCode())
Review Comment:
Thanks for the review. Previously, this error was thrown from KafkaApis.scala, because we were simply initializing the startOffset to 0 (the persister always sends the default value for the start offset). After the config change, the share partition cannot even be initialized: either offsetForEarliestTimestamp or offsetForLatestTimestamp throws an error, because the current broker is not the leader for that partition. I think this should be a top-level error code, since the failure happens during share partition initialization. Let me know what you think, thanks!
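To illustrate the distinction being discussed, here is a minimal, self-contained Scala sketch. The names (`handleShareFetch`, `fetchLatestOffset`, `ShareFetchResult`) are hypothetical and not Kafka's actual internals; the point is only the control flow: when the offset lookup fails during initialization (non-leader broker), no per-partition data exists yet, so the error can only be reported at the top level of the response.

```scala
// Hypothetical sketch of the error-propagation flow described above.
// None of these names come from the Kafka codebase; they only model
// "initialization failed => top-level error, no per-partition errors".
object SharePartitionInitSketch {
  final case class ShareFetchResult(
    topLevelError: String,              // error for the whole response
    partitionErrors: Map[Int, String]   // per-partition errors, if init succeeded
  )

  // Stand-in for offsetForLatestTimestamp: fails when this broker
  // is not the leader for the partition.
  def fetchLatestOffset(isLeader: Boolean): Either[String, Long] =
    if (isLeader) Right(42L) else Left("NOT_LEADER_OR_FOLLOWER")

  def handleShareFetch(isLeader: Boolean, partitions: Seq[Int]): ShareFetchResult =
    fetchLatestOffset(isLeader) match {
      // Initialization failed before any partition state existed,
      // so the error surfaces at the top level only.
      case Left(err) => ShareFetchResult(err, Map.empty)
      // Initialization succeeded; partitions report their own status.
      case Right(_)  => ShareFetchResult("NONE", partitions.map(_ -> "NONE").toMap)
    }

  def main(args: Array[String]): Unit = {
    val res = handleShareFetch(isLeader = false, partitions = Seq(0))
    println(res.topLevelError) // prints NOT_LEADER_OR_FOLLOWER
  }
}
```

In the old behavior, the equivalent of the `Right` branch always ran (startOffset defaulted to 0), so the error could only be attached per partition; after the config change, the `Left` branch is taken before any partition data is built.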
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]