chia7712 commented on code in PR #16873:
URL: https://github.com/apache/kafka/pull/16873#discussion_r1735472015
##########
core/src/main/scala/kafka/server/KafkaApis.scala:
##########
@@ -1103,35 +1103,41 @@ class KafkaApis(val requestChannel: RequestChannel,
     val responseTopics = authorizedRequestInfo.map { topic =>
       val responsePartitions = topic.partitions.asScala.map { partition =>
-        val topicPartition = new TopicPartition(topic.name, partition.partitionIndex)
-
-        try {
-          val offsets = replicaManager.legacyFetchOffsetsForTimestamp(
-            topicPartition = topicPartition,
-            timestamp = partition.timestamp,
-            maxNumOffsets = partition.maxNumOffsets,
-            isFromConsumer = offsetRequest.replicaId == ListOffsetsRequest.CONSUMER_REPLICA_ID,
-            fetchOnlyFromLeader = offsetRequest.replicaId != ListOffsetsRequest.DEBUGGING_REPLICA_ID)
+        if (partition.timestamp() < ListOffsetsRequest.EARLIEST_TIMESTAMP) {
           new ListOffsetsPartitionResponse()
             .setPartitionIndex(partition.partitionIndex)
-            .setErrorCode(Errors.NONE.code)
-            .setOldStyleOffsets(offsets.map(JLong.valueOf).asJava)
-        } catch {
-          // NOTE: UnknownTopicOrPartitionException and NotLeaderOrFollowerException are special cases since these error messages
-          // are typically transient and there is no value in logging the entire stack trace for the same
-          case e @ (_ : UnknownTopicOrPartitionException |
-                    _ : NotLeaderOrFollowerException |
-                    _ : KafkaStorageException) =>
-            debug("Offset request with correlation id %d from client %s on partition %s failed due to %s".format(
-              correlationId, clientId, topicPartition, e.getMessage))
-            new ListOffsetsPartitionResponse()
-              .setPartitionIndex(partition.partitionIndex)
-              .setErrorCode(Errors.forException(e).code)
-          case e: Throwable =>
-            error("Error while responding to offset request", e)
+            .setErrorCode(Errors.UNSUPPORTED_VERSION.code)
+        } else {
+          val topicPartition = new TopicPartition(topic.name, partition.partitionIndex)
+
+          try {
+            val offsets = replicaManager.legacyFetchOffsetsForTimestamp(
Review Comment:
Dear all, have we reached consensus on an approach for validating the MV for tiered storage?
1. RPC version check: works with dynamic MV, but cannot fail fast.
2. RLM initialization check: fails fast, but cannot catch dynamic MV changes after initialization. BTW, there was an issue (https://issues.apache.org/jira/browse/KAFKA-16790) that changed the initialization order of the RLM and the metadata publisher.
3. Set the min MV to 3.6 when the RLM is enabled: fails fast and handles dynamic MV checks. However, it is a bit odd to change the min MV based on a non-feature version.
4. Ship tiered storage as a feature version? Too large a change to land in 3.9.
5. Let it go. We have announced the MV requirement for tiered storage, so users should be aware of it!
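For context, the RPC-level guard in the diff above (the essence of option 1) can be sketched as a standalone snippet. This is a minimal sketch, not Kafka code: the object, method, and `Outcome` names are hypothetical, and the sentinel values are assumed to mirror `ListOffsetsRequest` (`LATEST_TIMESTAMP = -1`, `EARLIEST_TIMESTAMP = -2`, with lower values reserved for newer sentinels such as the tiered-storage `EARLIEST_LOCAL_TIMESTAMP`).

```scala
// Hypothetical standalone sketch of the v0 ("old-style") ListOffsets guard:
// any timestamp below EARLIEST_TIMESTAMP can only be one of the newer
// sentinels, which old-style requests cannot serve, so the broker answers
// with UNSUPPORTED_VERSION instead of fetching offsets.
object ListOffsetsV0Guard {
  // Sentinel values assumed to mirror ListOffsetsRequest.
  val LatestTimestamp: Long = -1L
  val EarliestTimestamp: Long = -2L
  val EarliestLocalTimestamp: Long = -4L // tiered-storage-only sentinel

  sealed trait Outcome
  case object UnsupportedVersion extends Outcome
  case object ServeRequest extends Outcome

  // Mirrors `partition.timestamp() < ListOffsetsRequest.EARLIEST_TIMESTAMP`
  // from the diff above.
  def validate(timestamp: Long): Outcome =
    if (timestamp < EarliestTimestamp) UnsupportedVersion else ServeRequest

  def main(args: Array[String]): Unit = {
    println(validate(EarliestLocalTimestamp)) // UnsupportedVersion
    println(validate(LatestTimestamp))        // ServeRequest
    println(validate(0L))                     // ServeRequest
  }
}
```

Note that this check rejects per partition at request-handling time, which is why it composes with dynamic MV changes but, as noted in option 1, gives no fail-fast signal at broker startup.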