adixitconfluent commented on code in PR #20246:
URL: https://github.com/apache/kafka/pull/20246#discussion_r2447161718
##########
core/src/main/java/kafka/server/share/SharePartition.java:
##########
@@ -817,9 +832,13 @@ public ShareAcquiredRecords acquire(
                 // Do not send max fetch records to acquireSubsetBatchRecords as we want to acquire
                 // all the records from the batch as the batch will anyway be part of the file-records
                 // response batch.
-                int acquiredSubsetCount = acquireSubsetBatchRecords(memberId, firstBatch.baseOffset(), lastOffsetToAcquire, inFlightBatch, result);
+                int acquiredSubsetCount = acquireSubsetBatchRecords(memberId, isRecordLimitMode, maxFetchRecords, firstBatch.baseOffset(), lastOffsetToAcquire, inFlightBatch, result);
Review Comment:
This comment should be updated now that we do pass `maxFetchRecords` to the method.
##########
core/src/main/java/kafka/server/share/SharePartition.java:
##########
@@ -1596,9 +1628,45 @@ private ShareAcquiredRecords acquireNewBatchRecords(
         }
     }
+    private AcquiredRecords filterShareAcquiredRecordsInRecordLimitMode(int maxFetchRecords, List<AcquiredRecords> acquiredRecords) {
+        // Only acquire one single batch in record limit mode.
+        AcquiredRecords records = acquiredRecords.get(0);
+        InFlightBatch inFlightBatch = cachedState.get(records.firstOffset());
+        long lastOffset = records.firstOffset() + maxFetchRecords - 1;
+        if (inFlightBatch != null) {
+            // Initialize the offset state map if not already initialized.
+            inFlightBatch.maybeInitializeOffsetStateUpdate();
+            List<PersisterBatch> persisterBatches = new ArrayList<>();
+            CompletableFuture<Void> future = new CompletableFuture<>();
+            NavigableMap<Long, InFlightState> offsetStateMap = inFlightBatch.offsetState();
+            for (Map.Entry<Long, InFlightState> offsetState : offsetStateMap.tailMap(lastOffset, false).entrySet()) {
+                if (offsetState.getValue().state() == RecordState.ACQUIRED) {
Review Comment:
1. There should probably be some logging in the `else` branch corresponding to this `if`, so that offsets skipped in other states are visible.
2. Question: do we need to adjust the `endOffset` in this function?
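To illustrate the first point, here is a minimal standalone sketch (hypothetical names, not the actual `SharePartition` code) of walking the `tailMap` and logging entries that are skipped because they are not in the `ACQUIRED` state:

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class TailMapSkipLogging {
    enum RecordState { ACQUIRED, AVAILABLE, ACKNOWLEDGED }

    // Counts ACQUIRED offsets strictly after lastOffset, mirroring
    // offsetStateMap.tailMap(lastOffset, false) in the PR, and logs the rest.
    public static int countAcquiredAfter(NavigableMap<Long, RecordState> offsetStateMap, long lastOffset) {
        int acquired = 0;
        for (Map.Entry<Long, RecordState> e : offsetStateMap.tailMap(lastOffset, false).entrySet()) {
            if (e.getValue() == RecordState.ACQUIRED) {
                acquired++;
            } else {
                // The review suggestion: make the skipped states visible.
                System.out.printf("Skipping offset %d in state %s while trimming acquisition%n",
                    e.getKey(), e.getValue());
            }
        }
        return acquired;
    }

    public static void main(String[] args) {
        NavigableMap<Long, RecordState> map = new TreeMap<>();
        map.put(10L, RecordState.ACQUIRED);
        map.put(11L, RecordState.ACQUIRED);
        map.put(12L, RecordState.ACKNOWLEDGED);
        map.put(13L, RecordState.ACQUIRED);
        // tailMap(10, false) considers offsets 11..13 only.
        System.out.println(countAcquiredAfter(map, 10L)); // 2
    }
}
```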
##########
core/src/main/java/kafka/server/share/SharePartition.java:
##########
@@ -1596,9 +1628,45 @@ private ShareAcquiredRecords acquireNewBatchRecords(
         }
     }
+    private AcquiredRecords filterShareAcquiredRecordsInRecordLimitMode(int maxFetchRecords, List<AcquiredRecords> acquiredRecords) {
Review Comment:
This function should be executed while holding the write lock.
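The locking concern can be sketched with the standard `ReentrantReadWriteLock` pattern (illustrative only; the actual field and method names in `SharePartition` differ):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class WriteLockSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private long endOffset = 0;

    // Mutating shared state (e.g. cached batches) must happen under the
    // write lock so that concurrent acquire/acknowledge calls stay consistent.
    public void updateUnderWriteLock(long newEndOffset) {
        lock.writeLock().lock();
        try {
            endOffset = newEndOffset;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public long endOffset() {
        lock.readLock().lock();
        try {
            return endOffset;
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        WriteLockSketch s = new WriteLockSketch();
        s.updateUnderWriteLock(42L);
        System.out.println(s.endOffset()); // 42
    }
}
```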
##########
core/src/main/java/kafka/server/share/SharePartition.java:
##########
@@ -1709,6 +1780,10 @@ private int acquireSubsetBatchRecords(
                     .setLastOffset(offsetState.getKey())
                     .setDeliveryCount((short) offsetState.getValue().deliveryCount()));
                 acquiredCount++;
+                if (isRecordLimitMode && acquiredCount >= maxFetchRecords) {
Review Comment:
Should the condition `acquiredCount > maxFetchRecords` even be reachable? Since `acquiredCount` is incremented by one per iteration, I think the check can only ever fire at `acquiredCount == maxFetchRecords`.
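A tiny standalone sketch (hypothetical helper, not the PR's code) showing why `>=` and `==` coincide here when the counter grows by exactly one per iteration:

```java
public class LimitCheckSketch {
    // Counts how many records get acquired before the limit check fires,
    // mirroring the `acquiredCount >= maxFetchRecords` break in the PR.
    static int acquireUpTo(int available, int maxFetchRecords) {
        int acquiredCount = 0;
        for (int i = 0; i < available; i++) {
            acquiredCount++;
            // acquiredCount grows by exactly one per iteration, so `>=`
            // first becomes true precisely when acquiredCount == maxFetchRecords;
            // the two conditions behave identically and `>=` is merely defensive.
            if (acquiredCount >= maxFetchRecords) {
                break;
            }
        }
        return acquiredCount;
    }

    public static void main(String[] args) {
        System.out.println(acquireUpTo(10, 4)); // 4: limit reached exactly
        System.out.println(acquireUpTo(3, 4));  // 3: batch exhausted first
    }
}
```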
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]