chia7712 commented on code in PR #21396:
URL: https://github.com/apache/kafka/pull/21396#discussion_r2776230049
##########
coordinator-common/src/main/java/org/apache/kafka/coordinator/common/runtime/CoordinatorRuntime.java:
##########
@@ -879,15 +880,23 @@ private void maybeFlushCurrentBatch(long currentTimeMs) {
}
}
+ private void failCurrentBatch(Throwable t) {
+ failCurrentBatch(t, true);
+ }
+
+ private void failCurrentBatchWithoutRelease(Throwable t) {
+ failCurrentBatch(t, false);
+ }
+
/**
     * Fails the current batch, reverts to the snapshot to the base/start offset of the
     * batch, fails all the associated events.
*/
- private void failCurrentBatch(Throwable t) {
+ private void failCurrentBatch(Throwable t, boolean freeCurrentBatch) {
if (currentBatch != null) {
coordinator.revertLastWrittenOffset(currentBatch.baseOffset);
currentBatch.deferredEvents.complete(t);
- freeCurrentBatch();
+ if (freeCurrentBatch) freeCurrentBatch();
Review Comment:
> I was discussing with @squah-confluent offline and we thought that caching maxBatchSize when the batch is allocated may be easier as it would avoid calling partitionWriter.config(tp) in freeCurrentBatch.

Yes, we could use `int maxBatchSize = currentBatch.maxBatchSize;` to fix the bug. Even if the cached value is not the "latest" configuration, the only downside is that we might miss a buffer-recycling opportunity, which is acceptable.
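For illustration only, a rough sketch of how `freeCurrentBatch` could rely on the cached value instead of calling `partitionWriter.config(tp)`; the field names (`currentBatch.buffer`, `bufferSupplier`) and the release logic are assumptions about the surrounding CoordinatorRuntime code, not the actual implementation:

```java
// Hypothetical sketch inside CoordinatorRuntime; field names and release logic
// are assumed, only the use of the cached maxBatchSize is the point.
private void freeCurrentBatch() {
    // Read the max batch size cached when the batch was allocated instead of
    // calling partitionWriter.config(tp), which can fail once the partition
    // has been deleted or unloaded.
    int maxBatchSize = currentBatch.maxBatchSize;

    // Recycle the buffer only if it has not grown beyond the cached limit.
    // A stale value at worst skips one recycling opportunity.
    if (currentBatch.buffer.capacity() <= maxBatchSize) {
        bufferSupplier.release(currentBatch.buffer);
    }

    currentBatch = null;
}
```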