artemlivshits commented on code in PR #12462:
URL: https://github.com/apache/kafka/pull/12462#discussion_r947453899
##########
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java:
##########
@@ -273,26 +273,29 @@ public RecordAppendResult append(String topic,
             // check if we have an in-progress batch
             Deque<ProducerBatch> dq = topicInfo.batches.computeIfAbsent(effectivePartition, k -> new ArrayDeque<>());
+            RecordAppendResult appendResult;
             synchronized (dq) {
                 // After taking the lock, validate that the partition hasn't changed and retry.
                 if (topicInfo.builtInPartitioner.isPartitionChanged(partitionInfo)) {
                     log.trace("Partition {} for topic {} switched by a concurrent append, retrying",
                             partitionInfo.partition(), topic);
                     continue;
                 }
-                RecordAppendResult appendResult = tryAppend(timestamp, key, value, headers, callbacks, dq, nowMs);
-                if (appendResult != null) {
+                appendResult = tryAppend(timestamp, key, value, headers, callbacks, dq, nowMs);
+                if (appendResult != null && !appendResult.newBatchCreated) {
                     topicInfo.builtInPartitioner.updatePartitionInfo(partitionInfo, appendResult.appendedBytes, cluster);
                     return appendResult;
                 }
             }
-            // we don't have an in-progress record batch try to allocate a new batch
-            if (abortOnNewBatch) {
-                // Return a result that will cause another call to append.
+            // either 1. current topicPartition producerBatch is full - return and prepare for another batch/partition.
+            // 2. no producerBatch existed for this topicPartition, create a new producerBatch.
+            if (appendResult == null && abortOnNewBatch) {
Review Comment:
It looks like there are still cases where the onNewBatch logic is invoked and the
partition could be switched a second time, and this seems to change the behavior of
DefaultPartitioner: if the batch is sent (removed from the queue) before a new record
is produced, onNewBatch isn't going to get called and the producer will get stuck on
the current partition.
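The scenario above can be sketched with a toy model (an assumption for illustration only; `StickyPartitioner`, `onNewBatch`, and the deque below are simplified stand-ins, not Kafka's actual classes): the sticky partitioner only advances when `onNewBatch` fires, so if the sender drains the in-progress batch before the next record arrives, the switch never happens.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy sticky partitioner: switches partitions only from onNewBatch(),
// mirroring how DefaultPartitioner's sticky behavior is driven.
class StickyPartitioner {
    private int current = 0;
    private final int numPartitions;

    StickyPartitioner(int numPartitions) { this.numPartitions = numPartitions; }

    int partition() { return current; }

    // Invoked only when an append aborts because the current batch is full.
    void onNewBatch() { current = (current + 1) % numPartitions; }
}

public class StuckPartitionDemo {
    public static void main(String[] args) {
        StickyPartitioner p = new StickyPartitioner(3);
        Deque<String> batch = new ArrayDeque<>();
        System.out.println("initial: partition " + p.partition());

        // Batch fills up -> append aborts -> onNewBatch fires -> switch.
        batch.add("record");
        p.onNewBatch();
        System.out.println("after full batch: partition " + p.partition());

        // Sender drains the batch BEFORE the next record is produced.
        batch.clear();

        // The next append sees an empty deque and creates a fresh batch
        // directly, so onNewBatch never fires and the partition is stuck.
        batch.add("record");
        System.out.println("after drained batch: partition " + p.partition());
    }
}
```

With 3 partitions this prints partition 0, then 1, then 1 again, showing the partition staying put once the batch is drained.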
##########
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java:
##########
@@ -273,26 +273,29 @@ public RecordAppendResult append(String topic,
             // check if we have an in-progress batch
             Deque<ProducerBatch> dq = topicInfo.batches.computeIfAbsent(effectivePartition, k -> new ArrayDeque<>());
+            RecordAppendResult appendResult;
             synchronized (dq) {
                 // After taking the lock, validate that the partition hasn't changed and retry.
                 if (topicInfo.builtInPartitioner.isPartitionChanged(partitionInfo)) {
                     log.trace("Partition {} for topic {} switched by a concurrent append, retrying",
                             partitionInfo.partition(), topic);
                     continue;
                 }
-                RecordAppendResult appendResult = tryAppend(timestamp, key, value, headers, callbacks, dq, nowMs);
-                if (appendResult != null) {
+                appendResult = tryAppend(timestamp, key, value, headers, callbacks, dq, nowMs);
+                if (appendResult != null && !appendResult.newBatchCreated) {
Review Comment:
I see, missed the new case in tryAppend
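The "new case in tryAppend" can be sketched as follows (an assumption for illustration; this `tryAppend` and `AppendResult` are simplified stand-ins, not Kafka's actual signatures): once tryAppend can itself create a batch, a non-null result no longer implies the record landed in an in-progress batch, which is why the caller must also check `newBatchCreated`.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TryAppendSketch {
    // Simplified result type carrying the flag the caller must inspect.
    static class AppendResult {
        final boolean newBatchCreated;
        AppendResult(boolean newBatchCreated) { this.newBatchCreated = newBatchCreated; }
    }

    // Toy tryAppend over string "batches" with a 16-char capacity.
    static AppendResult tryAppend(Deque<StringBuilder> dq, String record) {
        StringBuilder last = dq.peekLast();
        if (last != null && last.length() + record.length() <= 16) {
            last.append(record);                // appended to in-progress batch
            return new AppendResult(false);
        }
        // New case: no batch (or no room) -> create a fresh batch and still
        // return a non-null result, flagged so the caller can tell the cases apart.
        dq.addLast(new StringBuilder(record));
        return new AppendResult(true);
    }

    public static void main(String[] args) {
        Deque<StringBuilder> dq = new ArrayDeque<>();
        System.out.println("newBatchCreated=" + tryAppend(dq, "hello").newBatchCreated);
        System.out.println("newBatchCreated=" + tryAppend(dq, "world").newBatchCreated);
    }
}
```

The first append creates a batch (flag is true); the second fits in the existing batch (flag is false), which is the distinction the `!appendResult.newBatchCreated` check in the diff relies on.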
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]