artemlivshits commented on code in PR #12462:
URL: https://github.com/apache/kafka/pull/12462#discussion_r938985421


##########
clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java:
##########
@@ -273,26 +273,29 @@ public RecordAppendResult append(String topic,
 
                 // check if we have an in-progress batch
                 Deque<ProducerBatch> dq = topicInfo.batches.computeIfAbsent(effectivePartition, k -> new ArrayDeque<>());
+                RecordAppendResult appendResult;
                 synchronized (dq) {
                     // After taking the lock, validate that the partition hasn't changed and retry.
                     if (topicInfo.builtInPartitioner.isPartitionChanged(partitionInfo)) {
                         log.trace("Partition {} for topic {} switched by a concurrent append, retrying",
                                 partitionInfo.partition(), topic);
                         continue;
                     }
-                    RecordAppendResult appendResult = tryAppend(timestamp, key, value, headers, callbacks, dq, nowMs);
-                    if (appendResult != null) {
+                    appendResult = tryAppend(timestamp, key, value, headers, callbacks, dq, nowMs);
+                    if (appendResult != null && !appendResult.newBatchCreated) {

Review Comment:
   The built-in partitioner needs to know the size of every append; otherwise it can't switch partitions properly. In particular, if records are larger than the batch size, every append creates a new batch, and it looks like the produced size won't be updated, so the partition will never get switched. Can we add a unit test that checks partitions still get switched when records are larger than the batch size?
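   To make the concern concrete, here is a minimal, self-contained Java sketch (not Kafka code; the class and method names below are hypothetical) of the failure mode: if the per-partition produced-bytes count is only advanced when a record fits into the in-progress batch, records larger than the batch size never advance it and the sticky partition never switches, whereas counting every append rotates partitions as expected.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical, simplified model of a "switch to the next partition after
// roughly batchSize produced bytes" policy. It is NOT the Kafka
// BuiltInPartitioner; it only illustrates why the size of *every* append
// has to be recorded.
class StickyPartitionModel {
    private final int batchSize;
    private final int numPartitions;
    private int currentPartition = 0;
    private int producedBytes = 0;

    StickyPartitionModel(int batchSize, int numPartitions) {
        this.batchSize = batchSize;
        this.numPartitions = numPartitions;
    }

    // Buggy variant: only records that fit an in-progress batch update the
    // byte count, mirroring the concern that the size update is skipped when
    // an oversized record falls through to creating a new batch.
    int appendCountingOnlyInBatchBytes(int recordSize) {
        if (recordSize <= batchSize)
            producedBytes += recordSize;
        maybeSwitch();
        return currentPartition;
    }

    // Fixed variant: every append contributes to the byte count.
    int appendCountingAllBytes(int recordSize) {
        producedBytes += recordSize;
        maybeSwitch();
        return currentPartition;
    }

    private void maybeSwitch() {
        if (producedBytes >= batchSize) {
            currentPartition = (currentPartition + 1) % numPartitions;
            producedBytes = 0;
        }
    }
}

public class OversizedRecordPartitionSwitchDemo {
    public static void main(String[] args) {
        int batchSize = 1024;
        StickyPartitionModel buggy = new StickyPartitionModel(batchSize, 3);
        StickyPartitionModel fixed = new StickyPartitionModel(batchSize, 3);

        Set<Integer> buggyPartitions = new HashSet<>();
        Set<Integer> fixedPartitions = new HashSet<>();
        for (int i = 0; i < 100; i++) {
            int oversized = 2 * batchSize; // record larger than the batch size
            buggyPartitions.add(buggy.appendCountingOnlyInBatchBytes(oversized));
            fixedPartitions.add(fixed.appendCountingAllBytes(oversized));
        }

        System.out.println("buggy variant used partitions: " + buggyPartitions); // stays on [0]
        System.out.println("fixed variant used partitions: " + fixedPartitions); // rotates over [0, 1, 2]
    }
}
```

   A real test in RecordAccumulatorTest would presumably drive the accumulator directly, but the shape of the assertion is the same: append several records larger than batch.size and check that more than one partition is observed.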


