ableegoldman commented on a change in pull request #11424:
URL: https://github.com/apache/kafka/pull/11424#discussion_r740701956



##########
File path: 
streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
##########
@@ -736,11 +734,8 @@ public boolean process(final long wallClockTime) {
             consumedOffsets.put(partition, record.offset());
             commitNeeded = true;
 
-            // after processing this record, if its partition queue's buffered size has been
-            // decreased to the threshold, we can then resume the consumption on this partition
-            if (recordInfo.queue().size() == maxBufferedSize) {
-                mainConsumer.resume(singleton(partition));
-            }

Review comment:
       > Maybe we can add some check that we do this only if maxBufferedSize is set to some value?
   
   Yeah, sorry, I should have been clearer here -- we only need to keep doing this if the user is still setting the `buffered.records.per.partition` config. I mentioned this in another comment, but in case you weren't aware, you can call `originals()` on a StreamsConfig object to get a map of the configs actually passed in by the user -- that way you know what they explicitly set vs. what's just a default.
   
   Then you can either set `maxBufferedSize` to null or define a static constant `NOT_SET = -1`, and only continue doing this partition-level pause/resume if the user is still using `buffered.records.per.partition`. Does that make sense?
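   As a rough sketch of the sentinel idea (not the actual PR code -- `BufferedSizeCheck`, `resolveMaxBufferedSize`, and `shouldResume` are hypothetical names, and a plain `Map` stands in for the map returned by `StreamsConfig#originals()`):

```java
import java.util.Map;

public class BufferedSizeCheck {

    // Sentinel meaning the user did not explicitly set
    // buffered.records.per.partition in their configs.
    static final int NOT_SET = -1;

    // originals() on StreamsConfig returns only the configs the user actually
    // passed in, so absence of the key means they are relying on the default.
    static int resolveMaxBufferedSize(final Map<String, Object> originals) {
        final Object value = originals.get("buffered.records.per.partition");
        return value == null ? NOT_SET : Integer.parseInt(value.toString());
    }

    // Only keep the partition-level pause/resume behavior when the user is
    // still setting the config; otherwise skip it entirely.
    static boolean shouldResume(final int queueSize, final int maxBufferedSize) {
        return maxBufferedSize != NOT_SET && queueSize == maxBufferedSize;
    }
}
```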




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
