vamossagar12 commented on a change in pull request #11424:
URL: https://github.com/apache/kafka/pull/11424#discussion_r739158573
##########
File path:
streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java
##########
@@ -736,11 +734,8 @@ public boolean process(final long wallClockTime) {
consumedOffsets.put(partition, record.offset());
commitNeeded = true;
-                // after processing this record, if its partition queue's buffered size has been
-                // decreased to the threshold, we can then resume the consumption on this partition
-                if (recordInfo.queue().size() == maxBufferedSize) {
-                    mainConsumer.resume(singleton(partition));
-                }
Review comment:
I see... but if both configs are in play, won't that also lead to partitions
being paused/resumed based on whichever threshold is breached first? Maybe we
can add a check so that we only do this when maxBufferedSize is explicitly set
to some value. We might also want to consider whether it has a default value
and adjust the condition accordingly.
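To illustrate the suggestion, here is a minimal, self-contained sketch of what
such a guard could look like. This is not the actual StreamTask code: the
`UNSET` sentinel, the `shouldResume` helper, and the use of a plain `Deque` in
place of the partition's record queue are all assumptions made for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ResumeCheck {
    // Hypothetical sentinel meaning "maxBufferedSize was not configured".
    static final int UNSET = -1;

    // Only signal a resume when an explicit threshold is configured and the
    // queue has drained back down to exactly that threshold.
    static boolean shouldResume(final Deque<Integer> queue, final int maxBufferedSize) {
        return maxBufferedSize != UNSET && queue.size() == maxBufferedSize;
    }

    public static void main(final String[] args) {
        final Deque<Integer> queue = new ArrayDeque<>();
        queue.add(1);
        queue.add(2);
        // Threshold configured at 2: queue has drained to the threshold.
        System.out.println(shouldResume(queue, 2));
        // Threshold not configured: never resume on this condition.
        System.out.println(shouldResume(queue, UNSET));
    }
}
```

With a guard like this, the size-based resume only fires when the config is
actually in play, so a second, independent threshold would not trigger it.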
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]