vamossagar12 commented on PR #13646:
URL: https://github.com/apache/kafka/pull/13646#issuecomment-1527703722

   Yeah, I agree with @yashmayya. Moreover, this
   
   ```
   Yes it will fail, but consumeAll is not failing due to timeout here but 
rather due to its nature of storing the end offsets before consuming.
   ```
   
   is not entirely correct, I think. I agree that what gets thrown is an 
AssertionError, but that's because the number of sourceRecords returned by 
`consumeAll` didn't meet the desired count within 60s. For starters, can you 
try increasing `CONSUME_RECORDS_TIMEOUT_MS` to 100s or so and see if it even 
works? Basically, we need to check whether the consumer is lagging or whether 
enough records are being produced. I think it would mostly be the former 
because, as Yash said, we are anyway waiting for 100 records to be committed. 
It's not an ideal fix, but let's first see if it works and, if needed, we can 
dig deeper.
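
   To illustrate the failure mode being discussed, here is a minimal sketch 
(not the actual Kafka test-utils code) of a `consumeAll`-style helper that 
polls until an expected record count is reached or a deadline passes, and 
throws an `AssertionError` otherwise. The names `consumeAllSketch`, the 
`Supplier` poller, and the timeout value are all hypothetical stand-ins:

   ```java
   import java.time.Duration;
   import java.time.Instant;
   import java.util.ArrayList;
   import java.util.List;
   import java.util.function.Supplier;

   public class ConsumeAllSketch {
       // Hypothetical sketch: poll for records until `expected` are seen or
       // the timeout elapses; throw AssertionError on a shortfall, mirroring
       // the failure described above when the consumer lags behind.
       static List<Integer> consumeAllSketch(Supplier<List<Integer>> poller,
                                             int expected, long timeoutMs) {
           List<Integer> records = new ArrayList<>();
           Instant deadline = Instant.now().plus(Duration.ofMillis(timeoutMs));
           while (records.size() < expected && Instant.now().isBefore(deadline)) {
               records.addAll(poller.get());
           }
           if (records.size() < expected) {
               // A slow consumer (or too few produced records) surfaces here,
               // which is why raising the timeout is a useful first diagnostic.
               throw new AssertionError("Expected " + expected
                       + " records but got only " + records.size());
           }
           return records;
       }

       public static void main(String[] args) {
           // Each poll returns two records, so four polls satisfy expected=8
           // well within the (hypothetical) timeout.
           List<Integer> got = consumeAllSketch(() -> List.of(1, 2), 8, 60_000);
           System.out.println("consumed " + got.size() + " records");
       }
   }
   ```

   In this simplified model, bumping `timeoutMs` only helps when records are 
still arriving, which is exactly what the suggested experiment with 
`CONSUME_RECORDS_TIMEOUT_MS` would tell us.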


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org