GitHub user koeninger commented on the issue:

    https://github.com/apache/spark/pull/21917
  
    If the last offset in the range as calculated by the driver is 5, on the 
executor all you can poll up to even after repeated attempts is 3, and the user 
has already told you to allowNonConsecutiveOffsets... then you're done, no error.
    
    Why does it matter whether you apply this logic while reading all the 
messages in advance and counting them, or while actually computing?
    
    To put it another way, this PR is a lot of code change and refactoring; why 
not just change the logic of, e.g., how CompactedKafkaRDDIterator interacts with 
compactedNext? A rough sketch of the kind of termination logic I mean is below.
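
    This is purely illustrative, not the actual implementation: 
`tryCompactedNext`, `untilOffset`, and the constructor shape are stand-ins for 
whatever the real CompactedKafkaRDDIterator/compactedNext code does. The point 
is only that the iterator can stop cleanly at the end of whatever it managed to 
poll, rather than failing, when non-consecutive offsets are allowed.

    ```scala
    import org.apache.kafka.clients.consumer.ConsumerRecord

    // Simplified sketch of an iterator over a compacted-topic offset range.
    class CompactedRangeIterator[K, V](
        untilOffset: Long,
        allowNonConsecutiveOffsets: Boolean,
        // Returns the next record at or after the current position, or None if
        // repeated polls could not produce anything more (e.g. compacted away).
        tryCompactedNext: () => Option[ConsumerRecord[K, V]]
    ) extends Iterator[ConsumerRecord[K, V]] {

      private var buffered: Option[ConsumerRecord[K, V]] = None

      override def hasNext: Boolean = {
        if (buffered.isEmpty) {
          buffered = tryCompactedNext() match {
            // Stop cleanly once we reach the end of the driver-computed range.
            case Some(r) if r.offset >= untilOffset => None
            case Some(r) => Some(r)
            // Nothing more to poll: if non-consecutive offsets are allowed,
            // we're done with no error; otherwise surface the gap as a failure.
            case None if allowNonConsecutiveOffsets => None
            case None => throw new IllegalStateException(
              s"Could not read up to offset $untilOffset on a compacted topic")
          }
        }
        buffered.isDefined
      }

      override def next(): ConsumerRecord[K, V] = {
        if (!hasNext) throw new NoSuchElementException("end of offset range")
        val record = buffered.get
        buffered = None
        record
      }
    }
    ```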


---
