Github user tdas commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15820#discussion_r88765764
  
    --- Diff: external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaConsumer.scala ---
    @@ -47,40 +51,190 @@ private[kafka010] case class CachedKafkaConsumer private(
     
       /** Iterator to the already fetched data */
       private var fetchedData = ju.Collections.emptyIterator[ConsumerRecord[Array[Byte], Array[Byte]]]
    -  private var nextOffsetInFetchedData = -2L
    +  private var nextOffsetInFetchedData = UNKNOWN_OFFSET
     
       /**
    -   * Get the record for the given offset, waiting up to timeout ms if IO is necessary.
    -   * Sequential forward access will use buffers, but random access will be horribly inefficient.
    +   * Get the record for the given offset if available. Otherwise it will either throw an error
    +   * (if failOnDataLoss = true), or return the next available offset within [offset, untilOffset).
    +   *
    +   * @param offset the offset to fetch.
    +   * @param untilOffset the max offset to fetch. Exclusive.
    +   * @param pollTimeoutMs timeout in milliseconds to poll data from Kafka.
    +   * @param failOnDataLoss When `failOnDataLoss` is `true`, this method will either return record at
    +   *                       offset if available, or throw an exception. When `failOnDataLoss` is `false`,
    +   *                       this method will either return record at offset if available, or return
    +   *                       the next earliest available record less than untilOffset, or null. It
    +   *                       will not throw any exception.
        */
    -  def get(offset: Long, pollTimeoutMs: Long): ConsumerRecord[Array[Byte], Array[Byte]] = {
    +  def get(
    +      offset: Long,
    +      untilOffset: Long,
    +      pollTimeoutMs: Long,
    +      failOnDataLoss: Boolean): ConsumerRecord[Array[Byte], Array[Byte]] = {
    +    require(offset < untilOffset,
    +      s"offset must always be less than untilOffset [offset: $offset, untilOffset: $untilOffset]")
         logDebug(s"Get $groupId $topicPartition nextOffset $nextOffsetInFetchedData requested $offset")
    -    if (offset != nextOffsetInFetchedData) {
    -      logInfo(s"Initial fetch for $topicPartition $offset")
    -      seek(offset)
    -      poll(pollTimeoutMs)
    +    var toFetchOffset = offset
    +    while (toFetchOffset != UNKNOWN_OFFSET) {
    --- End diff --
    
    add docs for this logic.
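
    For context, the intent of the retry loop could be documented with a sketch
    along these lines. This is only an illustration of the control flow, not the
    PR's actual code; the helpers `fetchRecord` and `earliestAvailableOffset`
    are hypothetical placeholders for the consumer's internal fetch and
    offset-recovery steps.

```scala
// Hypothetical sketch of the fetch-retry loop, for documentation only.
object FetchLoopSketch {
  val UNKNOWN_OFFSET = -2L

  // Placeholder: returns Some(record) if `offset` is still in the Kafka log,
  // or None if it has been aged out (data loss).
  def fetchRecord(offset: Long): Option[String] = ???

  // Placeholder: earliest offset still available in [offset, untilOffset),
  // or UNKNOWN_OFFSET if the whole range has been lost.
  def earliestAvailableOffset(offset: Long, untilOffset: Long): Long = ???

  def get(offset: Long, untilOffset: Long, failOnDataLoss: Boolean): Option[String] = {
    var toFetchOffset = offset
    // Loop until a record is found or no candidate offset remains.
    while (toFetchOffset != UNKNOWN_OFFSET) {
      fetchRecord(toFetchOffset) match {
        case Some(record) =>
          return Some(record)
        case None =>
          // The requested offset was lost: either fail fast, or skip forward
          // to the next offset that still exists and retry.
          if (failOnDataLoss) {
            throw new IllegalStateException(s"Offset $toFetchOffset was lost")
          }
          toFetchOffset = earliestAvailableOffset(toFetchOffset, untilOffset)
      }
    }
    None // entire range [offset, untilOffset) was lost and failOnDataLoss = false
  }
}
```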

