GitHub user sirishaSindri opened a pull request:

    https://github.com/apache/spark/pull/20836

    SPARK-23685: Fix for the Spark Structured Streaming Kafka 0.10 Consumer Can't Handle Non-consecutive Offsets
    
    
    ## What changes were proposed in this pull request?
    In `fetchData`, instead of throwing an exception when `failOnDataLoss` is enabled, return the record if its offset falls within the user-requested offset range.
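    The proposed behavior can be sketched roughly as follows. This is an illustrative sketch, not the actual patch: `Record` and `fetchInRange` are hypothetical names standing in for the consumer's `fetchData` logic, which handles gaps left by log compaction or transaction markers.

    ```scala
    // Hypothetical model of a fetched Kafka record (offset plus payload).
    case class Record(offset: Long, value: String)

    // Sketch of the proposed behavior: keep any record whose offset lies in the
    // caller's requested [fromOffset, untilOffset) range, even when offsets are
    // non-consecutive. Only fail when failOnDataLoss is set and records are missing.
    def fetchInRange(records: Seq[Record],
                     fromOffset: Long,
                     untilOffset: Long,
                     failOnDataLoss: Boolean): Seq[Record] = {
      val inRange = records.filter(r => r.offset >= fromOffset && r.offset < untilOffset)
      // With consecutive offsets we would expect exactly untilOffset - fromOffset
      // records; a shortfall indicates compaction gaps (or genuine data loss).
      if (failOnDataLoss && inRange.size < (untilOffset - fromOffset)) {
        throw new IllegalStateException(
          s"Missing offsets in [$fromOffset, $untilOffset); " +
            "set failOnDataLoss=false to tolerate gaps")
      }
      inRange
    }
    ```

    For a compacted topic retaining offsets 0, 1, 3, 4, requesting the range [0, 5) with `failOnDataLoss=false` would return all four surviving records instead of raising an error at the missing offset 2.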
    ## How was this patch tested?
    Manually tested, added a unit test, and ran in a real deployment.


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/sirishaSindri/spark SPARK-23685

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/20836.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #20836
    
----
commit 5ccfed840f9cf9cd1c28a309b934e1285332d04d
Author: z001k5c <sirisha.sindiri@...>
Date:   2018-03-15T15:53:14Z

    SPARK-23685 : Fix for the Spark Structured Streaming Kafka 0.10 Consumer Can't Handle Non-consecutive Offsets

commit 7e08cd9062683c062b7b0408ffe40ff726249909
Author: z001k5c <sirisha.sindiri@...>
Date:   2018-03-15T17:06:06Z

    SPARK-23685 : Fix for the Spark Structured Streaming Kafka 0.10 Consumer Can't Handle Non-consecutive Offsets

----


---
