Github user sirishaSindri commented on the issue:
https://github.com/apache/spark/pull/20836
Originally this PR was created because "failOnDataLoss" doesn't have any impact
when set in Structured Streaming. But it was found that the variable that needs
to be used is "
Github user sirishaSindri closed the pull request at:
https://github.com/apache/spark/pull/20836
---
Github user sirishaSindri commented on a diff in the pull request:
https://github.com/apache/spark/pull/20836#discussion_r177276642
--- Diff: external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala ---
@@ -279,9 +279,8 @@ private[kafka010
GitHub user sirishaSindri opened a pull request:
https://github.com/apache/spark/pull/20836
SPARK-23685: Fix for the Spark Structured Streaming Kafka 0.10 Consumer Can't Handle Non-consecutive Offsets
## What changes were proposed in this pull request?
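(The PR description is truncated here. As background for the issue the title names: on a log-compacted or transactional Kafka topic, offsets are not necessarily consecutive, so a consumer that requests offset N may receive a record whose offset is greater than N. The following is a minimal, hypothetical sketch of gap-tolerant offset handling, not the PR's actual diff; `OffsetGapSketch`, `Record`, and `consume` are illustrative names, not Spark or Kafka APIs.)

```scala
// Hypothetical sketch: a consumer loop that asserts
// record.offset == requestedOffset fails on compacted topics.
// A gap-tolerant version advances to the offset it actually received.
object OffsetGapSketch {
  final case class Record(offset: Long, value: String)

  // Walk a fetched batch starting at `fromOffset`, tolerating gaps:
  // returns the consumed values and the next offset to request.
  def consume(batch: Seq[Record], fromOffset: Long): (Seq[String], Long) = {
    val visible = batch.filter(_.offset >= fromOffset)
    val next = visible.lastOption.map(_.offset + 1).getOrElse(fromOffset)
    (visible.map(_.value), next)
  }
}
```

For example, a batch with records at offsets 5 and 8 (offsets 6 and 7 compacted away) should consume both records and request offset 9 next, rather than failing on the missing offsets.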