Github user gaborgsomogyi commented on a diff in the pull request:
https://github.com/apache/spark/pull/20836#discussion_r176656905
--- Diff: external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala ---
@@ -279,9 +279,8 @@ private[kafka010] case class InternalKafkaConsumer(
     if (record.offset > offset) {
       // This may happen when some records aged out but their offsets already got verified
       if (failOnDataLoss) {
-        reportDataLoss(true, s"Cannot fetch records in [$offset, ${record.offset})")
--- End diff ---
This just breaks the whole concept: when `failOnDataLoss` is enabled, this exception means the query should fail.
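
For context, here is a minimal, self-contained sketch of the contract the comment defends. The names `failOnDataLoss` and `reportDataLoss` follow the diff above; the throwing/logging split and the `logWarning` helper are assumptions for illustration, not the exact Spark internals.

```scala
// Minimal sketch of the failOnDataLoss contract under discussion.
// The real InternalKafkaConsumer has much more surrounding context.
object DataLossSketch {

  // Hypothetical stand-in for Spark's internal logging facility.
  private def logWarning(msg: String): Unit = Console.err.println(s"WARN $msg")

  // When failOnDataLoss is true, a detected gap must surface as an
  // exception so the streaming query fails; when false, it is only logged
  // and processing continues past the missing range.
  def reportDataLoss(failOnDataLoss: Boolean, message: String): Unit = {
    if (failOnDataLoss) {
      throw new IllegalStateException(message)
    } else {
      logWarning(s"$message. Some data may have been lost.")
    }
  }

  def main(args: Array[String]): Unit = {
    val offset = 100L       // offset we asked Kafka for
    val recordOffset = 105L // offset of the record actually returned
    if (recordOffset > offset) {
      // Records in [offset, recordOffset) aged out of Kafka.
      reportDataLoss(failOnDataLoss = true,
        s"Cannot fetch records in [$offset, $recordOffset)")
    }
  }
}
```

Under this reading, swallowing the exception when `failOnDataLoss` is enabled would silently turn a fail-fast guarantee into the lossy, log-and-continue behavior of the `failOnDataLoss = false` path.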
---