linehrr edited a comment on issue #23324: [SPARK-26267][SS]Retry when detecting incorrect offsets from Kafka
URL: https://github.com/apache/spark/pull/23324#issuecomment-452172192
 
 
   @zsxwing I'm not sure whether this is actually a bug related to this fix, but I can share my stack trace here:
   
   ```
   19/01/07 22:56:15 ERROR streaming.MicroBatchExecution: Query [id = c46c67ee-3514-4788-8370-a696837b21b1, runId = bb52783e-33d6-460b-9aa2-cc5da414531e] terminated with error
   org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 8.0 failed 4 times, most recent failure: Lost task 0.3 in stage 8.0 (TID 164, qa2-hdp-3.acuityads.org, executor 2): java.lang.AssertionError: assertion failed: latest offset -9223372036854775808 does not equal -1
           at scala.Predef$.assert(Predef.scala:170)
           at org.apache.spark.sql.kafka010.KafkaMicroBatchInputPartitionReader.resolveRange(KafkaMicroBatchReader.scala:371)
           at org.apache.spark.sql.kafka010.KafkaMicroBatchInputPartitionReader.<init>(KafkaMicroBatchReader.scala:329)
           at org.apache.spark.sql.kafka010.KafkaMicroBatchInputPartition.createPartitionReader(KafkaMicroBatchReader.scala:314)
           at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD.compute(DataSourceRDD.scala:42)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
           at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
           at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
           at org.apache.spark.scheduler.Task.run(Task.scala:121)
           at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
           at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
           at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
           at java.lang.Thread.run(Thread.java:745)
   
   ```
   For some reason, it looks like fetchLatestOffset returned Long.MIN_VALUE for one of the partitions. I checked the structured streaming checkpoint, and it was correct; it was the currentAvailableOffset that was set to Long.MIN_VALUE.
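   
   To see why a Long.MIN_VALUE offset trips that assertion, here is a simplified sketch of the sentinel check that resolveRange appears to perform, reconstructed from the assertion message above rather than copied from the Spark source: any negative "until" offset is expected to be the LATEST sentinel (-1), so a stray Long.MIN_VALUE fails the check.
   
   ```
   // Simplified sketch, reconstructed from the assertion message in the
   // stack trace above; names mirror the Spark Kafka source but this is
   // not the actual implementation.
   object KafkaOffsetRangeLimit {
     val LATEST = -1L  // sentinel meaning "resolve to the latest available offset"
   }
   
   def resolveUntilOffset(untilOffset: Long, latestAvailable: Long): Long = {
     if (untilOffset < 0) {
       // Any negative offset must be the LATEST sentinel. A Long.MIN_VALUE
       // leaking in here fails with exactly the message from the log:
       // "latest offset -9223372036854775808 does not equal -1"
       assert(untilOffset == KafkaOffsetRangeLimit.LATEST,
         s"latest offset $untilOffset does not equal ${KafkaOffsetRangeLimit.LATEST}")
       latestAvailable
     } else {
       untilOffset
     }
   }
   ```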
   
   Kafka broker version: 1.1.0.
   Library we used:
   ```
   // https://mvnrepository.com/artifact/org.apache.spark/spark-sql-kafka-0-10
   libraryDependencies += "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.4.0"
   ```
   
   How to reproduce:
   We started a structured streaming query subscribed to a topic with 4 partitions, then produced some messages into the topic; the job crashed and logged the stack trace above.
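   
   For reference, a minimal sketch of that setup (topic name, broker address, and checkpoint path are placeholders, not our real values):
   
   ```
   import org.apache.spark.sql.SparkSession
   
   val spark = SparkSession.builder().appName("kafka-repro").getOrCreate()
   
   // Subscribe to a 4-partition topic with the Kafka source.
   val df = spark.readStream
     .format("kafka")
     .option("kafka.bootstrap.servers", "broker1:9092")  // placeholder broker
     .option("subscribe", "test-topic")                  // placeholder topic
     .load()
   
   // Any sink reproduces it: the assertion fires while resolving offsets
   // for the micro-batch, before any rows reach the sink.
   val query = df.writeStream
     .format("console")
     .option("checkpointLocation", "/tmp/kafka-repro-checkpoint")  // placeholder
     .start()
   
   query.awaitTermination()
   ```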
