[jira] [Commented] (SPARK-24284) java.util.NoSuchElementException in Spark Streaming with Kafka
[ https://issues.apache.org/jira/browse/SPARK-24284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16760769#comment-16760769 ]

Gabor Somogyi commented on SPARK-24284:
---
[~ujjalsatpa...@gmail.com] On 1.6.3 CachedKafkaConsumer doesn't exist, so the stacktrace doesn't match the code. What is the exact version? A little more info would be beneficial, like driver + executor logs etc.

> java.util.NoSuchElementException in Spark Streaming with Kafka
> --
>
> Key: SPARK-24284
> URL: https://issues.apache.org/jira/browse/SPARK-24284
> Project: Spark
> Issue Type: Bug
> Components: DStreams
> Affects Versions: 1.6.3
> Environment: Spark Streaming 1.6.3
>
> Reporter: Ujjal Satpathy
> Priority: Major
>
> Hi,
> I am getting the below error while running Spark Streaming with Kafka. Though the issue is not consistent, it causes many of the batches to fail.
>
> Job aborted due to stage failure: Task 85 in stage 5914.0 failed 4 times, most recent failure: Lost task 85.3 in stage 5914.0 :
> java.util.NoSuchElementException
> at java.util.ArrayDeque.getLast(ArrayDeque.java:328)
> at org.apache.kafka.common.record.MemoryRecords$RecordsIterator.<init>(MemoryRecords.java:275)
> at org.apache.kafka.common.record.MemoryRecords$RecordsIterator.makeNext(MemoryRecords.java:318)
> at org.apache.kafka.common.record.MemoryRecords$RecordsIterator.makeNext(MemoryRecords.java:223)
> at org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79)
> at org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45)
> at org.apache.kafka.clients.consumer.internals.Fetcher.parseFetchedData(Fetcher.java:545)
> at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:354)
> at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1000)
> at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:938)
> at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.poll(CachedKafkaConsumer.scala:98)
> at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:72)
> at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.next(KafkaRDD.scala:227)
> at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.next(KafkaRDD.scala:193)
> at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply$mcV$sp(PairRDDFunctions.scala:1196)
> at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply(PairRDDFunctions.scala:1195)
> at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply(PairRDDFunctions.scala:1195)
> at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1277)
> at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1203)
> at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1183)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> at org.apache.spark.scheduler.Task.run(Task.scala:89)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Driver stacktrace:

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Commented] (SPARK-24284) java.util.NoSuchElementException in Spark Streaming with Kafka
[ https://issues.apache.org/jira/browse/SPARK-24284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16760775#comment-16760775 ]

Gabor Somogyi commented on SPARK-24284:
---
This code part has been rewritten in 2.4. Is it possible to retest with it?
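Since the failure surfaces inside CachedKafkaConsumer, a diagnostic step while retesting could be to disable the kafka010 integration's executor-side consumer cache. This is a sketch only, not something suggested in this thread: `spark.streaming.kafka.consumer.cache.enabled` is the config documented in the Spark Streaming + Kafka 0.10 integration guide, while the main class and jar names below are hypothetical placeholders.

```shell
# Sketch, assuming the kafka010 (DStreams) integration on Spark 2.x.
# Disabling the executor-side consumer cache avoids CachedKafkaConsumer
# reuse across batches while the failure is being diagnosed, at the cost
# of re-creating a Kafka consumer per partition per batch.
# (com.example.StreamingJob and streaming-job.jar are hypothetical.)
spark-submit \
  --conf spark.streaming.kafka.consumer.cache.enabled=false \
  --class com.example.StreamingJob \
  streaming-job.jar
```

If the NoSuchElementException disappears with caching off, that would point at stale cached consumer state rather than the broker or the data itself.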