This is a bug report on an internal Spark problem. The exception below kills
the task in which it arises, but Spark somehow seems to recover and soldier
on.
Here is the exception, which occurs completely outside of any user-level code:
java.lang.NullPointerException
2013-10-29 14:38:20 ERROR JobManager - Running streaming job 2425 @ 1383079100000 ms failed
spark.SparkException: Job failed: ShuffleMapTask(3846, 1) failed: ExceptionFailure(java.lang.NullPointerException,java.lang.NullPointerException,[Ljava.lang.StackTraceElement;@4d5c3086)
    at spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:642)
    at spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:640)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:60)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:640)
    at spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:601)
    at spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:300)
    at spark.scheduler.DAGScheduler.spark$scheduler$DAGScheduler$$run(DAGScheduler.scala:364)
    at spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:107)
2013-10-29 14:38:20 ERROR LocalScheduler - Exception in task 1
Thanks,
Craig Vanderborgh