> I get the following stack trace with this issue; does anybody have any
> clue? I am running Spark on YARN in cluster mode.
>
> 15/09/16 16:30:28 INFO spark.SparkContext: Starting job: jdbc at AssetMetadataToVertica.java:114
> 15/09/16 16:30:28 INFO scheduler.DAGScheduler: Got job 0 (jdbc at AssetMetadataToVertica.java:114) with 1 output partitions
> 15/09/16 16:30:28 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (jdbc at AssetMetadataToVertica.java:114)
> 15/09/16 16:30:28 INFO scheduler.DAGScheduler: Parents of final stage: List()
> 15/09/16 16:30:28 INFO scheduler.DAGScheduler: Missing parents: List()
> 15/09/16 16:30:28 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[4] at jdbc at AssetMetadataToVertica.java:114), which has no missing parents
> 15/09/16 16:30:28 INFO storage.MemoryStore: ensureFreeSpace(9736) called with curMem=294749, maxMem=515553361
> 15/09/16 16:30:28 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 9.5 KB, free 491.4 MB)
> 15/09/16 16:30:28 INFO storage.MemoryStore: ensureFreeSpace(4926) called with curMem=304485, maxMem=515553361
> 15/09/16 16:30:28 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.8 KB, free 491.4 MB)
> 15/09/16 16:30:28 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.140.104.95:39784 (size: 4.8 KB, free: 491.6 MB)
> 15/09/16 16:30:28 INFO spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:861
> 15/09/16 16:30:28 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[4] at jdbc at AssetMetadataToVertica.java:114)
> 15/09/16 16:30:28 INFO cluster.YarnClusterScheduler: Adding task set 0.0 with 1 tasks
> 15/09/16 16:30:28 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 2249 bytes)
> 15/09/16 16:30:29 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.ClassCastException: org.apache.spark.scheduler.ResultTask cannot be cast to org.apache.spark.scheduler.Task
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
>
> 15/09/16 16:30:29 INFO scheduler.TaskSetManager: Starting task 0.1 in stage 0.0 (TID 1, localhost, PROCESS_LOCAL, 2249 bytes)
> 15/09/16 16:30:30 INFO scheduler.TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1) on executor localhost: java.lang.ClassCastException (org.apache.spark.scheduler.ResultTask cannot be cast to org.apache.spark.scheduler.Task) [duplicate 1]
> 15/09/16 16:30:30 INFO scheduler.TaskSetManager: Starting task 0.2 in stage 0.0 (TID 2, localhost, PROCESS_LOCAL, 2249 bytes)
> 15/09/16 16:30:30 INFO scheduler.TaskSetManager: Lost task 0.2 in stage 0.0 (TID 2) on executor localhost: java.lang.ClassCastException (org.apache.spark.scheduler.ResultTask cannot be cast to org.apache.spark.scheduler.Task) [duplicate 2]
> 15/09/16 16:30:30 INFO scheduler.TaskSetManager: Starting task 0.3 in stage 0.0 (TID 3, localhost, PROCESS_LOCAL, 2249 bytes)
> 15/09/16 16:30:30 INFO scheduler.TaskSetManager: Lost task 0.3 in stage 0.0 (TID 3) on executor localhost: java.lang.ClassCastException (org.apache.spark.scheduler.ResultTask cannot be cast to org.apache.spark.scheduler.Task) [duplicate 3]
> 15/09/16 16:30:30 ERROR scheduler.TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
> 15/09/16 16:30:30 INFO cluster.YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
> 15/09/16 16:30:30 INFO cluster.YarnClusterScheduler: Cancelling stage 0
> 15/09/16 16:30:30 INFO scheduler.DAGScheduler: ResultStage 0 (jdbc at AssetMetadataToVertica.java:114) failed in 1.901 s
> 15/09/16 16:30:30 INFO scheduler.DAGScheduler: Job 0 failed: jdbc at AssetMetadataToVertica.java:114, took 2.058078 s
> 15/09/16 16:30:30 ERROR yarn.ApplicationMaster: User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, localhost): java.lang.ClassCastException: org.apache.spark.scheduler.ResultTask cannot be cast to org.apache.spark.scheduler.Task
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
>
> Driver stacktrace:
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, localhost): java.lang.ClassCastException: org.apache.spark.scheduler.ResultTask cannot be cast to org.apache.spark.scheduler.Task
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
>
> Driver stacktrace:
>         at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1280)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1268)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1267)
>         at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>         at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1267)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
>         at scala.Option.foreach(Option.scala:236)
>         at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
>         at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1493)
>         at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1455)
>         at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1444)
>         at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>         at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1813)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1826)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1839)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1910)
>         at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:898)
>         at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:896)
>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
>         at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
>         at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:896)
>         at org.apache.spark.sql.DataFrame$$anonfun$foreachPartition$1.apply$mcV$sp(DataFrame.scala:1370)
>         at org.apache.spark.sql.DataFrame$$anonfun$foreachPartition$1.apply(DataFrame.scala:1370)
>         at org.apache.spark.sql.DataFrame$$anonfun$foreachPartition$1.apply(DataFrame.scala:1370)
>         at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
>         at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1904)
>         at org.apache.spark.sql.DataFrame.foreachPartition(DataFrame.scala:1369)
>         at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.saveTable(JdbcUtils.scala:214)
>         at org.apache.spark.sql.DataFrameWriter.jdbc(DataFrameWriter.scala:281)
>         at com.hp.cmx.api.reporting.reports.AssetMetadataToVertica.main(AssetMetadataToVertica.java:114)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525)
> Caused by: java.lang.ClassCastException: org.apache.spark.scheduler.ResultTask cannot be cast to org.apache.spark.scheduler.Task
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> 15/09/16 16:30:30 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, localhost): java.lang.ClassCastException: org.apache.spark.scheduler.ResultTask cannot be cast to org.apache.spark.scheduler.Task
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
>
> Driver stacktrace:)
> 15/09/16 16:30:30 INFO spark.SparkContext: Invoking stop() from shutdown hook
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static/sql,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution/json,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/execution,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL/json,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/SQL,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
> 15/09/16 16:30:30 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
> 15/09/16 16:30:30 INFO ui.SparkUI: Stopped Spark web UI at http://10.140.104.95:40460
> 15/09/16 16:30:30 INFO scheduler.DAGScheduler: Stopping DAGScheduler
> 15/09/16 16:30:30 INFO cluster.YarnClusterSchedulerBackend: Shutting down all executors
> 15/09/16 16:30:30 INFO cluster.YarnClusterSchedulerBackend: Asking each executor to shut down
> 15/09/16 16:30:30 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. localhost:55209
> 15/09/16 16:30:30 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. localhost:54302
> 15/09/16 16:30:30 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
> 15/09/16 16:30:30 INFO storage.MemoryStore: MemoryStore cleared
> 15/09/16 16:30:30 INFO storage.BlockManager: BlockManager stopped
> 15/09/16 16:30:30 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
> 15/09/16 16:30:30 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
> 15/09/16 16:30:30 INFO spark.SparkContext: Successfully stopped SparkContext
> 15/09/16 16:30:30 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
> 15/09/16 16:30:30 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, localhost): java.lang.ClassCastException: org.apache.spark.scheduler.ResultTask cannot be cast to org.apache.spark.scheduler.Task
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
>
> Driver stacktrace:)
> 15/09/16 16:30:30 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
> 15/09/16 16:30:30 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
> 15/09/16 16:30:30 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
> 15/09/16 16:30:30 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1442420003790_0007
> 15/09/16 16:30:30 INFO util.ShutdownHookManager: Shutdown hook called
> 15/09/16 16:30:30 INFO util.ShutdownHookManager: Deleting directory /var/lib/hadoop-yarn/cache/yarn/nm-local-dir/usercache/ravichad/appcache/application_1442420003790_0007/spark-41f7c3b4-79e0-49de-9459-66a4a48f882e
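For readers hitting the same trace: the failing call is the DataFrameWriter.jdbc(...) write at AssetMetadataToVertica.java:114, which internally runs DataFrame.foreachPartition on the executors. The original source is not in the post; below is a minimal sketch of what that kind of write looks like in the Spark 1.5 Java API. The class name, input path, JDBC URL, table name, and credentials are all hypothetical stand-ins, not taken from the original code.

    import java.util.Properties;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.DataFrame;
    import org.apache.spark.sql.SQLContext;

    public class AssetMetadataToVerticaSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("AssetMetadataToVertica");
            JavaSparkContext sc = new JavaSparkContext(conf);
            SQLContext sqlContext = new SQLContext(sc);

            // Hypothetical source; any DataFrame works the same way here.
            DataFrame df = sqlContext.read().json("hdfs:///tmp/asset-metadata.json");

            Properties props = new Properties();
            props.setProperty("user", "dbadmin");       // hypothetical credentials
            props.setProperty("password", "secret");
            props.setProperty("driver", "com.vertica.jdbc.Driver");

            // The call the stack trace points to. DataFrameWriter.jdbc(...)
            // runs df.foreachPartition(...) under the hood, which is where the
            // ClassCastException is thrown inside Executor$TaskRunner.
            df.write().jdbc("jdbc:vertica://vertica-host:5433/db", "asset_metadata", props);

            sc.stop();
        }
    }

Nothing in this sketch is unusual, which matches the symptom: the failure is not in the user code path itself but in task execution on the executors.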
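Not a definitive answer, but a diagnostic that may help: ResultTask extends Task in Spark, so this cast can only fail if two different copies of the Task class are loaded, which typically means a second Spark version is on the classpath (for example, Spark classes bundled into the application assembly jar clashing with the cluster's spark-assembly). A quick check, assuming you can add a small class or a few lines to the user code (class name hypothetical):

    import java.security.CodeSource;

    public class WhichSparkJar {
        public static void main(String[] args) {
            // Ask the JVM which jar the Spark scheduler classes come from. If
            // this prints your application jar rather than the cluster's
            // spark-assembly jar, two copies of Spark are on the classpath.
            CodeSource src = org.apache.spark.scheduler.Task.class
                    .getProtectionDomain().getCodeSource();
            System.out.println("Task loaded from: "
                    + (src == null ? "<bootstrap>" : src.getLocation()));
        }
    }

Since the exception is thrown in Executor$TaskRunner, it is worth running the same print on the executor side too (e.g., inside a map function), not just in the driver. If a duplicate shows up, the usual fix is to mark the Spark dependencies as provided in the build so they are not shaded into the assembly jar.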