[
https://issues.apache.org/jira/browse/SPARK-8142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14580643#comment-14580643
]
Sean Owen commented on SPARK-8142:
----------------------------------
What do you mean that the 'provided' version of Hadoop will differ? Although
you ideally compile and run against exactly the same version, there is still
only one version at runtime, and it should work fine to compile against, say,
plain Spark 1.3 artifacts but deploy on a Spark-1.3-based CDH 5.4 cluster.
The Spark APIs are the same.
The 'provided' scope does not require a Maven profile, and applies only to
the Spark and Hadoop dependencies.
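For illustration, marking the Spark and Hadoop artifacts as 'provided' is just a scope on the ordinary dependency declarations; the versions below match the report and are examples, not a recommendation:

```xml
<!-- Compile against Spark/Hadoop but do not bundle them in the
     application jar: the cluster supplies its own copies at runtime. -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.3.1</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.6.0</version>
  <scope>provided</scope>
</dependency>
```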
> Spark Job Fails with ResultTask ClassCastException
> --------------------------------------------------
>
> Key: SPARK-8142
> URL: https://issues.apache.org/jira/browse/SPARK-8142
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.3.1
> Reporter: Dev Lakhani
>
> When running a Spark job, I get no failures in the application code
> whatsoever, but a strange ResultTask ClassCastException. In my job, I create an
> RDD from HBase and, for each partition, call a REST API using a REST
> client. This works in IntelliJ, but when I deploy to a cluster using
> spark-submit.sh I get:
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in
> stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0
> (TID 3, host): java.lang.ClassCastException:
> org.apache.spark.scheduler.ResultTask cannot be cast to
> org.apache.spark.scheduler.Task
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:185)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> These are the configs I set to override the Spark classpath because I want to
> use my own Glassfish Jersey version:
>
> sparkConf.set("spark.driver.userClassPathFirst","true");
> sparkConf.set("spark.executor.userClassPathFirst","true");
> I see no other warnings or errors in any of the logs.
> Unfortunately I cannot post my code, but please ask me questions that will
> help debug the issue. Using Spark 1.3.1 and Hadoop 2.6.
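The reported ClassCastException is characteristic of the same class being loaded by two different classloaders: with userClassPathFirst=true, if Spark's own classes also end up on the user classpath, a ResultTask defined by the child-first user classloader cannot be cast to the Task defined by the executor's classloader. A minimal, self-contained Java sketch of that mechanism (ClassLoaderCastDemo, ChildFirstLoader, and Marker are illustrative names, not Spark code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ClassLoaderCastDemo {

    // Marker stands in for a class such as org.apache.spark.scheduler.Task
    // that is visible both to the executor and on the user classpath.
    public static class Marker {}

    // Child-first loader: defines Marker itself instead of delegating to
    // the parent, mimicking what userClassPathFirst=true does.
    static class ChildFirstLoader extends ClassLoader {
        ChildFirstLoader(ClassLoader parent) { super(parent); }

        @Override
        protected Class<?> loadClass(String name, boolean resolve)
                throws ClassNotFoundException {
            if (name.equals(Marker.class.getName())) {
                Class<?> already = findLoadedClass(name);
                if (already != null) return already;
                try (InputStream in = getParent()
                        .getResourceAsStream(name.replace('.', '/') + ".class")) {
                    // Read the class bytes and define a second copy of
                    // Marker in this loader rather than delegating.
                    ByteArrayOutputStream out = new ByteArrayOutputStream();
                    byte[] buf = new byte[4096];
                    for (int n; (n = in.read(buf)) != -1; ) out.write(buf, 0, n);
                    byte[] bytes = out.toByteArray();
                    return defineClass(name, bytes, 0, bytes.length);
                } catch (IOException e) {
                    throw new ClassNotFoundException(name, e);
                }
            }
            return super.loadClass(name, resolve);
        }
    }

    // Returns true when casting the duplicate-loader instance to the
    // application-loader Marker type throws ClassCastException.
    public static boolean castFails() throws Exception {
        Class<?> duplicate = new ChildFirstLoader(Marker.class.getClassLoader())
                .loadClass(Marker.class.getName());
        Object instance = duplicate.getDeclaredConstructor().newInstance();
        try {
            Marker m = (Marker) instance;  // same binary name, different defining loader
            return m == null;              // unreachable in practice
        } catch (ClassCastException e) {
            return true;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("cast fails: " + castFails());
    }
}
```

This is why building the application jar with Spark marked as 'provided' (so no Spark classes ride along on the user classpath) typically avoids the error even with userClassPathFirst enabled.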
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)