[
https://issues.apache.org/jira/browse/SPARK-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14251504#comment-14251504
]
Sun Rui commented on SPARK-2075:
--------------------------------
I met the same issue. I had a post in the Spark user mailing list but it does
not get archived in http://apache-spark-user-list.1001560.n3.nabble.com/, so I
have to describe the issue here:
Steps to reproduce:
1. Download the official pre-built Spark binary 1.1.1 at
http://d3kbcqa49mib13.cloudfront.net/spark-1.1.1-bin-hadoop1.tgz
2. Launch the Spark cluster in pseudo cluster mode
3. Run a small Scala app that calls RDD.saveAsObjectFile()

build.sbt:
{code}
scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.1.1"
)
{code}

{code}
import org.apache.spark.SparkContext

val sc = new SparkContext(args(0), "test") // args(0) is the Spark master URI
val rdd = sc.parallelize(List(1, 2, 3))
rdd.saveAsObjectFile("/tmp/mysaoftmp")
sc.stop()
{code}
This throws the following exception:
{code}
[error] (run-main-0) org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 6, ray-desktop.sh.intel.com): java.lang.ClassCastException: scala.Tuple2 cannot be cast to scala.collection.Iterator
[error]         org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:596)
[error]         org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:596)
[error]         org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
[error]         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
[error]         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
[error]         org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
[error]         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
[error]         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
[error]         org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
[error]         org.apache.spark.scheduler.Task.run(Task.scala:54)
[error]         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
[error]         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
[error]         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[error]         java.lang.Thread.run(Thread.java:701)
{code}
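For context on where the cast arises: in Spark 1.x, saveAsObjectFile batches the RDD's elements and Java-serializes each batch before writing it out as a SequenceFile. A Spark-free sketch of that batching-plus-serialization idea (the helper names here are illustrative, not Spark's actual code):

{code}
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

// Illustrative only: mimic saveAsObjectFile's approach of grouping elements
// into fixed-size batches and Java-serializing each batch to bytes.
def serializeBatches(data: Seq[Int], batchSize: Int): Seq[Array[Byte]] =
  data.grouped(batchSize).map { batch =>
    val bos = new ByteArrayOutputStream()
    val oos = new ObjectOutputStream(bos)
    oos.writeObject(batch.toArray) // each batch becomes one serialized Array
    oos.close()
    bos.toByteArray
  }.toSeq

// Inverse operation: deserialize one batch back into an Array[Int].
def deserializeBatch(bytes: Array[Byte]): Array[Int] = {
  val ois = new ObjectInputStream(new ByteArrayInputStream(bytes))
  try ois.readObject().asInstanceOf[Array[Int]] finally ois.close()
}
{code}

The batching itself is fine; the ClassCastException above comes from the anonymous closure classes that implement these steps being compiled differently in the two jars on the classpath, so a task deserialized on an executor ends up invoking the wrong closure.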
After investigation, I found that this is caused by a bytecode incompatibility
between RDD.class in spark-core_2.10-1.1.1.jar and the corresponding class in
the pre-built Spark assembly: the anonymous-function classes generated from
RDD's closures do not match between the two jars.
This issue also happens with Spark 1.1.0.
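The mismatch can be checked by listing which anonymous-function classes each jar actually contains, as the original report does with jar tvf. A small Scala equivalent of that check (the helper name matchingClasses is my own, not a Spark or JDK API; it only uses java.util.zip):

{code}
import java.util.zip.ZipFile

// Hypothetical diagnostic helper: list .class entries in a jar whose paths
// contain `pattern`, like `jar tvf <jar> | grep <pattern>`.
def matchingClasses(jarPath: String, pattern: String): List[String] = {
  val zf = new ZipFile(jarPath)
  try {
    var out = List.empty[String]
    val entries = zf.entries()
    while (entries.hasMoreElements) {
      val name = entries.nextElement().getName
      if (name.endsWith(".class") && name.contains(pattern)) out = name :: out
    }
    out.reverse
  } finally zf.close()
}
{code}

Running it with pattern "rdd/RDD$$anonfun" against both spark-core_2.10-1.1.1.jar and the assembly jar, and diffing the two lists, shows the discrepancy directly.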
> Anonymous classes are missing from Spark distribution
> -----------------------------------------------------
>
> Key: SPARK-2075
> URL: https://issues.apache.org/jira/browse/SPARK-2075
> Project: Spark
> Issue Type: Bug
> Components: Build, Spark Core
> Affects Versions: 1.0.0
> Reporter: Paul R. Brown
> Priority: Critical
>
> Running a job built against the Maven dep for 1.0.0 and the hadoop1
> distribution produces:
> {code}
> java.lang.ClassNotFoundException:
> org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1
> {code}
> Here's what's in the Maven dep as of 1.0.0:
> {code}
> jar tvf ~/.m2/repository/org/apache/spark/spark-core_2.10/1.0.0/spark-core_2.10-1.0.0.jar | grep 'rdd/RDD' | grep 'saveAs'
>   1519 Mon May 26 13:57:58 PDT 2014 org/apache/spark/rdd/RDD$$anonfun$saveAsTextFile$1.class
>   1560 Mon May 26 13:57:58 PDT 2014 org/apache/spark/rdd/RDD$$anonfun$saveAsTextFile$2.class
> {code}
> And here's what's in the hadoop1 distribution:
> {code}
> jar tvf spark-assembly-1.0.0-hadoop1.0.4.jar | grep 'rdd/RDD' | grep 'saveAs'
> {code}
> I.e., it's not there. It is in the hadoop2 distribution:
> {code}
> jar tvf spark-assembly-1.0.0-hadoop2.2.0.jar | grep 'rdd/RDD' | grep 'saveAs'
>   1519 Mon May 26 07:29:54 PDT 2014 org/apache/spark/rdd/RDD$$anonfun$saveAsTextFile$1.class
>   1560 Mon May 26 07:29:54 PDT 2014 org/apache/spark/rdd/RDD$$anonfun$saveAsTextFile$2.class
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)