[ https://issues.apache.org/jira/browse/SPARK-17579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15501194#comment-15501194 ]
Sean Owen commented on SPARK-17579:
-----------------------------------
Is there more to the error? This doesn't say why it could not initialize the class. I suspect it has to do with your view bounds and implicit resolution. Why would your main object need to inherit from an abstract class? I suspect there's a problem triggered in there.

> Exception when the Main object extends Encoder in cluster mode but OK in local mode
> -----------------------------------------------------------------------------------
>
>                 Key: SPARK-17579
>                 URL: https://issues.apache.org/jira/browse/SPARK-17579
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, SQL
>    Affects Versions: 2.0.0
>            Reporter: Jianfei Wang
>
> This is the code below: I get an exception in cluster mode, but it's OK in local mode.
> Besides, if I remove the extends clause from the Main object, it works in cluster mode as well. Why is this?
> {code}
> import org.apache.spark.sql._
>
> object Env {
>   val spark = SparkSession.builder.getOrCreate()
> }
>
> import Env.spark.implicits._
>
> abstract class A[T : Encoder] {}
>
> object Main extends A[String] {
>   def func(str: String): String = str
>   def main(args: Array[String]): Unit = {
>     Env.spark.createDataset(Seq("a", "b", "c")).map(func).show()
>   }
> }
> {code}
> I get the exception below:
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class Main$
>     at Main$$anonfun$main$1.apply(test.scala:14)
>     at Main$$anonfun$main$1.apply(test.scala:14)
>     at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
>     at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>     at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
>     at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:231)
>     at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
>     at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
>     at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
>     at org.apache.spark.scheduler.Task.run(Task.scala:86)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:277)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
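The "Could not initialize class Main$" error is the stock JVM symptom of a failed static initializer: in the reported code, `Main$`'s initializer runs the `A[String]` superclass constructor, whose context-bound `Encoder` was captured via `Env.spark.implicits._`, and that dependency is not satisfiable on an executor. The JVM side of this can be sketched without Spark at all. A hypothetical minimal reproduction (the names `NeedsResource`, `Worker`, and the env var are illustrative, not from the report) showing that when an object's superclass constructor throws, the first reference raises `ExceptionInInitializerError` and every later reference raises the same `NoClassDefFoundError` seen in the trace above:

```scala
// Hypothetical stand-in for `abstract class A[T : Encoder]`: the superclass
// constructor needs a "resource" that may not exist on this JVM.
abstract class NeedsResource(resource: String) {
  require(resource != null, "resource unavailable on this JVM")
}

// Stand-in for `object Main extends A[String]`: static initialization fails
// because the resource (here, an absent env var) cannot be obtained.
object Worker extends NeedsResource(sys.env.getOrElse("SOME_ABSENT_VAR", null)) {
  def func(s: String): String = s
}

object Demo {
  // Touch the object and report which error class the JVM throws.
  def attempt(): String =
    try { Worker.func("a") }
    catch { case t: Throwable => t.getClass.getName }

  def main(args: Array[String]): Unit = {
    println(attempt()) // first touch: java.lang.ExceptionInInitializerError
    println(attempt()) // later touches: java.lang.NoClassDefFoundError
  }
}
```

The second error is the one surfaced in the Spark trace because the executor's first, real failure happens inside task deserialization/codegen, and only the follow-on "Could not initialize class" is reported. This is consistent with Sean's suggestion: dropping the `extends A[String]` (or resolving the `Encoder` inside `main` rather than in the object's constructor) keeps `Main$`'s initializer trivial, so executors can load it.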