Jianfei Wang created SPARK-17579:
------------------------------------

             Summary: Exception when the Main object extends a class with an Encoder context bound in cluster mode, but OK in local mode
                 Key: SPARK-17579
                 URL: https://issues.apache.org/jira/browse/SPARK-17579
             Project: Spark
          Issue Type: Bug
          Components: Spark Core, SQL
    Affects Versions: 2.0.0
            Reporter: Jianfei Wang


With the code below, I get an exception in cluster mode, but it works fine in local mode.
Moreover, if I remove the {{extends A[String]}} from the {{Main}} object, it also works in cluster mode.
Why is this?
{code}
import org.apache.spark.sql._

object Env {
  // the session is created eagerly when the Env object is first initialized
  val spark = SparkSession.builder.getOrCreate()
}

import Env.spark.implicits._

// the context bound means A's constructor takes an implicit Encoder[T]
abstract class A[T : Encoder] {}

object Main extends A[String] {
  def func(str: String): String = str
  def main(args: Array[String]): Unit = {
    Env.spark.createDataset(Seq("a", "b", "c")).map(func).show()
  }
}
{code}
I got the exception below:
{code}
Caused by: java.lang.NoClassDefFoundError: Could not initialize class Main$
	at Main$$anonfun$main$1.apply(test.scala:14)
	at Main$$anonfun$main$1.apply(test.scala:14)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:231)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
	at org.apache.spark.scheduler.Task.run(Task.scala:86)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:277)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
{code}
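What the stack trace suggests (a sketch, not a confirmed diagnosis): the context bound means the superclass constructor of {{Main}} needs an implicit {{Encoder[String]}}, which comes from {{Env.spark.implicits._}}, so initializing {{Main$}} forces {{Env}}'s initializer and with it {{SparkSession.builder.getOrCreate()}}. On an executor JVM that call cannot succeed, the initializer throws, and the JVM reports every later touch of the half-initialized object as {{NoClassDefFoundError: Could not initialize class Main$}}. In local mode the driver and executor share one JVM where the session already exists, so nothing fails. The Spark-free sketch below reproduces just this JVM initialization behavior; all names in it ({{Env2}}, {{A2}}, {{BrokenMain}}, {{FixedMain}}, {{Repro}}) are hypothetical stand-ins, not part of the reported code.

{code:scala}
// Env2 stands in for Env: its eager val fails in a JVM where it cannot be built
object Env2 {
  val spark: String = sys.error("no SparkSession can be created in this JVM")
}

// A2 stands in for `abstract class A[T : Encoder]`: the constructor argument
// plays the role of the implicit Encoder[T] and is evaluated at object init
abstract class A2[T](encoder: String)

// Extending A2 couples BrokenMain's static initializer to Env2's, exactly as
// `object Main extends A[String]` couples Main$ to Env via the Encoder
object BrokenMain extends A2[String](Env2.spark) {
  def func(s: String): String = s
}

// The same entry point without the `extends`: initializing it never touches
// Env2, mirroring the reporter's observation that removing the extends helps
object FixedMain {
  def func(s: String): String = s
}

object Repro {
  private def touch(): String =
    try { BrokenMain.func("a") } catch { case t: Throwable => t.getClass.getSimpleName }

  // First access fails inside the static initializer; the JVM marks the class
  // erroneous and reports every later access as "Could not initialize class"
  val first: String  = touch()  // "ExceptionInInitializerError"
  val second: String = touch()  // "NoClassDefFoundError"
}
{code}

On a single JVM, {{Repro.first}} is an {{ExceptionInInitializerError}} and {{Repro.second}} is a {{NoClassDefFoundError}}, matching the "Could not initialize class Main$" frame above, while {{FixedMain.func}} keeps working throughout.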





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
