[ 
https://issues.apache.org/jira/browse/SPARK-17579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15501910#comment-15501910
 ] 

Jianfei Wang commented on SPARK-17579:
--------------------------------------

Yeah, if I change A[T : Encoder] to A[T], it works in both local mode and
cluster mode.
But it's curious that the code above works in local mode yet fails in cluster
mode.
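
For what it's worth, here is a minimal, Spark-free sketch of why the context bound matters (the names Enc, MainA, and MainB are made up for illustration; the real implicit is the Encoder from spark.implicits._). A context bound desugars to an implicit constructor parameter, so the implicit must be resolvable when the object is initialized, not just when func runs:

```scala
// Hypothetical stand-in for org.apache.spark.sql.Encoder.
trait Enc[T]

object Env {
  // Stand-in for the implicit Encoder provided by spark.implicits._;
  // in the real report, producing it needs a live SparkSession.
  implicit val stringEnc: Enc[String] = new Enc[String] {}
}
import Env.stringEnc

// `A[T : Enc]` desugars to `abstract class A[T](implicit ev: Enc[T])`,
// so any object extending it must resolve Enc[T] during its static
// initialization.
abstract class A[T : Enc]

// No context bound: nothing is captured at initialization time.
abstract class B[T]

object MainA extends A[String]  // needs Enc[String] just to initialize
object MainB extends B[String] {
  def func(s: String): String = s  // init-safe; no implicit required
}
```

If initializing the Main object on an executor JVM requires an implicit that in turn needs the driver-side SparkSession, the static initializer can fail there, which would surface exactly as "NoClassDefFoundError: Could not initialize class Main$". That would also explain why dropping the context bound (the plain A[T] variant) works in cluster mode. This is a sketch of the mechanism, not a confirmed diagnosis.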

> Exception When the Main object extends Encoder in cluster mode but ok in 
> local mode
> -----------------------------------------------------------------------------------
>
>                 Key: SPARK-17579
>                 URL: https://issues.apache.org/jira/browse/SPARK-17579
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, SQL
>    Affects Versions: 2.0.0
>            Reporter: Jianfei Wang
>
> This is the code below: I get an exception in cluster mode, but it's fine in
> local mode.
> Also, if I remove the extends clause from the Main object, it works in
> cluster mode. Why is this?
> {code}
> import org.apache.spark.sql._
> object Env {
>   val spark = SparkSession.builder.getOrCreate()
> }
> import Env.spark.implicits._
> abstract class A[T : Encoder] {}
> object Main extends A[String] {
>   def func(str: String): String = str
>   def main(args: Array[String]): Unit = {
>     Env.spark.createDataset(Seq("a", "b", "c")).map(func).show()
>   }
> }
> {code}
> I got the exception below:
> {code}
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class Main$
>     at Main$$anonfun$main$1.apply(test.scala:14)
>     at Main$$anonfun$main$1.apply(test.scala:14)
>     at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
>     at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>     at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
>     at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:231)
>     at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
>     at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
>     at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
>     at org.apache.spark.scheduler.Task.run(Task.scala:86)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:277)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
