[jira] [Commented] (SPARK-19743) Exception when creating more than one implicit Encoder in REPL

2018-01-05 Thread Mark Petruska (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-19743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16313018#comment-16313018 ]

Mark Petruska commented on SPARK-19743:
---

I could not reproduce the problem on the current master (commit 
6cff7d19f6a905fe425bd6892fe7ca014c0e696b); most likely a previous change 
fixed this.

> Exception when creating more than one implicit Encoder in REPL
> ---------------------------------------------------------------
>
> Key: SPARK-19743
> URL: https://issues.apache.org/jira/browse/SPARK-19743
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Shell
>Affects Versions: 2.0.2, 2.1.1, 2.2.0
>Reporter: Maciej Bryński
>
> During a test I wanted to create two different bean classes and encoders 
> for them.
> The first time it worked.
> {code}
> scala> class Test(@scala.beans.BeanProperty var xxx: Long) {}
> defined class Test
> scala> import org.apache.spark.sql.Encoders
> import org.apache.spark.sql.Encoders
> scala> implicit val testEncoder = Encoders.bean(classOf[Test])
> testEncoder: org.apache.spark.sql.Encoder[Test] = class[xxx[0]: bigint]
> scala> spark.range(10).map(new Test(_)).show()
> +---+
> |xxx|
> +---+
> |  0|
> |  1|
> |  2|
> |  3|
> |  4|
> |  5|
> |  6|
> |  7|
> |  8|
> |  9|
> +---+
> {code}
> The second try gives me an exception.
> {code}
> scala> class Test2(@scala.beans.BeanProperty var xxx: Long) {}
> defined class Test2
> scala> implicit val test2Encoder = Encoders.bean(classOf[Test2])
> test2Encoder: org.apache.spark.sql.Encoder[Test2] = class[xxx[0]: bigint]
> scala> spark.range(10).map(new Test2(_)).show()
> 17/02/26 18:10:15 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 2, cdh-data-4.gid): java.lang.ExceptionInInitializerError
> at $line17.$read$$iw.<init>(<console>:9)
> at $line17.$read.<init>(<console>:45)
> at $line17.$read$.<init>(<console>:49)
> at $line17.$read$.<clinit>(<console>)
> at $line19.$read$$iw.<init>(<console>:10)
> at $line19.$read.<init>(<console>:21)
> at $line19.$read$.<init>(<console>:25)
> at $line19.$read$.<clinit>(<console>)
> at $line21.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$anonfun$1.apply(<console>:32)
> at $line21.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$anonfun$1.apply(<console>:32)
> at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
> at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
> at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
> at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
> at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
> at org.apache.spark.scheduler.Task.run(Task.scala:86)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.spark.SparkException: A master URL must be set in your configuration
> at org.apache.spark.SparkContext.<init>(SparkContext.scala:368)
> at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2258)
> at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:831)
> at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:823)
> at scala.Option.getOrElse(Option.scala:121)
> at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:823)
> at org.apache.spark.repl.Main$.createSparkSession(Main.scala:95)
> at $line3.$read$$iw$$iw.<init>(<console>:15)
> at $line3.$read$$iw.<init>(<console>:31)
> at $line3.$read.<init>(<console>:33)
> at $line3.$read$.<init>(<console>:37)
> at $line3.$read$.<clinit>(<console>)
> ... 26 more
> 17/02/26 18:10:15 ERROR TaskSetManager: Task 0 in stage 2.0 failed 1 times; aborting job
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 2, cdh-data-4.gid): java.lang.ExceptionInInitializerError
> at <init>(<console>:9)
> at <init>(<console>:45)
> {code}

[jira] [Commented] (SPARK-19743) Exception when creating more than one implicit Encoder in REPL

2017-02-26 Thread Maciej Bryński (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-19743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15884826#comment-15884826 ]

Maciej Bryński commented on SPARK-19743:
---

A possible solution would be to change the spark val in SparkILoop.scala 
from @transient to @transient lazy:
https://github.com/apache/spark/blob/master/repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoop.scala#L39
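
For illustration, a minimal sketch of what that change might look like. The surrounding code is assumed, not quoted from the linked file; it is reconstructed from the stack trace above, which shows org.apache.spark.repl.Main$.createSparkSession being invoked from the initializer of a REPL wrapper object ($line3.$read$) on the executor side, where no master URL is configured.

{code}
// Eager version (assumed current shape): the REPL wrapper object's
// initializer calls createSparkSession() as soon as the object is loaded.
// When REPL-defined classes are deserialized on an executor, loading the
// wrapper runs this initializer there, which fails with
// "A master URL must be set in your configuration".
@transient val spark = if (org.apache.spark.repl.Main.sparkSession != null) {
  org.apache.spark.repl.Main.sparkSession
} else {
  org.apache.spark.repl.Main.createSparkSession()
}

// Lazy version (the suggestion): initialization is deferred until spark is
// first referenced, so merely loading the wrapper object on an executor no
// longer attempts to create a new SparkContext.
@transient lazy val spark = if (org.apache.spark.repl.Main.sparkSession != null) {
  org.apache.spark.repl.Main.sparkSession
} else {
  org.apache.spark.repl.Main.createSparkSession()
}
{code}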
