[ https://issues.apache.org/jira/browse/SPARK-25906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16671264#comment-16671264 ]
Hyukjin Kwon edited comment on SPARK-25906 at 11/1/18 8:49 AM:
---------------------------------------------------------------

The root cause seems to be [https://github.com/scala/scala/commit/99dad60d984d3f72338f3bad4c4fe905090edd51]. That change redefines what {{-i}} means: the old {{-i}} behavior was moved to a new {{-I}} option. The _newly replaced_ {{-i}} option in Scala 2.11.12 works like {{:paste}} (previously it worked like {{:load}}). Still, I wonder why the newly replaced {{-i}} option does not work. Basically, here's what's going on:

{code}
scala> :paste test.scala
Pasting file test.scala...
test.scala:17: error: value toDF is not a member of org.apache.spark.rdd.RDD[Record]
Error occurred in an application involving default arguments.
spark.sparkContext.parallelize((1 to 2).map(i => Record(i, s"val_$i"))).toDF.show
{code}

{{:paste}} itself does not work in either Spark 2.3 or 2.4. FWIW, {{./bin/spark-shell --help}} does not show this option.


> spark-shell cannot handle `-i` option correctly
> -----------------------------------------------
>
>                 Key: SPARK-25906
>                 URL: https://issues.apache.org/jira/browse/SPARK-25906
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.4.0
>            Reporter: Dongjoon Hyun
>            Priority: Major
>
> This is a regression on Spark 2.4.0.
> *Spark 2.3.2*
> {code:java}
> $ cat test.scala
> spark.version
> case class Record(key: Int, value: String)
> spark.sparkContext.parallelize((1 to 2).map(i => Record(i, s"val_$i"))).toDF.show
>
> $ bin/spark-shell -i test.scala
> 18/10/31 23:22:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
> Spark context Web UI available at http://localhost:4040
> Spark context available as 'sc' (master = local[*], app id = local-1541053368478).
> Spark session available as 'spark'.
> Loading test.scala...
> res0: String = 2.3.2
> defined class Record
> 18/10/31 23:22:56 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
> +---+-----+
> |key|value|
> +---+-----+
> |  1|val_1|
> |  2|val_2|
> +---+-----+
> {code}
> *Spark 2.4.0 RC5*
> {code:java}
> $ bin/spark-shell -i test.scala
> 2018-10-31 23:23:14 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
> Spark context Web UI available at http://localhost:4040
> Spark context available as 'sc' (master = local[*], app id = local-1541053400312).
> Spark session available as 'spark'.
> test.scala:17: error: value toDF is not a member of org.apache.spark.rdd.RDD[Record]
> Error occurred in an application involving default arguments.
> spark.sparkContext.parallelize((1 to 2).map(i => Record(i, s"val_$i"))).toDF.show
> {code}
> *WORKAROUND*
> Add the following line at the beginning of the script.
> {code}
> import spark.implicits._
> {code}
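For context on why the workaround helps: {{toDF}} is not a method defined on {{RDD}} itself; it is attached by an implicit conversion that {{import spark.implicits._}} brings into scope, so a script compiled in a scope without that import fails with exactly the "value toDF is not a member" error above. Below is a minimal, Spark-free sketch of the same mechanism; all names ({{FakeRDD}}, {{DataFrameOps}}) are made up for illustration and are not part of any Spark API.

```scala
object ImplicitsDemo {
  // A toy stand-in for RDD: it has no toDF method of its own.
  final case class FakeRDD[T](rows: Seq[T])

  object implicits {
    // spark.implicits._ adds toDF to RDD in a similar way: through an
    // implicit conversion that is only visible once the import is in scope.
    implicit class DataFrameOps[T](rdd: FakeRDD[T]) {
      def toDF: String = s"DataFrame(${rdd.rows.size} rows)"
    }
  }

  def demo(): String = {
    import implicits._ // remove this import and rdd.toDF no longer compiles
    val rdd = FakeRDD(Seq(1, 2))
    rdd.toDF
  }

  def main(args: Array[String]): Unit =
    println(demo())
}
```

This is why placing the import at the top of the script matters: whether the file is fed through a {{:load}}-style or {{:paste}}-style path, the extension method must be in scope before {{toDF}} is referenced.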