[ https://issues.apache.org/jira/browse/SPARK-25906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16671296#comment-16671296 ]
Apache Spark commented on SPARK-25906:
--------------------------------------

User 'HyukjinKwon' has created a pull request for this issue:
https://github.com/apache/spark/pull/22919

> spark-shell cannot handle `-i` option correctly
> -----------------------------------------------
>
>                 Key: SPARK-25906
>                 URL: https://issues.apache.org/jira/browse/SPARK-25906
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell, SQL
>    Affects Versions: 2.4.0
>            Reporter: Dongjoon Hyun
>            Priority: Major
>
> This is a regression in Spark 2.4.0.
>
> *Spark 2.3.2*
> {code:java}
> $ cat test.scala
> spark.version
> case class Record(key: Int, value: String)
> spark.sparkContext.parallelize((1 to 2).map(i => Record(i, s"val_$i"))).toDF.show
>
> $ bin/spark-shell -i test.scala
> 18/10/31 23:22:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
> Spark context Web UI available at http://localhost:4040
> Spark context available as 'sc' (master = local[*], app id = local-1541053368478).
> Spark session available as 'spark'.
> Loading test.scala...
> res0: String = 2.3.2
> defined class Record
> 18/10/31 23:22:56 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
> +---+-----+
> |key|value|
> +---+-----+
> |  1|val_1|
> |  2|val_2|
> +---+-----+
> {code}
>
> *Spark 2.4.0 RC5*
> {code:java}
> $ bin/spark-shell -i test.scala
> 2018-10-31 23:23:14 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
> Spark context Web UI available at http://localhost:4040
> Spark context available as 'sc' (master = local[*], app id = local-1541053400312).
> Spark session available as 'spark'.
> test.scala:17: error: value toDF is not a member of org.apache.spark.rdd.RDD[Record]
> Error occurred in an application involving default arguments.
> spark.sparkContext.parallelize((1 to 2).map(i => Record(i, s"val_$i"))).toDF.show
> {code}
>
> *WORKAROUND*
> Add the following line at the beginning of the script:
> {code}
> import spark.implicits._
> {code}
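For clarity, here is a minimal sketch of what test.scala looks like with the workaround applied, reconstructed from the snippets in the report. The comments explaining the mechanism are an interpretation, not part of the original report: `.toDF` on an RDD is provided by the implicit conversions in `spark.implicits._`, which spark-shell normally pre-imports but which the broken 2.4.0 `-i` code path apparently fails to bring into scope.

{code:scala}
// test.scala -- the reporter's script with the workaround line added.
// In spark-shell, 'spark' is the pre-created SparkSession, so its
// implicits object is a stable identifier and can be imported directly.
import spark.implicits._

spark.version

case class Record(key: Int, value: String)

// With the implicits in scope, the RDD[Record] -> DataFrame conversion
// resolves and .toDF compiles as it did under Spark 2.3.2.
spark.sparkContext.parallelize((1 to 2).map(i => Record(i, s"val_$i"))).toDF.show
{code}

Run it the same way as in the report, e.g. {{bin/spark-shell -i test.scala}}; with the import present, the 2.4.0 RC5 session should print the same two-row table shown in the 2.3.2 transcript above.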