Hi Spark devs,

I'm not sure whether what I'm seeing is correct, so I'd appreciate any
input to...rest my nerves :-) I did `import org.apache.spark._` by
mistake, but since that import is perfectly valid, I'm wondering why
the Spark shell imports sql at all, given that the name is already
available (as the org.apache.spark.sql package) after the wildcard
import?!

(This is today's build.)
scala> sql("SELECT * FROM dafa").show(false)
<console>:30: error: reference to sql is ambiguous;
it is imported twice in the same scope by
import org.apache.spark._
and import sqlContext.sql
sql("SELECT * FROM dafa").show(false)
^
scala> :imports
1) import sqlContext.implicits._ (52 terms, 31 are implicit)
2) import sqlContext.sql (1 terms)
scala> sc.version
res19: String = 2.0.0-SNAPSHOT
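
For the record, the ambiguity itself is easy enough to work around; the
following is just a sketch of what I'd expect to work in the shell (not
verified output), assuming the usual sc/sqlContext are in scope:

// Call the method through its owner, so the bare name `sql` is never looked up:
sqlContext.sql("SELECT * FROM dafa").show(false)

// Or, if the wildcard import is wanted, write it so it hides the
// org.apache.spark.sql package and the clash never appears:
import org.apache.spark.{sql => _, _}
sql("SELECT * FROM dafa").show(false)  // now resolves to sqlContext.sql

That said, my question stands: should the shell's default import and a
wildcard import of org.apache.spark clash like this at all?
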
Best regards,
Jacek Laskowski
----
https://medium.com/@jaceklaskowski/
Mastering Apache Spark http://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski