Thanks Michael for the prompt response! Everything you said makes sense (glad to
have received it from the most trusted source!)
spark.read.format("michael").option("header", true).write("notes.adoc")
:-)
Pozdrawiam,
Jacek Laskowski
https://medium.com/@jaceklaskowski/
Mastering Apache Spark 2.0 http:
There are two type systems in play here. Spark SQL's and Scala's.
From the Scala side, this is type-safe. After calling as[String], the
Dataset will only return Strings. It is impossible to ever get a class cast
exception unless you do your own incorrect casting after the fact.
Underneath the co
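[If I follow Michael's point, a quick spark-shell check would make it concrete: the values really do come back as Strings after as[String]. A hypothetical session (assuming a Spark 2.x shell, where the int-to-string upcast is allowed, as res12 in the original question suggests):

scala> val ds = (0 to 9).toDF("num").as[String]
ds: org.apache.spark.sql.Dataset[String] = [num: int]

scala> ds.collect()
res0: Array[String] = Array(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)

Note the schema still says [num: int] — the cast to String happens at deserialization, not in the logical plan.]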
Hi,
The point is that I could go fully typed with Dataset[String] and wonder
why it's possible with Ints.
You're working with DataFrames, which are Dataset[Row]. That's too little
for me these days :)
Pozdrawiam,
Jacek Laskowski
https://medium.com/@jaceklaskowski/
Mastering Apache Spark 2.0 http:
Wouldn't that be as simple as:
scala> (0 to 9).toDF
res14: org.apache.spark.sql.DataFrame = [value: int]
scala> (0 to 9).toDF.map(_.toString)
res13: org.apache.spark.sql.Dataset[String] = [value: string]
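[The two approaches differ in where the conversion happens: map(_.toString) actually transforms the data and produces a new string column named value, while as[String] only changes the Scala-side type and leaves the column as int. A hypothetical spark-shell session contrasting the schemas:

scala> (0 to 9).toDF("num").as[String].printSchema
root
 |-- num: integer (nullable = false)

scala> (0 to 9).toDF("num").map(_.getInt(0).toString).printSchema
root
 |-- value: string (nullable = true)]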
With my limited knowledge.
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/profil
Hi,
Just ran into it and can't explain why it works. Please help me understand it.
Q1: Why can I `as[String]` with Ints? Is this type safe?
scala> (0 to 9).toDF("num").as[String]
res12: org.apache.spark.sql.Dataset[String] = [num: int]
Q2: Why can I map over Strings even though there are really Ints underneath?