Would not that be as simple as:

scala> (0 to 9).toDF
res14: org.apache.spark.sql.DataFrame = [value: int]

scala> (0 to 9).toDF.map(_.toString)
res13: org.apache.spark.sql.Dataset[String] = [value: string]

with my limited knowledge
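To spell out the explicit-map approach a bit more: a sketch, assuming spark-shell (or a local SparkSession with `spark.implicits._` in scope). Note that `Row.toString` wraps values in brackets (e.g. "[0]"), so pulling the field out with `getInt(0)` makes the int-to-string conversion explicit instead of relying on the implicit cast that `as[String]` defers until the data is actually deserialized:

```scala
// Sketch only -- assumes spark-shell, where `spark` and its implicits exist.
import spark.implicits._

// An untyped DataFrame with schema [num: int]
val df = (0 to 9).toDF("num")

// Explicit conversion: the int -> string step is visible in the code,
// and the result is a typed Dataset[String].
val ds = df.map(_.getInt(0).toString)
```

This avoids the surprise in Q1: nothing is cast until an action runs, which is why `as[String]` on an int column appears to "work" at definition time.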

Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 13 August 2016 at 21:17, Jacek Laskowski <ja...@japila.pl> wrote:

> Hi,
>
> Just ran into it and can't explain why it works. Please help me understand
> it.
>
> Q1: Why can I `as[String]` with Ints? Is this type safe?
>
> scala> (0 to 9).toDF("num").as[String]
> res12: org.apache.spark.sql.Dataset[String] = [num: int]
>
> Q2: Why can I map over strings even though there are really ints?
>
> scala> (0 to 9).toDF("num").as[String].map(_.toUpperCase)
> res11: org.apache.spark.sql.Dataset[String] = [value: string]
>
> Why are the two lines possible?
>
> Pozdrawiam,
> Jacek Laskowski
> ----
> https://medium.com/@jaceklaskowski/
> Mastering Apache Spark 2.0 http://bit.ly/mastering-apache-spark
> Follow me at https://twitter.com/jaceklaskowski
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>
