You can invoke exactly the same functions on the Scala side as well, i.e.
http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.functions$

Have you tried them?
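
For example, a minimal sketch using the built-in functions from that page (assuming your DataFrame `df` and the "Payment date" column from your schema; not tested against your data):

```scala
import org.apache.spark.sql.functions.{col, from_unixtime, to_date, unix_timestamp}

// Parse "Payment date" (dd/MM/yyyy strings) into a proper date column,
// mirroring TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP(paymentdate,'dd/MM/yyyy'))):
val withDate = df
  .filter(col("Total") > "")
  .withColumn("paymentdate",
    to_date(from_unixtime(unix_timestamp(col("Payment date"), "dd/MM/yyyy"))))
```

`unix_timestamp(column, format)`, `from_unixtime`, and `to_date` are all in `org.apache.spark.sql.functions`, so no SQL string is needed.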

On Thu, Mar 24, 2016 at 10:29 PM, Mich Talebzadeh <mich.talebza...@gmail.com
> wrote:

>
> Hi,
>
> Read a CSV in with the following schema
>
> scala> df.printSchema
> root
>  |-- Invoice Number: string (nullable = true)
>  |-- Payment date: string (nullable = true)
>  |-- Net: string (nullable = true)
>  |-- VAT: string (nullable = true)
>  |-- Total: string (nullable = true)
>
> I use mapping as below
>
> case class Invoices(Invoicenumber: String, Paymentdate: String, Net:
> Double, VAT: Double, Total: Double)
>
> val a = df.filter(col("Total") > "").map(p => Invoices(p(0).toString,
> p(1).toString, p(2).toString.substring(1).replace(",", "").toDouble,
> p(3).toString.substring(1).replace(",", "").toDouble,
> p(4).toString.substring(1).replace(",", "").toDouble))
>
>
> I want to convert p(1).toString to datetime like below when I used in sql
>
> TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP(paymentdate,'dd/MM/yyyy'),'yyyy-MM-dd'))
> AS paymentdate
>
>
> Thanks
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>



-- 
Regards,
Alexander