Hello Community Users,
I was able to resolve the issue.
The issue was the input data format: by default Excel writes dates as
2001/01/09, whereas Spark SQL expects the 2001-01-09 format.
Here is the sample code:
SQL context available as sqlContext.
scala> import org.apache.spark.sql.hive.HiveContext
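(The pasted session above got cut off, so here is a minimal sketch of the approach instead, not the exact code I ran; the column name "date_col" and the file paths are placeholders, and it assumes Spark 1.x with the spark-csv package available.)

import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.functions.regexp_replace

val hiveContext = new HiveContext(sc)

// Load the raw CSV with spark-csv; the dates arrive as strings like 2001/01/09.
val raw = hiveContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .load("input.csv")

// Rewrite the slashes to dashes (2001/01/09 -> 2001-01-09) and cast to DATE.
val fixed = raw.withColumn("date_col",
  regexp_replace(raw("date_col"), "/", "-").cast("date"))

// Write the corrected data back out as CSV.
fixed.write
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .save("output.csv")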
I see. As far as I know, the Spark CSV datasource does not support custom date
formats, only the standard ones such as “2015-08-20 15:57:00”.
Internally it uses Timestamp.valueOf() and Date.valueOf() to parse them (see the
small illustration at the end of this message).
It looks like you could:
1. modify and build the library yourself to handle a custom date format
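To make that concrete, a tiny illustration of those parsers (my own example, not code from the library):

import java.sql.{Date, Timestamp}

// These are the standard formats the java.sql parsers accept.
val d  = Date.valueOf("2015-08-20")                // ok: yyyy-MM-dd
val ts = Timestamp.valueOf("2015-08-20 15:57:00")  // ok: yyyy-MM-dd HH:mm:ss

// Date.valueOf("2015/08/20") would throw IllegalArgumentException,
// which is why slash-separated dates from Excel are rejected as-is.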
Hi Divya,
Are you using, or have you tried, the Spark CSV datasource at
https://github.com/databricks/spark-csv ?
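In case it helps, a typical load with that package looks something like this (the package version and file path below are only examples):

// Start the shell with the package on the classpath:
//   spark-shell --packages com.databricks:spark-csv_2.10:1.3.0

val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")       // first line contains column names
  .option("inferSchema", "true")  // let the datasource guess column types
  .load("input.csv")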
Thanks!
2015-12-28 18:42 GMT+09:00 Divya Gehlot :
> Hi,
> I have an input data set which is a CSV file with date columns.
> My output will also be a CSV file, and I will be using this output CSV