[
https://issues.apache.org/jira/browse/SPARK-30706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Waldemar updated SPARK-30706:
-----------------------------
Description:
If I read a string date from a CSV file, cast it to the date type, and then write it back to a CSV
file while running in a timezone west of Greenwich, the written date is one day earlier than the
original. Repeating this operation in a loop can therefore shift the date unintentionally further
into the past.
If spark-shell runs in a timezone east of Greenwich, everything is OK.
Writing to Parquet is also OK.
Example code:
{code:java}
import org.apache.spark.sql.functions.{col, to_date}

val test_5_load = "hdfs://192.168.44.161:8020/db/wbiernacki/test_5_load.csv"
val test_5_save = "hdfs://192.168.44.161:8020/db/wbiernacki/test_5_save.csv"

// Read the CSV and cast the string columns to DateType.
val test_5 = spark.read.format("csv")
  .option("header", "true")
  .load(test_5_load)
  .withColumn("begin", to_date(col("begin"), "yyyy-MM-dd"))
  .withColumn("end", to_date(col("end"), "yyyy-MM-dd"))

test_5.show()

// Write the dates back to CSV; west of Greenwich they come out one day earlier.
test_5
  .write.mode("overwrite")
  .format("csv")
  .option("header", "true")
  .save(test_5_save)
{code}
Please repeat this a few times. The data in test_5_load.csv looks like:
{code:java}
+--------+----------+----------+----+
| patient|     begin|       end| new|
+--------+----------+----------+----+
|waldemar|2015-09-22|2015-09-23|old1|
+--------+----------+----------+----+{code}
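A minimal sketch of a possible workaround and of the loop that shows the drift, assuming the
one-day shift comes from the JVM default timezone being applied when the DateType values are
converted for CSV output. The config spark.sql.session.timeZone exists; whether pinning it to UTC
fully avoids the shift on 2.4.3 is an assumption on my part, not a verified fix:
{code:java}
// Sketch of a possible workaround (assumption, not verified on 2.4.3): pin the session
// timezone to UTC so date conversions do not follow the machine's "west of Greenwich" zone.
import org.apache.spark.sql.functions.{col, to_date}

spark.conf.set("spark.sql.session.timeZone", "UTC")

val df = spark.read.format("csv")
  .option("header", "true")
  .load(test_5_load)
  .withColumn("begin", to_date(col("begin"), "yyyy-MM-dd"))
  .withColumn("end", to_date(col("end"), "yyyy-MM-dd"))

// Round-trip the data a few times; with the bug present, "begin"/"end" drift one
// day earlier per iteration when the JVM timezone is west of Greenwich.
var current = df
for (i <- 1 to 3) {
  current.write.mode("overwrite").format("csv").option("header", "true").save(test_5_save)
  current = spark.read.format("csv").option("header", "true").load(test_5_save)
    .withColumn("begin", to_date(col("begin"), "yyyy-MM-dd"))
    .withColumn("end", to_date(col("end"), "yyyy-MM-dd"))
  current.show()
}
{code}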
was:
If I read a string date from a CSV file, cast it to the date type, and then write it back to a CSV
file while running in a timezone west of Greenwich, the written date is one day earlier than the
original. Repeating this operation in a loop can therefore shift the date unintentionally further
into the past.
If spark-shell runs in a timezone east of Greenwich, everything is OK.
Example code:
{code:java}
import org.apache.spark.sql.functions.{col, to_date}

val test_5_load = "hdfs://192.168.44.161:8020/db/wbiernacki/test_5_load.csv"
val test_5_save = "hdfs://192.168.44.161:8020/db/wbiernacki/test_5_save.csv"

// Read the CSV and cast the string columns to DateType.
val test_5 = spark.read.format("csv")
  .option("header", "true")
  .load(test_5_load)
  .withColumn("begin", to_date(col("begin"), "yyyy-MM-dd"))
  .withColumn("end", to_date(col("end"), "yyyy-MM-dd"))

test_5.show()

// Write the dates back to CSV; west of Greenwich they come out one day earlier.
test_5
  .write.mode("overwrite")
  .format("csv")
  .option("header", "true")
  .save(test_5_save)
{code}
Please repeat this a few times. The data in test_5_load.csv looks like:
{code:java}
+--------+----------+----------+----+
| patient|     begin|       end| new|
+--------+----------+----------+----+
|waldemar|2015-09-22|2015-09-23|old1|
+--------+----------+----------+----+{code}
> TimeZone in writing pure date type in CSV output
> ------------------------------------------------
>
> Key: SPARK-30706
> URL: https://issues.apache.org/jira/browse/SPARK-30706
> Project: Spark
> Issue Type: Bug
> Components: Spark Shell
> Affects Versions: 2.4.3
> Reporter: Waldemar
> Priority: Minor
>
> If I read a string date from a CSV file, cast it to the date type, and then write it back to a
> CSV file while running in a timezone west of Greenwich, the written date is one day earlier than
> the original. Repeating this operation in a loop can therefore shift the date unintentionally
> further into the past.
> If spark-shell runs in a timezone east of Greenwich, everything is OK.
> Writing to Parquet is also OK.
> Example code:
> {code:java}
> import org.apache.spark.sql.functions.{col, to_date}
>
> val test_5_load = "hdfs://192.168.44.161:8020/db/wbiernacki/test_5_load.csv"
> val test_5_save = "hdfs://192.168.44.161:8020/db/wbiernacki/test_5_save.csv"
>
> // Read the CSV and cast the string columns to DateType.
> val test_5 = spark.read.format("csv")
>   .option("header", "true")
>   .load(test_5_load)
>   .withColumn("begin", to_date(col("begin"), "yyyy-MM-dd"))
>   .withColumn("end", to_date(col("end"), "yyyy-MM-dd"))
>
> test_5.show()
>
> // Write the dates back to CSV; west of Greenwich they come out one day earlier.
> test_5
>   .write.mode("overwrite")
>   .format("csv")
>   .option("header", "true")
>   .save(test_5_save)
> {code}
> Please repeat this a few times. The data in test_5_load.csv looks like:
> {code:java}
> +--------+----------+----------+----+
> | patient|     begin|       end| new|
> +--------+----------+----------+----+
> |waldemar|2015-09-22|2015-09-23|old1|
> +--------+----------+----------+----+{code}
>