[
https://issues.apache.org/jira/browse/SPARK-10177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14708968#comment-14708968
]
Cheng Lian commented on SPARK-10177:
------------------------------------
Ah, didn't realize that the {{string_timestamp.gz}} file attached to SPARK-4768
is actually a tar.gz archive containing multiple Parquet files. Now I can
decode them.
> Parquet support interprets timestamp values differently from Hive 0.14.0+
> -------------------------------------------------------------------------
>
> Key: SPARK-10177
> URL: https://issues.apache.org/jira/browse/SPARK-10177
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.5.0
> Reporter: Cheng Lian
> Assignee: Cheng Lian
> Priority: Blocker
> Attachments: 000000_0
>
>
> Running the following SQL under Hive 0.14.0+ (tested against 0.14.0 and
> 1.2.1):
> {code:sql}
> CREATE TABLE ts_test STORED AS PARQUET
> AS SELECT CAST("2015-01-01 00:00:00" AS TIMESTAMP);
> {code}
> Then read the Parquet file generated by Hive with Spark SQL:
> {noformat}
> scala> sqlContext.read.parquet("hdfs://localhost:9000/user/hive/warehouse_hive14/ts_test").collect()
> res1: Array[org.apache.spark.sql.Row] = Array([2015-01-01 12:00:00.0])
> {noformat}
> This issue can be easily reproduced with [this test case in PR
> #8392|https://github.com/apache/spark/pull/8392/files#diff-1e55698cc579cbae676f827a89c2dc2eR116].
> Spark 1.4.1 works as expected in this case.
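The exactly-12-hour shift in the quoted reproduction (00:00:00 read back as 12:00:00) is the signature of mismatched Julian-day anchoring: Parquet's INT96 timestamps carry a Julian day number plus nanoseconds within the day, and Julian days begin at noon rather than midnight. The following is a toy sketch of that mechanism only; the constants and function names are mine, and the thread does not confirm this is the actual root cause of SPARK-10177.

```python
# Toy illustration (hypothetical names/constants, not Spark or Hive source):
# if the writer counts Julian days from midnight while the reader anchors
# them at noon (the Julian convention), every value comes back 12 hours late.
from datetime import datetime, timedelta

JULIAN_DAY_2000_01_01 = 2451545          # Julian day that began at noon, 2000-01-01
MIDNIGHT_EPOCH = datetime(2000, 1, 1, 0, 0, 0)
NOON_EPOCH = datetime(2000, 1, 1, 12, 0, 0)

def encode_midnight_anchored(ts):
    """Writer that (incorrectly) counts Julian days from midnight."""
    days, rem = divmod(ts - MIDNIGHT_EPOCH, timedelta(days=1))
    nanos = int(rem / timedelta(microseconds=1)) * 1000
    return JULIAN_DAY_2000_01_01 + days, nanos

def decode_noon_anchored(julian_day, nanos):
    """Reader that anchors Julian days at noon, per the Julian convention."""
    return (NOON_EPOCH
            + timedelta(days=julian_day - JULIAN_DAY_2000_01_01)
            + timedelta(microseconds=nanos // 1000))

round_tripped = decode_noon_anchored(*encode_midnight_anchored(datetime(2015, 1, 1)))
print(round_tripped)  # 2015-01-01 12:00:00
```

Anchoring both sides at the same epoch (either one) makes the round trip exact, which is why the discrepancy only shows up across the Hive-writer/Spark-reader boundary.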
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)