[ 
https://issues.apache.org/jira/browse/HIVE-3844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541036#comment-13541036
 ] 

Mark Grover commented on HIVE-3844:
-----------------------------------

Thanks Ashutosh. HIVE-3454 deals with specifying milliseconds when casting 
numerical unix timestamps to the Timestamp type, which makes HIVE-3822 a 
duplicate of it. I will mark it as such. However, this issue still stands: a 
file on HDFS containing unix timestamps (whether in seconds or milliseconds) 
can't be read directly as a Timestamp column in a Hive table. The workaround 
is to declare the column as BIGINT and use cast(my_col as timestamp) in 
queries.
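A minimal sketch of the workaround (the table name and delimiter here are hypothetical, not taken from the report):
{code}
-- Declare the column holding unix timestamps as BIGINT instead of TIMESTAMP
CREATE TABLE events_raw (date_occurrence BIGINT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

-- Cast to timestamp at query time; see HIVE-3454 for how seconds vs.
-- milliseconds are interpreted by the cast in a given Hive version
SELECT cast(date_occurrence AS timestamp) FROM events_raw LIMIT 10;
{code}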
                
> Unix timestamps don't seem to be read correctly from HDFS as Timestamp column
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-3844
>                 URL: https://issues.apache.org/jira/browse/HIVE-3844
>             Project: Hive
>          Issue Type: Bug
>          Components: Serializers/Deserializers
>    Affects Versions: 0.8.0
>            Reporter: Mark Grover
>            Assignee: Mark Grover
>
> Serega Shepak pointed out that something like
> {code}
> select cast(date_occurrence as timestamp) from xvlr_data limit 10
> {code}
> where date_occurrence has BIGINT type (a unix timestamp in milliseconds). But 
> it doesn't work if the column's declared type is TIMESTAMP. The data in the 
> date_occurrence column is a unix timestamp in milliseconds.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira