[ https://issues.apache.org/jira/browse/HIVE-22224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16961541#comment-16961541 ]

Brandon Scheller commented on HIVE-22224:
-----------------------------------------

While looking into this, I have found an issue that makes this more complex 
than simply converting the long to a timestamp. Because Hive uses older 
versions of Avro and Parquet that do not yet support logical types, we have 
no way of knowing the precision of the long in relation to the timestamp. 
Spark can write both timestamp_millis and timestamp_micros, and Hive would 
have to guess at the precision of the number to convert it to a timestamp.
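
A minimal, self-contained sketch (hypothetical values, not Hive code) of the 
ambiguity: without the logical type annotation, the same raw long decodes to 
two very different instants depending on the precision the reader assumes.

import java.time.Instant;

public class TimestampPrecisionGuess {
  public static void main(String[] args) {
    // Raw long as Spark might write it with timestamp_micros:
    // 2019-09-20T00:00:00Z in microseconds since the epoch.
    long raw = 1568937600000000L;

    // Read as timestamp-millis: an instant roughly 49,700 years in the future.
    Instant asMillis = Instant.ofEpochMilli(raw);

    // Read as timestamp-micros: the instant that was actually written.
    Instant asMicros = Instant.ofEpochSecond(raw / 1_000_000L,
        (raw % 1_000_000L) * 1_000L);

    System.out.println("as millis: " + asMillis);
    System.out.println("as micros: " + asMicros); // 2019-09-20T00:00:00Z
  }
}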

> Support Parquet-Avro Timestamp Type
> -----------------------------------
>
>                 Key: HIVE-22224
>                 URL: https://issues.apache.org/jira/browse/HIVE-22224
>             Project: Hive
>          Issue Type: Bug
>          Components: Database/Schema
>    Affects Versions: 2.3.5, 2.3.6
>            Reporter: cdmikechen
>            Assignee: cdmikechen
>            Priority: Major
>              Labels: parquet
>             Fix For: 2.3.7
>
>
> When a user creates an external table in Hive 2.3 or earlier and imports 
> parquet-avro data written with version 1.8.2 (which supports logical 
> types), Hive cannot read timestamp type column data correctly.
> Hive reads the value as a LongWritable, since it is actually stored as a 
> long (logical_type=timestamp-millis). So we may add some code in 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableTimestampObjectInspector.java
> to let Hive cast the long type to the timestamp type.
> Something like the code below:
>  
> public Timestamp getPrimitiveJavaObject(Object o) {
>   // Raw long from parquet-avro: treat it as epoch milliseconds
>   // (logical_type=timestamp-millis) and wrap it in a Timestamp.
>   if (o instanceof LongWritable) {
>     return new Timestamp(((LongWritable) o).get());
>   }
>   // Normal path: unwrap the TimestampWritable.
>   return o == null ? null : ((TimestampWritable) o).getTimestamp();
> }
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
