[ 
https://issues.apache.org/jira/browse/HIVE-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-15850:
-------------------------------------------
    Comment: was deleted

(was: GitHub user jcamachor opened a pull request:

    https://github.com/apache/hive/pull/143

    HIVE-15850: Proper handling of timezone in Druid storage handler

    

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jcamachor/hive HIVE-15850

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/hive/pull/143.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #143
    
----
commit 00de5729c4a8bfe8f1adc3410e7510f7286c85b4
Author: Jesus Camacho Rodriguez <[email protected]>
Date:   2017-02-08T13:16:41Z

    HIVE-15850: Proper handling of timezone in Druid storage handler

----
)

> Proper handling of timezone in Druid storage handler
> ----------------------------------------------------
>
>                 Key: HIVE-15850
>                 URL: https://issues.apache.org/jira/browse/HIVE-15850
>             Project: Hive
>          Issue Type: Bug
>          Components: Druid integration
>    Affects Versions: 2.2.0
>            Reporter: Jesus Camacho Rodriguez
>            Assignee: Jesus Camacho Rodriguez
>            Priority: Critical
>
> We need to make sure that filters on timestamp are represented with timezone 
> when we go into Calcite, and converted back when we return from Calcite to 
> Hive. That ensures we 1) push the correct filters to Druid, and 2) if 
> filters are not pushed at all (they remain in the Calcite plan), they are 
> still represented correctly in Hive. I have checked and AFAIK this is 
> currently done correctly (ASTBuilder.java, ExprNodeConverter.java, and 
> RexNodeConverter.java).
> Secondly, we need to make sure we read/write timestamp data correctly from/to 
> Druid.
> - When we write timestamps to Druid, we should include the timezone, which 
> allows Druid to handle them properly. We do that already.
> - When we read timestamps from Druid, we should transform them to the Hive 
> client timezone. This gives consistent behavior between Druid-on-Hive and 
> standalone Hive, since a timestamp in Hive is presented to the user in the 
> Hive client timezone. Currently we do not do that.
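The read-side conversion described above can be sketched as follows. This is not the patch's actual code; it is a minimal illustration, assuming Druid hands back event times as UTC epoch milliseconds and that a hypothetical helper renders them as wall-clock strings in the Hive client's timezone:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class DruidTimestampSketch {

    // Hypothetical helper (not part of the Hive/Druid codebase): Druid stores
    // event times in UTC, so when reading we shift the instant into the Hive
    // client's timezone before rendering, so Druid-backed tables and native
    // Hive tables show the same wall-clock value to the user.
    static String toClientWallClock(long utcEpochMillis, ZoneId clientZone) {
        ZonedDateTime zdt = Instant.ofEpochMilli(utcEpochMillis).atZone(clientZone);
        return zdt.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
    }

    public static void main(String[] args) {
        long millis = 1486559801000L; // 2017-02-08T13:16:41Z
        // Same instant, rendered per client timezone:
        System.out.println(toClientWallClock(millis, ZoneId.of("UTC")));
        System.out.println(toClientWallClock(millis, ZoneId.of("America/Los_Angeles")));
    }
}
```

Without this shift, a client in America/Los_Angeles would see the raw UTC value (13:16:41) instead of the local wall-clock time (05:16:41), diverging from standalone Hive's behavior.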



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
