[ https://issues.apache.org/jira/browse/HIVE-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Work on HIVE-15850 started by Jesus Camacho Rodriguez.
------------------------------------------------------
> Proper handling of timezone in Druid storage handler
> ----------------------------------------------------
>
> Key: HIVE-15850
> URL: https://issues.apache.org/jira/browse/HIVE-15850
> Project: Hive
> Issue Type: Bug
> Components: Druid integration
> Affects Versions: 2.2.0
> Reporter: Jesus Camacho Rodriguez
> Assignee: Jesus Camacho Rodriguez
> Priority: Critical
> Attachments: HIVE-15850.patch
>
>
> We need to make sure that filters on timestamp are represented with a timezone
> when we go into Calcite, and converted back again when we go from Calcite to
> Hive. That would help us to 1) push the correct filters to Druid, and 2) ensure
> that, if filters are not pushed at all (i.e., they remain in the Calcite plan),
> they are still represented correctly in Hive. I have checked and AFAIK this is
> currently done correctly (ASTBuilder.java, ExprNodeConverter.java, and
> RexNodeConverter.java).
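>
> To illustrate point 1), here is a minimal, hypothetical sketch (not the actual
> ASTBuilder/RexNodeConverter code) of rendering a timestamp filter bound with an
> explicit offset, the ISO-8601 form Druid expects for interval bounds; the
> session time zone value below is an assumption for the example:
> {code:java}
> import java.time.LocalDateTime;
> import java.time.ZoneId;
> import java.time.ZonedDateTime;
> import java.time.format.DateTimeFormatter;
>
> public class TimestampFilterSketch {
>   public static void main(String[] args) {
>     // Hypothetical session time zone; Hive would read it from the client session.
>     ZoneId sessionTz = ZoneId.of("America/Los_Angeles");
>
>     // A timestamp literal as written in a Hive filter, e.g. ts >= '2017-02-07 10:00:00'.
>     LocalDateTime literal = LocalDateTime.of(2017, 2, 7, 10, 0, 0);
>
>     // Attach the session time zone so the instant handed to Druid is unambiguous.
>     ZonedDateTime withTz = literal.atZone(sessionTz);
>
>     // Render with an explicit offset, as a Druid interval bound expects.
>     System.out.println(withTz.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME));
>     // 2017-02-07T10:00:00-08:00
>   }
> }
> {code}
>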
> Secondly, we need to make sure we read and write timestamp data correctly from
> and to Druid.
> - When we write timestamps to Druid, we should include the timezone, which
> allows Druid to handle them properly. We already do that.
> - When we read timestamps from Druid, we should shift them to the Hive client
> timezone (see the sketch below). This gives consistent behavior between
> Druid-backed tables and standalone Hive, since timestamps in Hive are presented
> to the user in the Hive client timezone. Currently we do not do that.
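>
> A minimal sketch of that read-side conversion, under the assumption that the
> session time zone is available (hardcoded here for illustration; the real
> storage handler would take it from the Hive session configuration):
> {code:java}
> import java.sql.Timestamp;
> import java.time.Instant;
> import java.time.ZoneId;
> import java.time.ZonedDateTime;
>
> public class DruidReadSketch {
>   public static void main(String[] args) {
>     // Druid stores and returns event times as UTC instants.
>     Instant druidUtc = Instant.parse("2017-02-07T10:00:00Z");
>
>     // Hypothetical session time zone; Hive would read it from the client session.
>     ZoneId sessionTz = ZoneId.of("America/Los_Angeles");
>
>     // Shift the UTC instant into the Hive client time zone before building the
>     // value returned to Hive as a TIMESTAMP.
>     ZonedDateTime local = druidUtc.atZone(sessionTz);
>     Timestamp hiveTs = Timestamp.valueOf(local.toLocalDateTime());
>
>     System.out.println(hiveTs); // 2017-02-07 02:00:00.0
>   }
> }
> {code}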
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)