[ https://issues.apache.org/jira/browse/CALCITE-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15885564#comment-15885564 ]

Jesus Camacho Rodriguez commented on CALCITE-1661:
--------------------------------------------------

[~julianhyde], could you take a look at the PR in 
https://github.com/apache/calcite/pull/385 ? This problem is not reproducible 
in Calcite itself, since all columns coming from Druid metrics are considered 
either DOUBLE or LONG, and if a CAST is in the way, we will not push it to 
Druid. However, when we create our own tables in Druid from Hive, other data 
types (FLOAT, DECIMAL) might come from the Druid table. Thanks

> Recognize aggregation function types as FRACTIONAL instead of DOUBLE
> --------------------------------------------------------------------
>
>                 Key: CALCITE-1661
>                 URL: https://issues.apache.org/jira/browse/CALCITE-1661
>             Project: Calcite
>          Issue Type: Bug
>          Components: druid
>    Affects Versions: 1.12.0
>            Reporter: Jesus Camacho Rodriguez
>            Assignee: Jesus Camacho Rodriguez
>             Fix For: 1.12.0
>
>
> Currently, whether to use fractional or integer aggregations is decided by 
> the following code (line 699 in DruidQuery.java).
> {code}
> final boolean b = aggCall.getType().getSqlTypeName() == SqlTypeName.DOUBLE;
> {code}
> Since Hive might use other fractional types for the aggregation, we might end 
> up using the wrong type of aggregation in Druid. We could extend the check as 
> follows:
> {code}
> final boolean b =
>     SqlTypeName.FRACTIONAL_TYPES.contains(aggCall.getType().getSqlTypeName());
> {code}
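To make the difference concrete, here is a minimal, self-contained Java sketch. The enum below only mimics the relevant part of Calcite's SqlTypeName (it is not the real class), and FRACTIONAL_TYPES is assumed to cover DECIMAL plus the approximate numeric types, as in Calcite. It shows how the set-membership check catches FLOAT and DECIMAL aggregations that the DOUBLE-only comparison misses:

```java
import java.util.EnumSet;

public class FractionalCheck {
    // Hypothetical stand-in for Calcite's org.apache.calcite.sql.type.SqlTypeName
    enum SqlTypeName { TINYINT, INTEGER, BIGINT, DECIMAL, FLOAT, REAL, DOUBLE }

    // Assumed contents of SqlTypeName.FRACTIONAL_TYPES: DECIMAL + approximate types
    static final EnumSet<SqlTypeName> FRACTIONAL_TYPES =
        EnumSet.of(SqlTypeName.DECIMAL, SqlTypeName.FLOAT,
                   SqlTypeName.REAL, SqlTypeName.DOUBLE);

    // Old check: only an exact DOUBLE aggregation type counts as fractional
    static boolean isFractionalOld(SqlTypeName t) {
        return t == SqlTypeName.DOUBLE;
    }

    // Proposed check: any fractional SQL type selects a fractional Druid aggregation
    static boolean isFractionalNew(SqlTypeName t) {
        return FRACTIONAL_TYPES.contains(t);
    }

    public static void main(String[] args) {
        // A FLOAT aggregation from a Hive-created Druid table:
        // the old check misclassifies it as integer, the new one does not.
        System.out.println(isFractionalOld(SqlTypeName.FLOAT)); // false
        System.out.println(isFractionalNew(SqlTypeName.FLOAT)); // true
    }
}
```

With the old check, a FLOAT or DECIMAL SUM would be sent to Druid as an integer (long) aggregation, silently truncating fractional values; the set-based check avoids that.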



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
