[ 
https://issues.apache.org/jira/browse/CALCITE-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15887372#comment-15887372
 ] 

Ashutosh Chauhan commented on CALCITE-1661:
-------------------------------------------

I agree that if a column is declared as DECIMAL in Hive, users would expect 
calculations to be precise, and doing them in DOUBLE would surprise them. 
However, I think it is worth considering adding a mode, accept.approx.resultset 
(false by default), in Hive/Calcite to allow this. We could use this mode 
elsewhere too, e.g., for top-N queries, which also return approximate result 
sets. In my experience, in the world of big data, at least some users are 
willing to trade accuracy for speed. 
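As a rough sketch of the proposed mode, the planner could gate the precise-vs-approximate decision on a session property. The property name comes from the comment above; the helper and the `Map`-based session config below are hypothetical, not an existing Hive/Calcite API:

```java
import java.util.Map;

public class ApproxResultMode {
    // Hypothetical helper: reads the proposed "accept.approx.resultset"
    // session property, defaulting to false (precise results required).
    static boolean approxAllowed(Map<String, String> sessionConf) {
        return Boolean.parseBoolean(
            sessionConf.getOrDefault("accept.approx.resultset", "false"));
    }

    public static void main(String[] args) {
        // Default: DECIMAL math must stay exact.
        System.out.println(approxAllowed(Map.of()));
        // Opt-in: the planner may push DECIMAL aggregates down as DOUBLE.
        System.out.println(
            approxAllowed(Map.of("accept.approx.resultset", "true")));
    }
}
```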

> Recognize aggregation function types as FRACTIONAL instead of DOUBLE
> --------------------------------------------------------------------
>
>                 Key: CALCITE-1661
>                 URL: https://issues.apache.org/jira/browse/CALCITE-1661
>             Project: Calcite
>          Issue Type: Bug
>          Components: druid
>    Affects Versions: 1.12.0
>            Reporter: Jesus Camacho Rodriguez
>            Assignee: Jesus Camacho Rodriguez
>             Fix For: 1.12.0
>
>
> Currently, whether to use fractional or integer aggregations is based on the 
> following code (line 699 in DruidQuery.java).
> {code}
> final boolean b = aggCall.getType().getSqlTypeName() == SqlTypeName.DOUBLE;
> {code}
> Since Hive might use other fractional types for the aggregation, we might end 
> up using the wrong type of aggregation in Druid. We could extend the check as 
> follows:
> {code}
> final boolean b =
>     SqlTypeName.FRACTIONAL_TYPES.contains(aggCall.getType().getSqlTypeName());
> {code}
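To make the difference between the two checks concrete, here is a minimal, self-contained sketch. The `SqlTypeName` enum below is a stand-in for Calcite's real class (the names match the real enum, and `FRACTIONAL_TYPES` is assumed to cover DECIMAL, FLOAT, REAL, and DOUBLE, as in Calcite); it exists only so the example runs without a Calcite dependency:

```java
import java.util.EnumSet;

// Stand-in for Calcite's SqlTypeName, trimmed to the types relevant here.
enum SqlTypeName {
    INTEGER, BIGINT, DECIMAL, FLOAT, REAL, DOUBLE;

    // Assumed to mirror Calcite's FRACTIONAL_TYPES constant.
    static final EnumSet<SqlTypeName> FRACTIONAL_TYPES =
        EnumSet.of(DECIMAL, FLOAT, REAL, DOUBLE);
}

public class FractionalCheck {
    // Current check: only DOUBLE aggregations are treated as fractional.
    static boolean isFractionalOld(SqlTypeName t) {
        return t == SqlTypeName.DOUBLE;
    }

    // Proposed check: any fractional type qualifies.
    static boolean isFractionalNew(SqlTypeName t) {
        return SqlTypeName.FRACTIONAL_TYPES.contains(t);
    }

    public static void main(String[] args) {
        // A Hive DECIMAL aggregate slips through the current check ...
        System.out.println(isFractionalOld(SqlTypeName.DECIMAL)); // false
        // ... but is recognized by the proposed containment check.
        System.out.println(isFractionalNew(SqlTypeName.DECIMAL)); // true
        // Integer aggregations are still excluded either way.
        System.out.println(isFractionalNew(SqlTypeName.BIGINT));  // false
    }
}
```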



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
