[
https://issues.apache.org/jira/browse/CALCITE-2042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16248809#comment-16248809
]
Julian Hyde commented on CALCITE-2042:
--------------------------------------
You are welcome to contribute a pull request for any open JIRA case. There are
some more details here: https://calcite.apache.org/develop/#getting-started
> Compatible types of columns combined via set operators may need explicit
> type-casting conversion
> ------------------------------------------------------------------------------------------------
>
> Key: CALCITE-2042
> URL: https://issues.apache.org/jira/browse/CALCITE-2042
> Project: Calcite
> Issue Type: Bug
> Reporter: Liao Xintao
> Assignee: Julian Hyde
> Priority: Minor
>
> It is known that columns combined via set operators (such as UNION, EXCEPT and
> INTERSECT) need not have exactly the same data type. So a SQL query like the
> following is valid:
> {code:sql}
> select sum(bonus) from (
> (select sal as bonus from emp where empno < 100)
> UNION ALL
> (select sal * 0.5 as bonus from emp where empno >= 100)
> )
> {code}
> In this case, the data types of the two "bonus" columns are INTEGER and
> DECIMAL(12, 1) respectively, and the UNION operator derives DECIMAL(12, 1) as
> the return type. The plan looks fine until I run it on Flink: the Flink
> UnionOperator validates the data types of its two inputs and throws an
> exception because the input types are not exactly the same.
> Underlying systems based on Calcite, such as Flink, could adapt themselves to
> this kind of plan by modifying some code. In my opinion, however, it would be
> better if the plan itself enforced the derived result type, (maybe) by
> rewriting the plan or adding an explicit type-casting conversion, as is
> already done when converting <case expression>.
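> As an illustration (not part of the original report), one possible rewrite
> adds an explicit cast so that both inputs of the UNION carry the derived type
> DECIMAL(12, 1); the table and column names follow the example above:
> {code:sql}
> select sum(bonus) from (
>   -- cast the INTEGER branch up to the type the UNION derives
>   (select cast(sal as decimal(12, 1)) as bonus from emp where empno < 100)
>   UNION ALL
>   (select sal * 0.5 as bonus from emp where empno >= 100)
> )
> {code}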
> Any feedback or advice is greatly appreciated.
>
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)