[
https://issues.apache.org/jira/browse/CALCITE-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16847187#comment-16847187
]
Feng Zhu commented on CALCITE-3077:
-----------------------------------
[~hyuan], migration is one of the scenarios we have confronted on our platform.
We are currently building a system based on Calcite to analyze data that
resides in different data centers.
After the optimization phase, we generate several SQL statements. Each
JdbcToEnumerableConverter node is rendered as a SQL query and registered as a
JDBC view in SparkSQL; the final SQL is then executed in Spark. We once tried
the SparkHandler in Calcite and found it essentially unusable. The SQL API is
also more general than the low-level RDD/DataFrame/Dataset APIs, which evolve
rapidly. Therefore, RelToSqlConverter is critical for us :).
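For context, here is a minimal sketch (my illustration, not from the issue) of the kind of SQL generation described above, assuming Calcite's RelToSqlConverter and SparkSqlDialect are on the classpath; exact method names vary across Calcite versions:
{code:java}
// Sketch only: render an optimized RelNode as Spark SQL text.
// Assumes Apache Calcite is on the classpath; visitRoot(..) is the
// entry point in recent Calcite versions (older ones used visitChild).
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.rel2sql.RelToSqlConverter;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.dialect.SparkSqlDialect;

public final class SparkSqlGenerator {
  public static String toSparkSql(RelNode rel) {
    RelToSqlConverter converter =
        new RelToSqlConverter(SparkSqlDialect.DEFAULT);
    SqlNode statement = converter.visitRoot(rel).asStatement();
    return statement.toSqlString(SparkSqlDialect.DEFAULT).getSql();
  }
}
{code}
The resulting string is what would be registered as a JDBC view in SparkSQL, which is why dialect-correct CUBE/ROLLUP/GROUPING SETS output matters.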
> Rewrite CUBE&ROLLUP&GROUPING SETS queries in SparkSqlDialect
> ------------------------------------------------------------
>
> Key: CALCITE-3077
> URL: https://issues.apache.org/jira/browse/CALCITE-3077
> Project: Calcite
> Issue Type: Bug
> Components: core
> Affects Versions: 1.20.0
> Reporter: Feng Zhu
> Assignee: Feng Zhu
> Priority: Major
>
> *Background:* we are building a platform that adopts Calcite to process
> (i.e., parse&validate&convert&optimize) SQL queries and then regenerate the
> final SQL. To handle large data volumes, we use the popular SparkSQL engine
> to execute the generated SQL query.
> However, we found that a large portion of our real-world test cases failed
> due to syntax differences in the
> *_CUBE/ROLLUP/GROUPING SETS_* clauses. The Spark SQL dialect supports only
> "WITH ROLLUP"/"WITH CUBE" in the "GROUP BY" clause. The corresponding
> grammar [1] is defined as below.
> {code:java}
> aggregation
> : GROUP BY groupingExpressions+=expression (','
> groupingExpressions+=expression)* (
> WITH kind=ROLLUP
> | WITH kind=CUBE
> | kind=GROUPING SETS '(' groupingSet (',' groupingSet)* ')')?
> | GROUP BY kind=GROUPING SETS '(' groupingSet (',' groupingSet)* ')'
> ;
> {code}
> To fill this gap, I think we need to rewrite the CUBE/ROLLUP/GROUPING SETS
> clauses in SparkSqlDialect, especially for complex cases such as:
> {code:java}
> group by cube ((a, b), (c, d))
> group by cube(a,b), cube(c,d)
> {code}
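> As an illustration (my sketch, not part of the issue): the simple case maps directly onto Spark's "WITH CUBE" form, while composite arguments must be expanded into an explicit GROUPING SETS list. For instance, CUBE((a, b), (c, d)) enumerates all subsets of its two composite arguments, including the empty grouping set, which Spark's grammar above permits:
> {code:sql}
> -- GROUP BY CUBE(a, b) maps directly onto Spark syntax:
> GROUP BY a, b WITH CUBE
> -- GROUP BY CUBE((a, b), (c, d)) has composite arguments, so it must be
> -- rewritten as an explicit enumeration of grouping sets:
> GROUP BY GROUPING SETS ((a, b, c, d), (a, b), (c, d), ())
> {code}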
> [1]https://github.com/apache/spark/blob/master/sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)