[ https://issues.apache.org/jira/browse/FLINK-3225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123314#comment-15123314 ]

ASF GitHub Bot commented on FLINK-3225:
---------------------------------------

Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1559#discussion_r51245443
  
    --- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/api/java/table/JavaBatchTranslator.scala
 ---
    @@ -41,21 +44,13 @@ class JavaBatchTranslator extends PlanTranslator {
     
         // create table representation from DataSet
         val dataSetTable = new DataSetTable[A](
    -    repr.asInstanceOf[JavaDataSet[A]],
    -    fieldNames
    +      repr.asInstanceOf[JavaDataSet[A]],
    +      fieldNames
         )
    -
    -    // register table in Cascading schema
    -    val schema = Frameworks.createRootSchema(true)
         val tableName = repr.hashCode().toString
    -    schema.add(tableName, dataSetTable)
     
    -    // initialize RelBuilder
    -    val frameworkConfig = Frameworks
    -      .newConfigBuilder
    -      .defaultSchema(schema)
    -      .build
    -    val relBuilder = RelBuilder.create(frameworkConfig)
    +    TranslationContext.addDataSet(tableName, dataSetTable)
    --- End diff --
    
    I think we need to hold the table catalog and the RelBuilder in a 
singleton. The Scala Table API does not require a TableEnvironment (and I think 
this should stay like this for seamless integration), so there is no 
other central place to register tables (and the RelBuilder must be the same for 
all RelNodes of a query). You are right w.r.t. the hashcode; I'll use an 
AtomicCounter for the name.
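The singleton proposed in the comment above might look roughly like the following simplified sketch. `TranslationContext` and `addDataSet` appear in the PR diff; the `AtomicInteger`-based name generation and the plain mutable map are illustrative assumptions, not the actual Flink code (the real class would also hold the shared Calcite schema and RelBuilder):

```scala
import java.util.concurrent.atomic.AtomicInteger
import scala.collection.mutable

// Hypothetical simplified model of the proposed TranslationContext singleton:
// one shared catalog of registered tables, plus an atomic counter that
// generates unique table names (instead of repr.hashCode(), which can collide).
object TranslationContext {
  private val nameCounter = new AtomicInteger(0)
  private val tables = mutable.Map.empty[String, AnyRef]

  // Register a table under a freshly generated, collision-free name
  // and return that name so callers can refer to the table later.
  def addDataSet(table: AnyRef): String = {
    val name = "_DataSetTable_" + nameCounter.getAndIncrement()
    tables(name) = table
    name
  }

  def getTable(name: String): Option[AnyRef] = tables.get(name)
}
```

Because the object is a singleton, every query translated in the same JVM registers its tables in the same catalog, which is what lets all RelNodes of a query share one RelBuilder.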


> Optimize logical Table API plans in Calcite
> -------------------------------------------
>
>                 Key: FLINK-3225
>                 URL: https://issues.apache.org/jira/browse/FLINK-3225
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Table API
>            Reporter: Fabian Hueske
>            Assignee: Fabian Hueske
>
> This task implements the optimization of logical Table API plans with Apache 
> Calcite. The input of the optimization process is a logical query plan 
> consisting of Calcite RelNodes. FLINK-3223 translates Table API queries into 
> this representation.
> The result of this issue is an optimized logical plan.
> Calcite's rule-based optimizer applies query rewriting and optimization 
> rules. For Batch SQL, we can use (a subset of) Calcite’s default optimization 
> rules. 
> For this issue we have to 
> - add the Calcite optimizer to the translation process
> - select an appropriate set of batch optimization rules from Calcite’s 
> default rules. We can reuse the rules selected by Timo’s first SQL 
> implementation.
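As a toy illustration of the rule-based rewriting the description refers to: the sketch below is not Calcite's actual API (Calcite expresses this with RelOptRule implementations driven by a planner), just a self-contained model of applying rewrite rules to a plan tree until a fixpoint is reached. All types and the example rule are invented for illustration.

```scala
// Toy plan algebra standing in for Calcite RelNodes.
sealed trait Plan
case class Scan(table: String) extends Plan
case class Filter(pred: String, input: Plan) extends Plan
case class Project(fields: Seq[String], input: Plan) extends Plan

// A rewrite rule is a partial function: it fires only where it matches.
type Rule = PartialFunction[Plan, Plan]

// Example rule: collapse two stacked filters into one conjunction,
// analogous in spirit to Calcite's filter-merge rule.
val mergeFilters: Rule = {
  case Filter(p1, Filter(p2, in)) => Filter(s"$p1 AND $p2", in)
}

// Apply the rules to every node bottom-up, repeating until the plan
// stops changing (a fixpoint), which is how rule-based optimizers
// guarantee all opportunities are exhausted.
def rewrite(plan: Plan, rules: Seq[Rule]): Plan = {
  def once(p: Plan): Plan = {
    val withChildren = p match {
      case Filter(pred, in) => Filter(pred, once(in))
      case Project(fs, in)  => Project(fs, once(in))
      case s: Scan          => s
    }
    rules.foldLeft(withChildren) { (acc, r) =>
      if (r.isDefinedAt(acc)) r(acc) else acc
    }
  }
  val next = once(plan)
  if (next == plan) plan else rewrite(next, rules)
}
```

In the real implementation, the selected subset of Calcite's default rules plays the role of `mergeFilters`, and Calcite's planner drives the matching loop over the RelNode tree produced by FLINK-3223.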



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
