[ https://issues.apache.org/jira/browse/FLINK-6516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004309#comment-16004309 ]

ASF GitHub Bot commented on FLINK-6516:
---------------------------------------

Github user fhueske commented on a diff in the pull request:

    https://github.com/apache/flink/pull/3860#discussion_r115683417
  
    --- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/logical/FlinkLogicalTableSourceScan.scala ---
    @@ -104,6 +105,21 @@ class FlinkLogicalTableSourceScan(
           s"Scan($s)"
         }
       }
    +
    +  override def estimateRowCount(mq: RelMetadataQuery): Double = {
    +    val tableSourceTable = getTable.unwrap(classOf[TableSourceTable[_]])
    +
    +    if (tableSourceTable.getStatistic != FlinkStatistic.UNKNOWN) {
    --- End diff --
    
    Same as above. We somehow need to incorporate the selectivity of a 
pushed-down filter.
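
To make the point concrete, here is a minimal sketch (not the code under review) of how the scan could fold the selectivity of a pushed-down filter into its estimate. The pushedDownPredicate field is hypothetical, and Calcite's RelMdUtil.guessSelectivity is just one way to obtain a default selectivity when no better metadata is available.

    import org.apache.calcite.rel.metadata.{RelMdUtil, RelMetadataQuery}
    import org.apache.calcite.rex.RexNode

    // Sketch, assumed to live inside FlinkLogicalTableSourceScan:

    // Hypothetical: the predicate that was pushed into the TableSource, if any.
    val pushedDownPredicate: Option[RexNode] = None

    override def estimateRowCount(mq: RelMetadataQuery): Double = {
      val tableSourceTable = getTable.unwrap(classOf[TableSourceTable[_]])

      // Base estimate: prefer real statistics over Calcite's dummy default.
      val baseRowCount: Double =
        if (tableSourceTable.getStatistic != FlinkStatistic.UNKNOWN &&
            tableSourceTable.getStatistic.getRowCount != null) {
          tableSourceTable.getStatistic.getRowCount
        } else {
          super.estimateRowCount(mq)
        }

      // A filter pushed into the source no longer shows up as a separate
      // Filter/Calc node in the plan, so its selectivity must be applied here.
      pushedDownPredicate match {
        case Some(predicate) => baseRowCount * RelMdUtil.guessSelectivity(predicate)
        case None            => baseRowCount
      }
    }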


> using real row count instead of dummy row count when optimizing plan
> --------------------------------------------------------------------
>
>                 Key: FLINK-6516
>                 URL: https://issues.apache.org/jira/browse/FLINK-6516
>             Project: Flink
>          Issue Type: Improvement
>          Components: Table API & SQL
>            Reporter: godfrey he
>            Assignee: godfrey he
>
> Currently, the statistic of {{TableSourceTable}} is mostly {{UNKNOWN}}, and 
> the statistic from {{ExternalCatalog}} may also be null. In fact, only the 
> {{TableSource}} itself knows its statistics exactly, especially for 
> {{FilterableTableSource}} and {{PartitionableTableSource}}. So we can add a 
> {{getTableStats}} method to {{TableSource}} and use it in TableSourceScan's 
> estimateRowCount method to get the real row count.
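
Expanding on the quoted proposal, a rough sketch of what the getTableStats hook could look like. Only the method name comes from the issue text; the TableStats accessors and the null-means-unknown convention are assumptions for illustration.

    import org.apache.flink.table.plan.stats.TableStats

    // Sketch of the addition to the existing TableSource trait:
    trait TableSource[T] {
      // ... existing TableSource methods ...

      /** Statistics that only the concrete source knows exactly, e.g. after a
        * filter or partition push-down. Returning null means "unknown". */
      def getTableStats: TableStats = null
    }

    // TableSourceScan.estimateRowCount could then prefer the real count over
    // the dummy default, falling back when the source reports nothing:
    //
    //   val stats = tableSource.getTableStats
    //   if (stats != null && stats.rowCount != null) stats.rowCount.toDouble
    //   else super.estimateRowCount(mq)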


