[ https://issues.apache.org/jira/browse/FLINK-3226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15158704#comment-15158704 ]
ASF GitHub Bot commented on FLINK-3226:
---------------------------------------
Github user fhueske commented on a diff in the pull request:
https://github.com/apache/flink/pull/1679#discussion_r53763441
--- Diff: flink-libraries/flink-table/src/test/scala/org/apache/flink/api/scala/table/test/StringExpressionsITCase.scala ---
@@ -18,42 +18,20 @@
package org.apache.flink.api.scala.table.test
-import org.apache.flink.api.table.{Row, ExpressionException}
import org.apache.flink.api.scala._
import org.apache.flink.api.scala.table._
-import org.apache.flink.test.util.{TestBaseUtils, MultipleProgramsTestBase}
+import org.apache.flink.api.table.Row
import org.apache.flink.test.util.MultipleProgramsTestBase.TestExecutionMode
+import org.apache.flink.test.util.{MultipleProgramsTestBase, TestBaseUtils}
import org.junit._
import org.junit.runner.RunWith
import org.junit.runners.Parameterized
+
import scala.collection.JavaConverters._
-import org.apache.flink.api.table.codegen.CodeGenException
@RunWith(classOf[Parameterized])
class StringExpressionsITCase(mode: TestExecutionMode) extends MultipleProgramsTestBase(mode) {
- @Test(expected = classOf[CodeGenException])
--- End diff ---
Why did you remove these tests? The `ScalarFunctionsTest` does not test the
feature end-to-end, right?
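
For reference, an end-to-end negative test of the kind being discussed might look as follows. The removed test body is truncated in the diff above, so the class name, input data, and expression below are an assumed reconstruction for illustration, not the original code:

```scala
import org.apache.flink.api.scala._
import org.apache.flink.api.scala.table._
import org.apache.flink.api.table.Row
import org.apache.flink.api.table.codegen.CodeGenException
import org.junit.Test

// Hypothetical reconstruction: runs an invalid expression through the full
// pipeline (plan translation, code generation, DataSet execution), unlike a
// unit test of the generated scalar functions alone.
class HypotheticalStringExpressionsITCase {

  @Test(expected = classOf[CodeGenException])
  def testNonWorkingSubstring(): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val t = env.fromElements(("AAAA", 2.0), ("BBBB", 1.0)).toTable
      // a non-integer length argument is assumed to be rejected
      // during code generation
      .select('_1.substring(0, '_2))
    t.toDataSet[Row].collect()
  }
}
```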
> Translate optimized logical Table API plans into physical plans representing DataSet programs
> ---------------------------------------------------------------------------------------------
>
> Key: FLINK-3226
> URL: https://issues.apache.org/jira/browse/FLINK-3226
> Project: Flink
> Issue Type: Sub-task
> Components: Table API
> Reporter: Fabian Hueske
> Assignee: Chengxiang Li
>
> This issue is about translating an (optimized) logical Table API (see
> FLINK-3225) query plan into a physical plan. The physical plan is a 1-to-1
> representation of the DataSet program that will be executed. This means:
> - Each Flink RelNode refers to exactly one Flink DataSet or DataStream
> operator.
> - All (join and grouping) keys of Flink operators are correctly specified.
> - The expressions that are to be executed in user code are identified.
> - All fields are referenced with their physical execution-time index.
> - Flink type information is available.
> - Optional: Add physical execution hints for joins.
> The translation should be the final part of Calcite's optimization process.
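
To make the 1-to-1 mapping described above concrete, a Flink DataSet RelNode might look roughly like the following sketch; all names (`DataSetCalcSketch`, `translateToPlan`) are invented for illustration and are not claimed to match the actual implementation:

```scala
import org.apache.calcite.plan.{RelOptCluster, RelTraitSet}
import org.apache.calcite.rel.{RelNode, SingleRel}
import org.apache.calcite.rex.RexNode
import org.apache.flink.api.java.DataSet
import org.apache.flink.api.table.Row

// One RelNode per DataSet operator: the node carries the user-code
// expression, physical field indexes, and type information needed to
// emit exactly one operator.
class DataSetCalcSketch(
    cluster: RelOptCluster,
    traits: RelTraitSet,
    input: RelNode,
    val calcProgram: RexNode)
  extends SingleRel(cluster, traits, input) {

  /** Emits the single DataSet operator this node represents. */
  def translateToPlan: DataSet[Row] = {
    // Would generate a FlatMapFunction from `calcProgram` and apply it
    // to the translated input; left unimplemented in this sketch.
    ???
  }
}
```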
> For this task we need to:
> - implement a set of Flink DataSet RelNodes. Each RelNode corresponds to one
> Flink DataSet operator (Map, Reduce, Join, ...). The RelNodes must hold all
> relevant operator information (keys, user-code expression, strategy hints,
> parallelism).
> - implement rules to translate optimized Calcite RelNodes into Flink
> RelNodes (see the rule sketch below). We start with a straightforward
> mapping and later add rules that merge several relational operators into a
> single Flink operator, e.g., merge a join followed by a filter. Timo
> implemented some rules for the first SQL implementation, which can be used
> as a starting point.
> - integrate the translation rules into the Calcite optimization process
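
Building on the node sketched above, such a translation rule could be a Calcite `ConverterRule` along these lines; the convention object and rule name are again invented for illustration:

```scala
import org.apache.calcite.plan.{Convention, RelOptRule}
import org.apache.calcite.rel.RelNode
import org.apache.calcite.rel.convert.ConverterRule
import org.apache.calcite.rel.logical.LogicalFilter

// Hypothetical calling convention marking nodes that translate to
// DataSet operators.
object DataSetConventionSketch {
  val INSTANCE: Convention = new Convention.Impl("FLINK_DATASET", classOf[RelNode])
}

// Converts a logical filter into the single-operator DataSet node above.
class DataSetFilterRuleSketch extends ConverterRule(
    classOf[LogicalFilter],
    Convention.NONE,
    DataSetConventionSketch.INSTANCE,
    "DataSetFilterRuleSketch") {

  override def convert(rel: RelNode): RelNode = {
    val filter = rel.asInstanceOf[LogicalFilter]
    val input = RelOptRule.convert(filter.getInput, DataSetConventionSketch.INSTANCE)
    new DataSetCalcSketch(
      rel.getCluster,
      rel.getTraitSet.replace(DataSetConventionSketch.INSTANCE),
      input,
      filter.getCondition)
  }
}
```

Registering such rules with the Calcite planner, as the last bullet describes, would make the logical-to-physical conversion the final phase of the optimization pass.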
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)