twalthr commented on a change in pull request #16290:
URL: https://github.com/apache/flink/pull/16290#discussion_r663849045



##########
File path: flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/Table.java
##########
@@ -1355,6 +1355,28 @@ default Table limit(int offset, int fetch) {
      */
     TableResult executeInsert(String tablePath, boolean overwrite);
 
+    /**
+     * Declares that the pipeline defined by the given {@link Table} object should be written to a
+     * table defined by a {@link TableDescriptor}. It executes the insert operation.

Review comment:
       Side comment for the other `executeInsert` methods as well: we should explain that calling `executeInsert` twice with the same descriptor will not write to the same sink (each call registers a new anonymous table), and recommend a statement set instead.
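To make the review comment concrete, here is a minimal, hypothetical Java sketch (not Flink's actual implementation) of the semantics being described: each `executeInsert(TableDescriptor)`-style call registers the descriptor under a freshly generated anonymous identifier, so two calls with the same descriptor target two distinct sink tables rather than one shared sink. The class `AnonymousSinkDemo` and its stand-in `executeInsert` method are invented for illustration only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical mock illustrating why reusing the same TableDescriptor
// does not lead to the same sink: every call generates a fresh
// anonymous identifier for the descriptor.
public class AnonymousSinkDemo {
    static final AtomicInteger counter = new AtomicInteger();
    static final List<String> registeredSinks = new ArrayList<>();

    // Stand-in for Table#executeInsert(TableDescriptor): registers the
    // descriptor under a new anonymous name and returns that name.
    static String executeInsert(String descriptor) {
        String anonymousId =
                "*anonymous$" + descriptor + "$" + counter.incrementAndGet() + "*";
        registeredSinks.add(anonymousId);
        return anonymousId;
    }

    public static void main(String[] args) {
        String first = executeInsert("kafka-sink");
        String second = executeInsert("kafka-sink"); // same descriptor again
        // The two inserts do NOT share a sink: the identifiers differ.
        System.out.println(first.equals(second));
        System.out.println(registeredSinks.size());
    }
}
```

In real Flink code, grouping both inserts into a single `StatementSet` (as the reviewer recommends) is the way to send multiple inserts through one pipeline execution.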

##########
File path: flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/sql/CalcITCase.scala
##########
@@ -548,4 +548,25 @@ class CalcITCase extends StreamingTestBase {
     TestBaseUtils.compareResultAsText(result, "42")
   }
 
+  @Test
+  def testExecuteInsertToTableDescriptor(): Unit = {

Review comment:
       I think it is fine to use an ITCase here because we also trigger an execution, but `Calc` is not a good location for finding this test again. Maybe `org.apache.flink.table.planner.runtime.stream.table.TableSinkITCase`?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]