wuchong commented on a change in pull request #8678: [FLINK-12708][table] Introduce new source and sink interfaces to make Blink runner work
URL: https://github.com/apache/flink/pull/8678#discussion_r292295619
 
 

 ##########
 File path: flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/plan/nodes/physical/batch/BatchExecSink.scala
 ##########
 @@ -77,11 +78,12 @@ class BatchExecSink[T](
   override protected def translateToPlanInternal(
       tableEnv: BatchTableEnvironment): StreamTransformation[Any] = {
     val resultTransformation = sink match {
-      case batchTableSink: BatchTableSink[T] =>
+      case boundedTableSink: StreamTableSink[T] =>
+        // we can insert the bounded DataStream into a StreamTableSink
         val transformation = translateToStreamTransformation(withChangeFlag = false, tableEnv)
         val boundedStream = new DataStream(tableEnv.streamEnv, transformation)
-        batchTableSink.emitBoundedStream(
-          boundedStream, tableEnv.getConfig, tableEnv.streamEnv.getConfig).getTransformation
+        boundedTableSink.emitDataStream(boundedStream)
 
 Review comment:
   The concern here is whether we should only support `BoundedTableSink` or also support `StreamTableSink`. I think that, for the Blink planner, we can also use `StreamTableSink` as a batch table sink, similar to `BoundedTableSource`. `StreamTableSink` is very much like the first version of our unified sink interface.
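
   To make the point concrete, here is a minimal, hypothetical sketch of a `StreamTableSink[Row]` (the `PrintingStreamTableSink` name and its printing behaviour are made up for illustration, not part of this PR). Because `emitDataStream` only consumes a `DataStream`, the same implementation works whether the planner hands it an unbounded stream or the bounded stream built by `BatchExecSink` in the diff above.

```scala
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.api.java.typeutils.RowTypeInfo
import org.apache.flink.streaming.api.datastream.DataStream
import org.apache.flink.table.sinks.{StreamTableSink, TableSink}
import org.apache.flink.types.Row

// Hypothetical sink, for illustration only: it simply prints every record.
class PrintingStreamTableSink(
    fieldNames: Array[String],
    fieldTypes: Array[TypeInformation[_]])
  extends StreamTableSink[Row] {

  override def getFieldNames: Array[String] = fieldNames

  override def getFieldTypes: Array[TypeInformation[_]] = fieldTypes

  override def getOutputType: TypeInformation[Row] =
    new RowTypeInfo(fieldTypes, fieldNames)

  override def configure(
      fieldNames: Array[String],
      fieldTypes: Array[TypeInformation[_]]): TableSink[Row] =
    new PrintingStreamTableSink(fieldNames, fieldTypes)

  // The sink never needs to know whether its input is bounded: on an
  // unbounded stream it keeps printing, while on the bounded stream
  // produced for the batch case it finishes once the input is exhausted.
  override def emitDataStream(dataStream: DataStream[Row]): Unit =
    dataStream.print()
}
```

   This is also why `StreamTableSink` looks so close to a first version of a unified sink interface: boundedness becomes a property of the input stream rather than of the sink class itself.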
