cloud-fan commented on a change in pull request #29767:
URL: https://github.com/apache/spark/pull/29767#discussion_r500275109



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
##########
@@ -457,6 +470,17 @@ final class DataStreamWriter[T] private[sql](ds: Dataset[T]) {
     foreachBatch((batchDs: Dataset[T], batchId: Long) => function.call(batchDs, batchId))
   }
 
+  /**
+   * Specifies the underlying output table.
+   *
+   * @since 3.1.0
+   */
+  def table(tableName: String): DataStreamWriter[T] = {

Review comment:
   I think we will have `DataStreamWriterV2` eventually (once we figure out how to design the output mode API). For now, it's more important to keep the API consistent between batch and streaming.

   I don't have a strong opinion about the naming; `table` is fine. cc @xuanyuanking @zsxwing
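
   For readers following the thread, here is a minimal usage sketch, not taken from the PR: it assumes the method keeps the name `table` shown in the diff (naming is still under discussion) and, as in the diff, returns the writer so `start()` is still called afterwards. The table names, checkpoint path, and `rate` source are purely illustrative. The point is that the streaming path mirrors the batch `saveAsTable` flow, which is the API-consistency argument above.

   ```scala
   // Minimal sketch, not from the PR: assumes the proposed `table(...)` method
   // returns the writer (as in the diff), so `start()` launches the query.
   // Table names, checkpoint path, and the `rate` source are illustrative only.
   import org.apache.spark.sql.SparkSession

   val spark = SparkSession.builder().appName("table-sink-sketch").getOrCreate()

   // Batch side today: DataFrameWriter targets a table by name.
   spark.range(10).write.saveAsTable("events_batch")

   // Streaming side with the proposed API: route the query output to a table.
   val query = spark.readStream
     .format("rate")
     .load()
     .writeStream
     .option("checkpointLocation", "/tmp/ckpt")  // hypothetical path
     .table("events_stream")                     // proposed in this PR
     .start()
   ```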



