keypointt commented on a change in pull request #25354: [SPARK-28612][SQL] Add DataFrameWriterV2 API
URL: https://github.com/apache/spark/pull/25354#discussion_r316006273
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala
 ##########
 @@ -3178,6 +3178,34 @@ class Dataset[T] private[sql](
     new DataFrameWriter[T](this)
   }
 
+  /**
+   * Create a write configuration builder for v2 sources.
+   *
 +   * This builder is used to configure and execute write operations. For example, to append to an
 +   * existing table, run:
+   *
+   * {{{
+   *   df.writeTo("catalog.db.table").append()
+   * }}}
+   *
+   * This can also be used to create or replace existing tables:
+   *
+   * {{{
+   *   df.writeTo("catalog.db.table").partitionedBy($"col").createOrReplace()
+   * }}}
+   *
+   * @group basic
+   * @since 3.0.0
+   */
+  def writeTo(table: String): DataFrameWriterV2[T] = {
 
 Review comment:
   I see there is already a `write` method for v1:
   
   ```
     /**
     * Interface for saving the content of the non-streaming Dataset out into external storage.
      *
      * @group basic
      * @since 1.6.0
      */
     def write: DataFrameWriter[T] = {
       if (isStreaming) {
         logicalPlan.failAnalysis(
           "'write' can not be called on streaming Dataset/DataFrame")
       }
       new DataFrameWriter[T](this)
     }
   ```
   Why not name it `writeV2` to be self-explanatory? Or overload `write` with the different return type `DataFrameWriterV2[T]`?
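   (Worth noting: overloading `write` purely on return type would not work, since Scala resolves overloads by parameter list. A minimal sketch, with hypothetical stub classes standing in for the real writers:)

   ```scala
   // Hypothetical stubs for illustration only.
   class DataFrameWriter[T](ds: Dataset[T])
   class DataFrameWriterV2[T](ds: Dataset[T])

   class Dataset[T] {
     def write: DataFrameWriter[T] = new DataFrameWriter[T](this)

     // Uncommenting this second definition is rejected by scalac with
     // "method write is defined twice": overloads must differ in their
     // parameter lists; a different return type alone is not enough.
     // def write: DataFrameWriterV2[T] = new DataFrameWriterV2[T](this)
   }
   ```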

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
