aokolnychyi commented on a change in pull request #29066:
URL: https://github.com/apache/spark/pull/29066#discussion_r537557913



##########
File path: sql/catalyst/src/main/java/org/apache/spark/sql/connector/write/WriteBuilder.java
##########
@@ -23,17 +23,34 @@
 import org.apache.spark.sql.connector.write.streaming.StreamingWrite;
 
 /**
- * An interface for building the {@link BatchWrite}. Implementations can mix in some interfaces to
+ * An interface for building the {@link Write}. Implementations can mix in some interfaces to
  * support different ways to write data to data sources.
  *
- * Unless modified by a mixin interface, the {@link BatchWrite} configured by this builder is to
+ * Unless modified by a mixin interface, the {@link Write} configured by this builder is to
  * append data without affecting existing data.
  *
  * @since 3.0.0
  */
 @Evolving
 public interface WriteBuilder {
 
+  /**
+   * Returns a logical {@link Write} shared between batch and streaming.
+   */
+  default Write build() {

Review comment:
       I am not sure I understood. Could you elaborate a bit more, @sunchao?
   
    Spark will now always call `build()` and work with the `Write` abstraction. I added the default implementation so that existing data sources that already implement the current API will continue to work as before. Spark is not supposed to call `buildForBatch` after this change.
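    To illustrate the compatibility point, here is a minimal, hypothetical sketch (simplified stand-ins, not the real Spark interfaces) of how a default `build()` can bridge to the legacy `buildForBatch` so existing sources keep working:

```java
// Hedged sketch: simplified stand-ins for the Spark connector interfaces,
// showing how a default build() can keep legacy sources working.

interface BatchWrite {}                      // stand-in for the batch write interface

interface Write {                            // stand-in for the new logical Write abstraction
  BatchWrite toBatch();
}

interface WriteBuilder {
  // New entry point: Spark now always calls build().
  default Write build() {
    // Default bridges existing sources by delegating to the legacy method.
    return () -> buildForBatch();
  }

  // Legacy entry point kept so existing implementations continue to compile.
  default BatchWrite buildForBatch() {
    throw new UnsupportedOperationException("buildForBatch not implemented");
  }
}

// A pre-existing source that only implements the old API...
class LegacyBuilder implements WriteBuilder {
  @Override
  public BatchWrite buildForBatch() {
    return new BatchWrite() {};
  }
}

public class Demo {
  public static void main(String[] args) {
    // ...still works when Spark goes through the new build() path.
    BatchWrite bw = new LegacyBuilder().build().toBatch();
    System.out.println(bw != null);  // true
  }
}
```

    The design choice: the default method means sources written against the old API need no code changes, while new sources override `build()` directly.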




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


