sunchao commented on a change in pull request #30706:
URL: https://github.com/apache/spark/pull/30706#discussion_r540413701
##########
File path: sql/catalyst/src/main/java/org/apache/spark/sql/connector/write/WriteBuilder.java
##########
@@ -35,22 +35,41 @@
 public interface WriteBuilder {
   /**
-   * Returns a {@link BatchWrite} to write data to batch source. By default this method throws
-   * exception, data sources must overwrite this method to provide an implementation, if the
-   * {@link Table} that creates this write returns {@link TableCapability#BATCH_WRITE} support in
-   * its {@link Table#capabilities()}.
+   * Returns a logical {@link Write} shared between batch and streaming.
+   *
+   * @since 3.2.0
    */
+  default Write build() {
+    return new Write() {
+      @Override
+      public BatchWrite toBatch() {
+        return buildForBatch();
+      }
+
+      @Override
+      public StreamingWrite toStreaming() {
+        return buildForStreaming();
+      }
+    };
+  }
+
+  /**
+   * Returns a {@link BatchWrite} to write data to batch source.
+   *
+   * @deprecated use {@link #build()} instead.
+   */
+  @Deprecated
   default BatchWrite buildForBatch() {
     throw new UnsupportedOperationException(getClass().getName() +
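
For context, the compatibility shim in the default `build()` above means a pre-3.2 source that only overrides the deprecated method keeps working unchanged. A minimal sketch (the `LegacyWriteBuilder` class and its `createBatchWrite` helper are hypothetical, for illustration only):
```java
import org.apache.spark.sql.connector.write.BatchWrite;
import org.apache.spark.sql.connector.write.WriteBuilder;

// Sketch: a legacy source that still overrides only the deprecated method.
abstract class LegacyWriteBuilder implements WriteBuilder {
  // Hypothetical factory for the source's actual BatchWrite.
  protected abstract BatchWrite createBatchWrite();

  @Override
  public BatchWrite buildForBatch() {
    return createBatchWrite();
  }
}

// With the default build() in the diff above, the new call path
//   builder.build().toBatch()
// lands in the anonymous Write's toBatch(), which delegates back to
// buildForBatch(), so this legacy source needs no changes.
```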
Review comment:
I think you'll also need to change `WriteBuilder.build` so that it does not override `toBatch` and `toStreaming`? Something like:
```java
default Write build() {
  return new Write() {
  };
}
```
With this, Spark users only need to override the `build` method in their DS implementations, and the old APIs `buildForBatch` and `buildForStreaming` will automatically pick up the logic from the other methods.
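
Concretely, a sketch of what this suggestion seems to imply (my reading, not the final API): the deprecated defaults in `WriteBuilder` would flip their delegation and call into `build()`:
```java
// Sketch, inside WriteBuilder: the deprecated defaults delegate to build(),
// so overriding build() alone is enough for a new-style source.
@Deprecated
default BatchWrite buildForBatch() {
  return build().toBatch();
}

@Deprecated
default StreamingWrite buildForStreaming() {
  return build().toStreaming();
}
```
A new-style source then overrides only `build()`, while any remaining caller of `buildForBatch()` or `buildForStreaming()` transparently routes through the new `Write`.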