aokolnychyi commented on a change in pull request #30706:
URL: https://github.com/apache/spark/pull/30706#discussion_r540109636
##########
File path:
sql/catalyst/src/main/java/org/apache/spark/sql/connector/write/WriteBuilder.java
##########
@@ -35,22 +35,41 @@
public interface WriteBuilder {
/**
- * Returns a {@link BatchWrite} to write data to batch source. By default this method throws
- * exception, data sources must overwrite this method to provide an implementation, if the
- * {@link Table} that creates this write returns {@link TableCapability#BATCH_WRITE} support in
- * its {@link Table#capabilities()}.
+ * Returns a logical {@link Write} shared between batch and streaming.
+ *
+ * @since 3.2.0
*/
+ default Write build() {
+ return new Write() {
+ @Override
+ public BatchWrite toBatch() {
+ return buildForBatch();
+ }
+
+ @Override
+ public StreamingWrite toStreaming() {
+ return buildForStreaming();
+ }
+ };
+ }
+
+ /**
+ * Returns a {@link BatchWrite} to write data to batch source.
+ *
+ * @deprecated use {@link #build()} instead.
+ */
+ @Deprecated
default BatchWrite buildForBatch() {
throw new UnsupportedOperationException(getClass().getName() +
Review comment:
@sunchao, I thought about defaulting these implementations as
`build().toBatch()`, but that would introduce a circular dependency between
these methods. There may be implementations that only implement
`buildForBatch`, for example. Right now, calling `buildForStreaming` on them
throws an exception. With such a default, that call would instead end in a
`StackOverflowError`.
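To make the cycle concrete, here is a minimal sketch with simplified, hypothetical stand-ins for `Write`, `BatchWrite`, and `WriteBuilder` (not the real Spark interfaces): if `build()` defaults to an anonymous `Write` whose `toBatch()` calls `buildForBatch()`, and `buildForBatch()` were in turn defaulted to `build().toBatch()`, an implementation that overrides neither method recurses until the stack overflows.

```java
public class CircularDefaultDemo {

    // Simplified stand-in for the Write interface (single method, so a lambda works).
    interface Write {
        BatchWrite toBatch();
    }

    // Simplified stand-in for BatchWrite; the body is irrelevant to the cycle.
    interface BatchWrite {}

    interface WriteBuilder {
        // Mirrors the PR's default: build() delegates toBatch() to buildForBatch().
        default Write build() {
            return () -> buildForBatch();
        }

        // Hypothetical default under discussion: delegating back to build()
        // closes the cycle: build() -> toBatch() -> buildForBatch() -> build() -> ...
        default BatchWrite buildForBatch() {
            return build().toBatch();
        }
    }

    // Returns true if calling build().toBatch() on a builder that overrides
    // neither method ends in a StackOverflowError.
    static boolean overflows() {
        WriteBuilder builder = new WriteBuilder() {};
        try {
            builder.build().toBatch();
            return false;
        } catch (StackOverflowError e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("overflows = " + overflows());
    }
}
```

This is why the PR keeps `buildForBatch()` throwing `UnsupportedOperationException` by default instead of delegating back to `build()`: sources that implement only one side of the API fail with a clear error rather than infinite recursion.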
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]