Github user rdblue commented on a diff in the pull request:
https://github.com/apache/spark/pull/20386#discussion_r165119427
--- Diff: sql/core/src/main/java/org/apache/spark/sql/sources/v2/streaming/writer/StreamWriter.java ---
@@ -32,40 +32,44 @@
 @InterfaceStability.Evolving
 public interface StreamWriter extends DataSourceWriter {
   /**
-   * Commits this writing job for the specified epoch with a list of commit messages. The commit
-   * messages are collected from successful data writers and are produced by
-   * {@link DataWriter#commit()}.
+   * Commits this writing job for the specified epoch.
    *
-   * If this method fails (by throwing an exception), this writing job is considered to have been
-   * failed, and the execution engine will attempt to call {@link #abort(WriterCommitMessage[])}.
+   * When this method is called, the number of commit messages added by
+   * {@link #add(WriterCommitMessage)} equals to the number of input data partitions.
--- End diff ---
Passing the messages to commit and abort seems simpler and better to me, but that's for the batch side. And we shouldn't move forward with this unless there's a use case.

As for the docs here, what is an implementer intended to understand as a result of this? "The number of data partitions to write" is also misleading: weren't these already written and committed by tasks?
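
For context, the two API shapes being compared might be sketched roughly like this. This is illustrative Java only; the names CommitMessage, AddBasedStreamWriter, and MessagePassingStreamWriter are stand-ins, not the actual Spark types.

    // A hedged, illustrative sketch of the two API shapes under discussion;
    // CommitMessage stands in for Spark's WriterCommitMessage.
    interface CommitMessage {}

    // Shape proposed in this PR: the writer accumulates per-partition
    // messages via add(), then commit is called with only the epoch, so
    // the writer must track the collected messages itself.
    interface AddBasedStreamWriter {
      void add(CommitMessage message);
      void commit(long epoch);
      void abort(long epoch);
    }

    // Batch-style shape preferred above: the engine collects the messages
    // from successful data writers and passes them straight to commit and
    // abort, so the writer keeps no mutable state between calls.
    interface MessagePassingStreamWriter {
      void commit(long epoch, CommitMessage[] messages);
      void abort(long epoch, CommitMessage[] messages);
    }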
---