Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/20386#discussion_r164932125
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/sources/v2/streaming/writer/StreamWriter.java
---
@@ -32,40 +32,44 @@
@InterfaceStability.Evolving
public interface StreamWriter extends DataSourceWriter {
/**
- * Commits this writing job for the specified epoch with a list of commit messages. The commit
- * messages are collected from successful data writers and are produced by
- * {@link DataWriter#commit()}.
+ * Commits this writing job for the specified epoch.
*
- * If this method fails (by throwing an exception), this writing job is considered to have been
- * failed, and the execution engine will attempt to call {@link #abort(WriterCommitMessage[])}.
+ * When this method is called, the number of commit messages added by
+ * {@link #add(WriterCommitMessage)} equals to the number of input data partitions.
--- End diff --
how about `the number of data(RDD) partitions to write`?
> why it isn't obvious ...
Maybe we can just follow `FileCommitProtocol`, i.e. `commit` and `abort`
still take an array of messages.
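To make the suggestion concrete, a minimal sketch of that alternative shape is below. These are stand-in types, not the actual Spark interfaces: `StreamWriter` here takes the collected messages explicitly in `commit`/`abort` (as `FileCommitProtocol` does), rather than having the engine feed them in one at a time via an `add` method. `LoggingStreamWriter` and the message classes are hypothetical names for illustration only.

```java
import java.util.ArrayList;
import java.util.List;

public class StreamWriterSketch {

    // Stand-in for Spark's WriterCommitMessage marker interface.
    interface WriterCommitMessage {}

    // Alternative shape discussed above: the engine collects the messages
    // produced by successful data writers and passes them in explicitly,
    // so the sink never has to track how many add() calls it has seen.
    interface StreamWriter {
        void commit(long epochId, WriterCommitMessage[] messages);
        void abort(long epochId, WriterCommitMessage[] messages);
    }

    // Minimal implementation that just records what it was asked to do.
    static class LoggingStreamWriter implements StreamWriter {
        final List<String> log = new ArrayList<>();

        @Override
        public void commit(long epochId, WriterCommitMessage[] messages) {
            log.add("commit epoch " + epochId + " with " + messages.length + " messages");
        }

        @Override
        public void abort(long epochId, WriterCommitMessage[] messages) {
            log.add("abort epoch " + epochId + " with " + messages.length + " messages");
        }
    }

    public static void main(String[] args) {
        LoggingStreamWriter writer = new LoggingStreamWriter();
        // One message per input data (RDD) partition, gathered by the engine.
        WriterCommitMessage[] msgs = { new WriterCommitMessage() {}, new WriterCommitMessage() {} };
        writer.commit(42L, msgs);
        System.out.println(writer.log.get(0));
    }
}
```

The upside of this signature is that the message count is visible at the call site, so the invariant "number of messages equals number of partitions" is the engine's responsibility rather than something each sink must document and verify.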