Github user rdblue commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20386#discussion_r164909970
  
    --- Diff: sql/core/src/main/java/org/apache/spark/sql/sources/v2/writer/DataSourceWriter.java ---
    @@ -63,32 +68,42 @@
       DataWriterFactory<Row> createWriterFactory();
     
       /**
    -   * Commits this writing job with a list of commit messages. The commit messages are collected from
    -   * successful data writers and are produced by {@link DataWriter#commit()}.
    +   * Handles a commit message which is collected from a successful data writer.
    +   *
    +   * Note that, implementations might need to cache all commit messages before calling
    +   * {@link #commit()} or {@link #abort()}.
        *
        * If this method fails (by throwing an exception), this writing job is considered to to have been
    -   * failed, and {@link #abort(WriterCommitMessage[])} would be called. The state of the destination
    -   * is undefined and @{@link #abort(WriterCommitMessage[])} may not be able to deal with it.
    +   * failed, and {@link #abort()} would be called. The state of the destination
    +   * is undefined and @{@link #abort()} may not be able to deal with it.
    +   */
    +  void add(WriterCommitMessage message);
    --- End diff --
    
    This is the only method shared between the stream and batch writers. Why does the streaming interface extend this one?
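
    For context, a rough sketch of how an implementation might use the add()/commit()/abort() shape proposed in the hunk above, caching the per-task commit messages as the new Javadoc suggests. The ExampleBufferingWriter class and everything inside its method bodies are illustrative only, not part of the PR:

        import java.util.ArrayList;
        import java.util.List;

        import org.apache.spark.sql.Row;
        import org.apache.spark.sql.sources.v2.writer.DataSourceWriter;
        import org.apache.spark.sql.sources.v2.writer.DataWriterFactory;
        import org.apache.spark.sql.sources.v2.writer.WriterCommitMessage;

        // Hypothetical writer: buffers commit messages from successful data
        // writers until the job-level commit() or abort() decision is made.
        public class ExampleBufferingWriter implements DataSourceWriter {

          private final List<WriterCommitMessage> messages = new ArrayList<>();

          @Override
          public DataWriterFactory<Row> createWriterFactory() {
            // A real implementation would return a factory for its DataWriter tasks.
            throw new UnsupportedOperationException("not shown in this sketch");
          }

          @Override
          public void add(WriterCommitMessage message) {
            // Called once per successful data writer; cache the message.
            messages.add(message);
          }

          @Override
          public void commit() {
            // Publish the buffered task outputs atomically, e.g. move staged
            // files into the final destination.
          }

          @Override
          public void abort() {
            // Best-effort cleanup of whatever the buffered messages describe;
            // per the Javadoc, the destination state may be undefined here.
          }
        }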


---
