Github user steveloughran commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19623#discussion_r148507067
  
    --- Diff: 
sql/core/src/main/java/org/apache/spark/sql/sources/v2/writer/DataSourceV2Writer.java
 ---
    @@ -50,28 +53,34 @@
     
       /**
        * Creates a writer factory which will be serialized and sent to 
executors.
    +   *
    +   * If this method fails (by throwing an exception), the action fails and 
no Spark job is
    +   * submitted.
        */
       DataWriterFactory<Row> createWriterFactory();
     
       /**
        * Commits this writing job with a list of commit messages. The commit 
messages are collected from
    -   * successful data writers and are produced by {@link 
DataWriter#commit()}. If this method
    -   * fails(throw exception), this writing job is considered to be failed, 
and
    -   * {@link #abort(WriterCommitMessage[])} will be called. The written 
data should only be visible
    -   * to data source readers if this method succeeds.
    +   * successful data writers and are produced by {@link 
DataWriter#commit()}.
    +   *
    +   * If this method fails (by throwing an exception), this writing job is 
considered to have
    +   * failed, and {@link #abort(WriterCommitMessage[])} will be called. The 
state of the
    +   * destination is undefined and {@link #abort(WriterCommitMessage[])} may 
not be able to
    +   * deal with it.
        *
        * Note that, one partition may have multiple committed data writers 
because of speculative tasks.
        * Spark will pick the first successful one and get its commit message. 
Implementations should be
    -   * aware of this and handle it correctly, e.g., have a mechanism to make 
sure only one data writer
    -   * can commit successfully, or have a way to clean up the data of 
already-committed writers.
    +   * aware of this and handle it correctly, e.g., have a coordinator to 
make sure only one data
    +   * writer can commit, or have a way to clean up the data of 
already-committed writers.
        */
       void commit(WriterCommitMessage[] messages);
     
       /**
        * Aborts this writing job because some data writers are failed to write 
the records and aborted,
    --- End diff --
    
    A single task failure shouldn't abort the entire job. 
    Job abortion is more likely to be triggered by:
    * The failure count of a task exceeding the configured limit. 
    * A non-recoverable failure of the commit() operation of one or more tasks. I 
don't see Spark invoking `OutputCommitter.isRecoverySupported()`, as it's 
focusing more on "faster execution and recovery through retry".
    * Pre-emption of the job (if the engine supports preemption).
    
    Looking in the code, it's called after `sparkContext.runJob()` throws an 
exception for any reason, and on failure of `FileFormatWriter.write()`, again 
for any reason. 
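
    The driver-side ordering described above can be sketched with hypothetical 
stand-ins for the writer interfaces (the `Msg`, `LoggingWriter` and `runJob` 
names below are illustrative, not part of the Spark API): individual task 
retries are invisible to the writer, and `abort()` fires only when running the 
job or the final `commit()` throws, receiving whatever commit messages had been 
gathered so far.

```java
import java.util.ArrayList;
import java.util.List;

public class WriterFlowSketch {

    // Hypothetical stand-ins for the v2 writer API under review; the real
    // interfaces live in org.apache.spark.sql.sources.v2.writer.
    interface WriterCommitMessage {}

    static class Msg implements WriterCommitMessage {
        final int partition;
        Msg(int partition) { this.partition = partition; }
    }

    interface Writer {
        void commit(WriterCommitMessage[] messages);
        void abort(WriterCommitMessage[] messages);
    }

    // Records which of commit/abort was invoked, and with how many messages.
    static class LoggingWriter implements Writer {
        final List<String> log = new ArrayList<>();
        public void commit(WriterCommitMessage[] messages) { log.add("commit:" + messages.length); }
        public void abort(WriterCommitMessage[] messages)  { log.add("abort:" + messages.length); }
    }

    // Driver-side flow as described in the comment: a task only surfaces here
    // once it has exhausted its retries; abort(...) is then called with the
    // messages of the tasks that had already committed, whose side effects it
    // may or may not be able to undo.
    static void runJob(Writer writer, boolean[] taskSucceeds) {
        List<WriterCommitMessage> gathered = new ArrayList<>();
        try {
            for (int p = 0; p < taskSucceeds.length; p++) {
                if (!taskSucceeds[p]) {
                    throw new RuntimeException("task " + p + " exhausted its retries");
                }
                gathered.add(new Msg(p));
            }
            writer.commit(gathered.toArray(new WriterCommitMessage[0]));
        } catch (RuntimeException e) {
            writer.abort(gathered.toArray(new WriterCommitMessage[0]));
        }
    }

    public static void main(String[] args) {
        LoggingWriter ok = new LoggingWriter();
        runJob(ok, new boolean[] {true, true});
        System.out.println(ok.log);       // [commit:2]

        LoggingWriter failed = new LoggingWriter();
        runJob(failed, new boolean[] {true, false});
        System.out.println(failed.log);   // [abort:1] -- only partition 0 had committed
    }
}
```

    The second run is the case the Javadoc hedges about: partition 0's output 
already exists when abort is called, which is why the destination's state is 
undefined.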


---
