Github user rdblue commented on a diff in the pull request:
    --- Diff: 
    @@ -78,10 +88,11 @@ default void onDataWriterCommit(WriterCommitMessage message) {}
        * failed, and {@link #abort(WriterCommitMessage[])} would be called. The state of the destination
        * is undefined and @{@link #abort(WriterCommitMessage[])} may not be able to deal with it.
    -   * Note that, one partition may have multiple committed data writers because of speculative tasks.
    -   * Spark will pick the first successful one and get its commit message. Implementations should be
    -   * aware of this and handle it correctly, e.g., have a coordinator to make sure only one data
    -   * writer can commit, or have a way to clean up the data of already-committed writers.
    +   * Note that speculative execution may cause multiple tasks to run for a partition. By default,
    +   * Spark uses the commit coordinator to allow only one attempt to commit. Implementations can
    +   * disable this behavior by overriding {@link #useCommitCoordinator()}. If disabled, multiple
    +   * attempts may have committed successfully and all successful commit messages are passed to this
    --- End diff --
    I think we need to address this guarantee. Spark will just drop commit messages? That seems like a huge problem to me.
    cc @steveloughran 
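
    The coordination the javadoc describes can be illustrated with a minimal, self-contained model (not Spark's actual `OutputCommitCoordinator`, just a hypothetical sketch of its contract): for each partition, only the first task attempt to ask is authorized to commit, and later speculative attempts are denied and must abort rather than produce a second commit message.

    ```java
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical driver-side commit coordinator model: per partition,
    // the first attempt to request authorization wins; any other attempt
    // for the same partition is denied and should abort its output.
    class CommitCoordinatorModel {
        private final Map<Integer, Integer> authorized = new ConcurrentHashMap<>();

        // Returns true iff this attempt may commit the given partition.
        boolean canCommit(int partitionId, int attemptNumber) {
            Integer winner = authorized.putIfAbsent(partitionId, attemptNumber);
            return winner == null || winner == attemptNumber;
        }

        public static void main(String[] args) {
            CommitCoordinatorModel c = new CommitCoordinatorModel();
            System.out.println(c.canCommit(0, 0)); // first attempt for partition 0 wins
            System.out.println(c.canCommit(0, 1)); // speculative attempt for partition 0 is denied
            System.out.println(c.canCommit(1, 1)); // a different partition is unaffected
        }
    }
    ```

    Under this model there is never more than one committed writer per partition, so no successful commit message is dropped; the concern raised above applies to the opt-out path where `useCommitCoordinator()` returns false and multiple attempts can commit.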

