Github user srowen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17135#discussion_r103907312
  
    --- Diff: core/src/main/scala/org/apache/spark/rdd/ReliableCheckpointRDD.scala ---
    @@ -181,21 +181,26 @@ private[spark] object ReliableCheckpointRDD extends Logging {
           serializeStream.writeAll(iterator)
         } {
           serializeStream.close()
    +      fileOutputStream.close()
         }
     
    -    if (!fs.rename(tempOutputPath, finalOutputPath)) {
    -      if (!fs.exists(finalOutputPath)) {
    -        logInfo(s"Deleting tempOutputPath $tempOutputPath")
    -        fs.delete(tempOutputPath, false)
    -        throw new IOException("Checkpoint failed: failed to save output of task: " +
    -          s"${ctx.attemptNumber()} and final output path does not exist: $finalOutputPath")
    -      } else {
    -        // Some other copy of this task must've finished before us and renamed it
    -        logInfo(s"Final output path $finalOutputPath already exists; not overwriting it")
    -        if (!fs.delete(tempOutputPath, false)) {
    -          logWarning(s"Error deleting ${tempOutputPath}")
    +    try {
    --- End diff --
    
    Given that this doesn't cover the full span of usage for `fs`, would it be better to just call `fs.close()` at the end rather than closing it manually in the error case? Or should the try-finally be expanded?
    
    Actually, I'm not sure we're supposed to call `FileSystem.close()` at all, because these are shared instances, cached and reused across the whole application.
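
    To illustrate the point about the streams (not the `FileSystem` caching), here is a minimal sketch of why closing only a wrapping stream can leave the underlying one open. This is not the Spark code: `CountingStream` and `writeSketch` are hypothetical names, and the anonymous wrapper stands in for a serializer stream whose `close()` does not delegate to the stream beneath it.

    ```scala
    import java.io.{ByteArrayOutputStream, OutputStream}

    // Hypothetical stand-in for the checkpoint file stream;
    // it just records whether close() was ever called.
    class CountingStream extends ByteArrayOutputStream {
      var closed = false
      override def close(): Unit = { closed = true; super.close() }
    }

    def writeSketch(underlying: CountingStream): Unit = {
      // A wrapper whose close() deliberately does NOT close `underlying`,
      // analogous to a serializer stream that only flushes its own buffer.
      val wrapper = new OutputStream {
        override def write(b: Int): Unit = underlying.write(b)
      }
      try {
        wrapper.write(42)
      } finally {
        wrapper.close()
        underlying.close() // without this extra close, `underlying` stays open
      }
    }
    ```

    Under that assumption, the diff's extra `fileOutputStream.close()` in the finally block is the same move as the second `close()` here; it is independent of whether `fs` itself should ever be closed.
    
    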


