GitHub user fangshil commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20931#discussion_r179346911
  
    --- Diff: core/src/main/scala/org/apache/spark/internal/io/HadoopMapReduceCommitProtocol.scala ---
    @@ -186,7 +186,9 @@ class HadoopMapReduceCommitProtocol(
             logDebug(s"Clean up default partition directories for overwriting: 
$partitionPaths")
             for (part <- partitionPaths) {
               val finalPartPath = new Path(path, part)
    -          fs.delete(finalPartPath, true)
    +          if (!fs.delete(finalPartPath, true) && !fs.exists(finalPartPath.getParent)) {
    +            fs.mkdirs(finalPartPath.getParent)
    --- End diff --
    
    @cloud-fan yes, in the official HDFS documentation (https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/filesystem/filesystem.html), the rename operation has the precondition that "dest must be root, or have a parent that exists".

