[
https://issues.apache.org/jira/browse/HADOOP-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12851541#action_12851541
]
Ravi Gummadi commented on HADOOP-6631:
--------------------------------------
No chmod should be done explicitly. Let the caller of fullyDelete() do that if
needed. If permissions are not there for some file, that file will not be
deleted.
This JIRA fixes the issue of not continuing to delete other files/directories
when some file could not be deleted anywhere in the tree.
> FileUtil.fullyDelete() should continue to delete other files despite failure
> at any level.
> ------------------------------------------------------------------------------------------
>
> Key: HADOOP-6631
> URL: https://issues.apache.org/jira/browse/HADOOP-6631
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs, util
> Reporter: Vinod K V
> Fix For: 0.22.0
>
>
> Ravi commented about this on HADOOP-6536. Paraphrasing...
> Currently FileUtil.fullyDelete(myDir) stops deleting other
> files/directories as soon as it is unable to delete a file/dir (say, because
> of not having permissions to delete that file/dir) anywhere under myDir. This
> is because we return from the method if the recursive call "if(!fullyDelete())
> {return false;}" fails at any level of recursion.
> Shouldn't it instead continue deleting the remaining files/dirs in the for
> loop rather than returning false there?
> I guess fullyDelete() should delete as many files as possible (similar to
> 'rm -rf').
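The fix described above can be sketched roughly as follows. This is a minimal illustration, not the actual Hadoop FileUtil code: on a failed delete it records the failure and keeps iterating over the remaining entries, so the overall result reflects whether everything was removed.

```java
import java.io.File;

public class FullyDeleteSketch {

    // Hypothetical sketch of the behavior this JIRA asks for: keep
    // deleting sibling files/dirs even when one entry cannot be
    // deleted, instead of returning false immediately.
    static boolean fullyDelete(File dir) {
        boolean deletionSucceeded = true;
        File[] contents = dir.listFiles();
        if (contents != null) {
            for (File f : contents) {
                if (f.isFile()) {
                    if (!f.delete()) {
                        // Record the failure but continue with the
                        // rest of the entries (rm -rf semantics).
                        deletionSucceeded = false;
                    }
                } else {
                    if (!fullyDelete(f)) {
                        deletionSucceeded = false;
                    }
                }
            }
        }
        // The directory itself can only go away if it is now empty.
        return dir.delete() && deletionSucceeded;
    }

    public static void main(String[] args) throws Exception {
        // Build a small temp tree and fully delete it.
        File root = java.nio.file.Files.createTempDirectory("fd").toFile();
        File sub = new File(root, "sub");
        sub.mkdir();
        new File(sub, "a.txt").createNewFile();
        new File(root, "b.txt").createNewFile();
        System.out.println(fullyDelete(root));
        System.out.println(root.exists());
    }
}
```

With the old "return false on first failure" logic, an undeletable file deep in the tree would leave all of its yet-unvisited siblings in place; with the loop above, only the undeletable entries (and their ancestor directories) survive.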
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.