[
https://issues.apache.org/jira/browse/HADOOP-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12844804#action_12844804
]
Ravi Gummadi commented on HADOOP-6631:
--------------------------------------
>So, we can go one step further and try to set writable permissions on failing
>directories and then try deleting them too. Complete failure can occur only
>when the files/dirs are themselves non-deletable due to ownership issues.
Hmm... Wouldn't that be aggressive and possibly harmful at times? Leaving
"setting permissions on failing directories" to the caller of fullyDelete(),
so that the caller is aware of (and has the flexibility to decide) what to do
with any subdir under myDir (the directory passed to fullyDelete()), might be
the safer option.
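For illustration only, a rough sketch of what that caller-side option could
look like. It assumes Java 6's File.setWritable() and the existing
FileUtil.fullyDelete(File); the helper names deleteWithChmod and
makeWritableRecursively are hypothetical, not anything in the codebase:

    import java.io.File;
    import org.apache.hadoop.fs.FileUtil;

    public class CleanupExample {
      // Hypothetical caller-side helper: the caller decides whether it is
      // safe to force write permissions under its own directory before
      // retrying the delete, instead of fullyDelete() doing so implicitly.
      static boolean deleteWithChmod(File myDir) {
        if (FileUtil.fullyDelete(myDir)) {
          return true;                      // plain delete succeeded
        }
        makeWritableRecursively(myDir);     // caller-chosen policy
        return FileUtil.fullyDelete(myDir); // retry after chmod
      }

      static void makeWritableRecursively(File f) {
        f.setWritable(true);                // Java 6+ API
        File[] children = f.listFiles();
        if (children != null) {
          for (File c : children) {
            makeWritableRecursively(c);
          }
        }
      }
    }

The point of the sketch is that the chmod policy stays visible at the call
site rather than being buried inside the library method.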
> FileUtil.fullyDelete() should continue to delete other files despite failure
> at any level.
> ------------------------------------------------------------------------------------------
>
> Key: HADOOP-6631
> URL: https://issues.apache.org/jira/browse/HADOOP-6631
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs, util
> Reporter: Vinod K V
> Fix For: 0.22.0
>
>
> Ravi commented about this on HADOOP-6536. Paraphrasing...
> Currently FileUtil.fullyDelete(myDir) stops deleting other files/directories
> as soon as it is unable to delete a file/dir anywhere under myDir (say
> because it lacks permissions to delete that file/dir). This is because the
> method returns as soon as the recursive call "if(!fullyDelete())
> {return false;}" fails at any level of recursion.
> Shouldn't it continue deleting the other files/dirs in the for loop instead
> of returning false there?
> I guess fullyDelete() should delete as many files as possible (similar to
> 'rm -rf').
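To make the requested 'rm -rf'-like behaviour concrete, a minimal sketch of
such a loop follows. This is not the committed patch, just an illustration of
recording the failure in a flag and continuing, instead of returning false on
the first failed child:

    // Sketch only -- not the actual fix. Delete as much as possible and
    // report overall success/failure at the end, rather than returning on
    // the first failure.
    public static boolean fullyDelete(File dir) {
      boolean deletionSucceeded = true;
      File[] contents = dir.listFiles();
      if (contents != null) {
        for (File f : contents) {
          if (f.isFile()) {
            if (!f.delete()) {
              deletionSucceeded = false;    // remember failure, keep going
            }
          } else {
            if (!fullyDelete(f)) {
              deletionSucceeded = false;    // recurse; keep going on failure
            }
          }
        }
      }
      // Try to remove dir itself; this fails if anything underneath survived.
      return dir.delete() && deletionSucceeded;
    }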
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.