[
https://issues.apache.org/jira/browse/HADOOP-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12844444#action_12844444
]
Vinod K V commented on HADOOP-6631:
-----------------------------------
We can simply continue deleting the other files/dirs, as Ravi suggested.
One of the most common reasons fullyDelete() fails is non-writable permissions
on directories, as seen when issues like MAPREDUCE-896 occur. So we can go one
step further: try to set writable permissions on the failing directories and
then attempt to delete them too. Complete failure should then occur only when
the files/dirs themselves are non-deletable due to ownership issues.
Thoughts? A rough sketch of the combined behavior is below.
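
A minimal sketch of what that could look like, using only java.io.File. This
is illustrative only (names and structure are assumptions, not the actual
HADOOP-6631 patch): keep deleting siblings after a failure, and make a
non-writable directory writable before trying to remove its contents.

    import java.io.File;

    public class FullyDeleteSketch {

        /** Returns true only if dir and everything under it was deleted. */
        public static boolean fullyDelete(File dir) {
            // Non-writable directories are the common failure mode noted
            // above: grant ourselves write permission so entries beneath
            // the directory can be removed.
            if (dir.isDirectory() && !dir.canWrite()) {
                dir.setWritable(true);
            }
            boolean deletedAll = true;
            File[] contents = dir.listFiles(); // null for a plain file
            if (contents != null) {
                for (File f : contents) {
                    if (f.isDirectory()) {
                        // Recurse, but keep going even if this subtree
                        // could not be fully removed.
                        deletedAll = fullyDelete(f) && deletedAll;
                    } else {
                        deletedAll = f.delete() && deletedAll;
                    }
                }
            }
            // Deleting dir itself fails if anything beneath it survived.
            return dir.delete() && deletedAll;
        }
    }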
> FileUtil.fullyDelete() should continue to delete other files despite failure
> at any level.
> ------------------------------------------------------------------------------------------
>
> Key: HADOOP-6631
> URL: https://issues.apache.org/jira/browse/HADOOP-6631
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs, util
> Reporter: Vinod K V
> Fix For: 0.22.0
>
>
> Ravi commented about this on HADOOP-6536. Paraphrasing...
> Currently FileUtil.fullyDelete(myDir) stops deleting the remaining
> files/directories as soon as it is unable to delete any file/dir (say,
> because of missing permissions on that file/dir) anywhere under myDir. This
> is because we return from the method with "if(!fullyDelete()) {return
> false;}" whenever the recursive call fails at any level of recursion.
> Shouldn't it continue deleting the other files/dirs in the for loop instead
> of returning false there?
> I guess fullyDelete() should delete as many files as possible (similar to
> 'rm -rf').
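
For reference, the early-return pattern being described looks roughly like
this (a paraphrase for illustration, not the exact Hadoop source): the first
failed recursive delete aborts the loop, so siblings that come later in the
directory listing are never attempted.

    import java.io.File;

    public class EarlyReturnSketch {
        public static boolean fullyDelete(File dir) {
            File[] contents = dir.listFiles(); // null for a plain file
            if (contents != null) {
                for (File f : contents) {
                    if (!fullyDelete(f)) {
                        return false; // later siblings are never attempted
                    }
                }
            }
            return dir.delete();
        }
    }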