[
https://issues.apache.org/jira/browse/HADOOP-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12845425#action_12845425
]
Doug Cutting commented on HADOOP-6631:
--------------------------------------
Unix command line tools usually provide a good model for our tools. I think
this proposal roughly amounts to changing fullyDelete() from 'rm -r' to 'rm
-rf'. We might make the metaphor even more explicit by having an option named
"force". I've never heard of a Unix command line tool that changes protections
so that it can remove things as it goes. Rather, one uses 'chmod' first, no?
> FileUtil.fullyDelete() should continue to delete other files despite failure
> at any level.
> ------------------------------------------------------------------------------------------
>
> Key: HADOOP-6631
> URL: https://issues.apache.org/jira/browse/HADOOP-6631
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs, util
> Reporter: Vinod K V
> Fix For: 0.22.0
>
>
> Ravi commented about this on HADOOP-6536. Paraphrasing...
> Currently FileUtil.fullyDelete(myDir) stops deleting the remaining
> files/directories under myDir as soon as it fails to delete any one
> file/dir (say, because it lacks permission to delete that file/dir). This
> is because the method returns as soon as the recursive call
> "if(!fullyDelete()) {return false;}" fails at any level of recursion.
> Shouldn't it instead continue deleting the other files/dirs in the for
> loop, rather than returning false there?
> I guess fullyDelete() should delete as many files as possible (similar to
> 'rm -rf'); see the sketch below.
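> A minimal sketch of that behavior, assuming java.io.File and ignoring
> details a real patch would need (symlink handling, logging): a
> hypothetical fullyDelete() that records failures and keeps going instead
> of returning on the first one.
>
>   // Hypothetical 'rm -rf'-style sketch; not the actual Hadoop code.
>   // Assumes java.io.File. Returns false if anything could not be
>   // deleted, but still attempts to delete everything else.
>   public static boolean fullyDelete(File dir) {
>     boolean deletionSucceeded = true;
>     File[] contents = dir.listFiles();   // null if dir is not a directory
>     if (contents != null) {
>       for (File f : contents) {
>         if (f.isFile()) {
>           if (!f.delete()) {
>             deletionSucceeded = false;   // note the failure, keep going
>           }
>         } else if (!fullyDelete(f)) {    // recurse into subdirectories
>           deletionSucceeded = false;     // note the failure, keep going
>         }
>       }
>     }
>     // dir itself can only be removed once its contents are gone
>     return dir.delete() && deletionSucceeded;
>   }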