[
https://issues.apache.org/jira/browse/HADOOP-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12864201#action_12864201
]
Ravi Gummadi commented on HADOOP-6631:
--------------------------------------
Will make fullyDelete() continue even when deletion of a single file fails
(in addition to the earlier handled case of failure to delete a
directory).
>> To optimize the non-writable directory case, we may want to check at
>> the very beginning whether the parent dir is writable.
I see a case where we want to continue deleting files under a directory's
subdirectories even when some files or subdirectories under that directory
cannot be deleted. So I am not doing this optimization; see the sketch below.
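
For illustration, a minimal sketch of the intended continue-on-failure
behavior (hypothetical code of my own, not the attached patches; the class
name DeleteSketch is made up):

    // Sketch only: hypothetical continue-on-failure fullyDelete(),
    // not the code from the attached patches.
    import java.io.File;

    public class DeleteSketch {
      // Recursively deletes 'dir' and its contents, continuing past
      // entries it cannot delete (similar to 'rm -rf'). Returns true
      // only if everything was deleted.
      public static boolean fullyDelete(File dir) {
        boolean deletionSucceeded = true;
        File[] contents = dir.listFiles(); // null if not a directory
        if (contents != null) {
          for (File f : contents) {
            if (f.isFile()) {
              if (!f.delete()) {
                deletionSucceeded = false; // record failure, keep going
              }
            } else if (!fullyDelete(f)) {  // recurse; do NOT return early
              deletionSucceeded = false;
            }
          }
        }
        // Finally try to remove 'dir' itself; fails if a child survived.
        return dir.delete() && deletionSucceeded;
      }
    }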
> FileUtil.fullyDelete() should continue to delete other files despite failure
> at any level.
> ------------------------------------------------------------------------------------------
>
> Key: HADOOP-6631
> URL: https://issues.apache.org/jira/browse/HADOOP-6631
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs, util
> Reporter: Vinod K V
> Assignee: Ravi Gummadi
> Fix For: 0.22.0
>
> Attachments: hadoop-6631-y20s-1.patch, hadoop-6631-y20s-2.patch,
> HADOOP-6631.patch, HADOOP-6631.patch
>
>
> Ravi commented about this on HADOOP-6536. Paraphrasing...
> Currently FileUtil.fullyDelete(myDir) stops deleting the remaining
> files/directories as soon as it fails to delete a file/dir (say, because
> it lacks permission to delete that file/dir) anywhere under myDir. This is
> because we return from the method when the recursive call
> "if (!fullyDelete()) { return false; }" fails at any level of recursion.
> Shouldn't it continue deleting the other files/dirs in the for loop
> instead of returning false there?
> fullyDelete() should delete as many files as possible (similar to
> 'rm -rf').