[
https://issues.apache.org/jira/browse/HADOOP-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12579970#action_12579970
]
Tsz Wo (Nicholas), SZE commented on HADOOP-3025:
------------------------------------------------
- I think it is better to change FilterFileSystem.delete(Path f) (or even
FileSystem.delete(Path f)) so that it calls FilterFileSystem.delete(f, true),
instead of changing ChecksumFileSystem.delete(Path f); see the first sketch
after this list.
- In ChecksumFileSystem.delete(Path f, boolean recursive), if f is a directory,
it calls fs.delete(f, recursive). I think the checksum files won't be deleted in
that case; the second sketch after this list shows one way to handle it.
- We need a test that deletes a directory tree to exercise the recursive
parameter; a rough test sketch is included after this list.
- In RawInMemoryFileSystem.delete(Path f, boolean recursive), the recursive
parameter is ignored.
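
A minimal sketch of the first point, assuming the 0.17 API and assuming the old
one-argument delete is meant to behave like a recursive delete, so delegating to
delete(f, true) keeps the existing semantics:

{code}
// Hypothetical body for the deprecated one-argument delete in FilterFileSystem
// (or in FileSystem itself); only a sketch of the suggestion, not a patch.
/** @deprecated Use {@link #delete(Path, boolean)} instead. */
@Deprecated
public boolean delete(Path f) throws IOException {
  // Every subclass then only has to get the two-argument delete right;
  // the one-argument form follows automatically.
  return delete(f, true);
}
{code}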
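
For the directory concern in the second point, a rough sketch of what
ChecksumFileSystem.delete(Path f, boolean recursive) could look like, assuming
the checksum files live next to their data files in the raw file system
(getChecksumFile() is the existing helper, fs is the wrapped raw FileSystem):

{code}
// Sketch only; error handling and the non-empty/non-recursive case are elided.
public boolean delete(Path f, boolean recursive) throws IOException {
  if (fs.isDirectory(f)) {
    // The .crc files sit inside the same directory tree on the raw file
    // system, so a recursive delete of the directory removes them as well.
    return fs.delete(f, recursive);
  } else {
    // For a plain file, remove its checksum sibling first, then the file.
    Path checkFile = getChecksumFile(f);
    if (fs.exists(checkFile)) {
      fs.delete(checkFile, true);
    }
    return fs.delete(f, true);
  }
}
{code}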
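
And a hypothetical test sketch for the recursive case (class, method, and path
names are made up); it runs against the local file system, which is checksummed,
so the tree it deletes contains .crc files:

{code}
import java.io.IOException;

import junit.framework.TestCase;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TestRecursiveDelete extends TestCase {

  public void testDeleteTree() throws IOException {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path root = new Path(System.getProperty("test.build.data", "/tmp"), "deltree");
    Path sub = new Path(root, "sub");
    fs.mkdirs(sub);

    // Two small files so both levels of the tree have data and .crc files.
    for (Path p : new Path[] { new Path(root, "f1"), new Path(sub, "f2") }) {
      FSDataOutputStream out = fs.create(p);
      out.writeBytes("some data");
      out.close();
    }

    // A recursive delete must remove the whole tree, checksum files included.
    assertTrue(fs.delete(root, true));
    assertFalse(fs.exists(root));
  }
}
{code}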
> ChecksumFileSystem needs to support the new delete method
> ---------------------------------------------------------
>
> Key: HADOOP-3025
> URL: https://issues.apache.org/jira/browse/HADOOP-3025
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs
> Affects Versions: 0.17.0
> Reporter: Devaraj Das
> Assignee: Mahadev konar
> Priority: Blocker
> Fix For: 0.17.0
>
> Attachments: HADOOP_3025_1.patch, HADOOP_3025_2.patch,
> HADOOP_3025_3.patch
>
>
> The method FileSystem.delete(path) has been deprecated in favor of the new
> method delete(path, recursive). Temporary files get created in the MapReduce
> framework, and when the time for deletion comes, they are deleted via
> delete(path, recursive). This doesn't delete the associated checksum files.
> This has a big impact when the FileSystem is the InMemoryFileSystem, where
> space is at a premium and wasting space here might hurt the performance of
> MapReduce jobs overall. One solution to this problem is to implement the
> method delete(path, recursive) in ChecksumFileSystem, but is there a reason
> why it was left out as part of HADOOP-771?