[ http://issues.apache.org/jira/browse/HADOOP-432?page=comments#action_12455021 ]

Konstantin Shvachko commented on HADOOP-432:
--------------------------------------------
You have two config parameters:

dfs.trash.mintime        trash item expiration time
dfs.trash.maxsize.pct    maximal size of trash

But the second parameter is not enforced; that is, you do not guarantee that the trash will not grow above maxsize. If we do not want to support maxsize, then let's remove the config parameter. If we do support it, then we need to determine how we enforce it:
- Let's keep the actual trashSize up-to-date at all times.
- When the trash dir is full, do we fail moveToTrash, do we permanently remove these files, or do we remove older files and move the requested ones to trash?

> support undelete, snapshots, or other mechanism to recover lost files
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-432
>                 URL: http://issues.apache.org/jira/browse/HADOOP-432
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Yoram Arnon
>         Assigned To: Wendy Chien
>         Attachments: undelete.patch, undelete11.patch, undelete12.patch
>
> Currently, once you delete a file it's gone forever.
> Most file systems allow some form of recovery of deleted files.
> A simple solution would be an 'undelete' command.
> A more comprehensive solution would include snapshots, manual and automatic,
> with scheduling options.

--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
http://www.atlassian.com/software/jira
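For discussion, the third option Konstantin lists (evict the oldest trash items to make room, while keeping trashSize up-to-date on every change) could be sketched roughly as below. This is a minimal in-memory illustration, not Hadoop's actual API: the class and method names (TrashPolicySketch, TrashItem, itemCount) are invented for the example, and a fixed byte limit stands in for the dfs.trash.maxsize.pct percentage.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of an "evict oldest first" trash-size policy.
// Not Hadoop code: names and the byte-based limit are assumptions.
public class TrashPolicySketch {

    static final class TrashItem {
        final String path;
        final long sizeBytes;
        TrashItem(String path, long sizeBytes) {
            this.path = path;
            this.sizeBytes = sizeBytes;
        }
    }

    private final long maxSizeBytes;   // stand-in for dfs.trash.maxsize.pct, resolved to bytes
    private long trashSize = 0;        // kept up-to-date on every move/evict
    private final Deque<TrashItem> items = new ArrayDeque<>(); // oldest items at the front

    TrashPolicySketch(long maxSizeBytes) {
        this.maxSizeBytes = maxSizeBytes;
    }

    /**
     * Move a file to trash; if it would exceed the limit, permanently
     * remove the oldest items first. Fails only if the file could not
     * fit even in an empty trash.
     */
    boolean moveToTrash(TrashItem item) {
        if (item.sizeBytes > maxSizeBytes) {
            return false;
        }
        while (trashSize + item.sizeBytes > maxSizeBytes) {
            TrashItem oldest = items.removeFirst();
            trashSize -= oldest.sizeBytes;   // oldest item is gone for good
        }
        items.addLast(item);
        trashSize += item.sizeBytes;
        return true;
    }

    long trashSize() { return trashSize; }
    int itemCount() { return items.size(); }

    public static void main(String[] args) {
        TrashPolicySketch trash = new TrashPolicySketch(100);
        trash.moveToTrash(new TrashItem("/a", 60));
        trash.moveToTrash(new TrashItem("/b", 30));
        // /c does not fit (60 + 30 + 50 > 100), so /a is evicted first.
        trash.moveToTrash(new TrashItem("/c", 50));
        System.out.println(trash.itemCount() + " items, " + trash.trashSize() + " bytes");
        // prints "2 items, 80 bytes"
    }
}
```

The other two options from the list trade off differently: failing moveToTrash pushes the problem back to the caller, while deleting the requested files outright defeats the purpose of the trash.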