[ http://issues.apache.org/jira/browse/HADOOP-432?page=comments#action_12437014 ]

p sutter commented on HADOOP-432:
---------------------------------
Yoram,

Good point about 100% full. The solution could be to have a minimum-free-space parameter (does it already exist?). This would leave space for temp files, allow better performance, etc. Maybe don't go below 10% free, so that filesystem performance stays good.

On the other hand:
- Memory usage, etc.: I'm not sure I agree. The system is either able to handle a full filesystem, or it isn't.
- Performance: I'm not sure I agree. You don't need to delete anything; you could just overwrite and rename the existing blocks, which is less filesystem overhead. Even if you did delete and recreate, the total number of deletions is the same.

And as for allowing undeletion when the filesystem is full, it seems better to have the space go to a useful purpose than to have it reserved for deleted files.

Anyway, it's just a suggestion! You're doing the work, so it's your choice. Thanks.

> support undelete, snapshots, or other mechanism to recover lost files
> ---------------------------------------------------------------------
>
>          Key: HADOOP-432
>          URL: http://issues.apache.org/jira/browse/HADOOP-432
>      Project: Hadoop
>   Issue Type: Improvement
>   Components: dfs
>     Reporter: Yoram Arnon
>  Assigned To: Wendy Chien
>
> Currently, once you delete a file it's gone forever.
> Most file systems allow some form of recovery of deleted files.
> A simple solution would be an 'undelete' command.
> A more comprehensive solution would include snapshots, manual and automatic,
> with scheduling options.

--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
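The minimum-free-space idea above might be sketched as a guard that refuses new block allocations once a volume would drop below a reserved free fraction. This is only an illustration of the suggestion, not actual Hadoop code; the class name `FreeSpaceGuard`, the method names, and the 10% threshold are all assumptions taken from the comment.

```java
import java.io.File;

// Hypothetical sketch of the suggested minimum-free-space parameter:
// refuse a new block once the volume would fall below a reserved fraction.
public class FreeSpaceGuard {
    // Pure check: would the volume still have at least minFreeFraction of
    // its capacity free after writing one more block of blockSize bytes?
    public static boolean hasRoom(long totalBytes, long usableBytes,
                                  long blockSize, double minFreeFraction) {
        long usableAfter = usableBytes - blockSize;
        return totalBytes > 0
            && (double) usableAfter / totalBytes >= minFreeFraction;
    }

    public static void main(String[] args) {
        File root = new File(".");
        boolean ok = hasRoom(root.getTotalSpace(), root.getUsableSpace(),
                             64L * 1024 * 1024, 0.10); // keep >= 10% free
        System.out.println("can allocate 64MB block: " + ok);
    }
}
```

Keeping the check as pure arithmetic (rather than reading the disk inside it) makes the policy easy to test and to apply per-volume.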
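The issue's "simple solution would be an 'undelete' command" can be sketched as a trash directory: a delete becomes a cheap rename into a trash path, and undelete renames the file back, so no data blocks are touched either way. This sketch uses local-filesystem `java.nio.file` calls as a stand-in for DFS namespace operations; the `TrashBin` class and its layout are hypothetical, not the namenode implementation.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch: delete-as-rename into a trash directory, with undelete.
public class TrashBin {
    private final Path trashDir;

    public TrashBin(Path trashDir) throws IOException {
        this.trashDir = Files.createDirectories(trashDir);
    }

    // "Delete" a file by renaming it into the trash; a pure namespace
    // operation, so it stays cheap even on a nearly full filesystem.
    public Path delete(Path file) throws IOException {
        Path target = trashDir.resolve(file.getFileName());
        return Files.move(file, target, StandardCopyOption.REPLACE_EXISTING);
    }

    // Undelete: rename the trashed entry back to the requested destination.
    public Path undelete(String name, Path dest) throws IOException {
        return Files.move(trashDir.resolve(name), dest);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("trash-demo");
        Path f = Files.writeString(dir.resolve("data.txt"), "hello");
        TrashBin bin = new TrashBin(dir.resolve(".Trash"));
        bin.delete(f);
        System.out.println("after delete, exists: " + Files.exists(f));
        bin.undelete("data.txt", f);
        System.out.println("after undelete: " + Files.readString(f));
    }
}
```

A real trash would also need a reclamation policy (e.g. purge the oldest entries when space runs low), which is exactly where the minimum-free-space discussion above comes in.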