[ 
http://issues.apache.org/jira/browse/HADOOP-432?page=comments#action_12455068 ] 
            
Yoram Arnon commented on HADOOP-432:
------------------------------------

- Yes, trashSize should be kept up to date.
- I don't understand the concept of 'trash dir is full'. It's just a location in 
DFS - it can't be full unless the DFS itself is full.
- When taking out the trash, the idea is to remove the oldest items *only if 
they're older than mintime*, until the trash size is under maxsize, and then 
stop.
- The trash may end up larger than maxsize if the items in it are too new.
- If the DFS fills up, normal writes will fail until either mintime is reached 
for some trash items, or mintime or maxsize is decreased.

- Perhaps an admin command that clears the trash immediately would help after a 
very large one-time deletion of data.
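The purge policy described above could be sketched as follows. This is only an illustrative sketch, not Hadoop's actual implementation; the names (TrashItem, purge, minTimeMs, maxSizeBytes) are hypothetical:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the trash purge policy: remove the oldest
// items, but only those older than mintime, until total size drops
// under maxsize. Not Hadoop's real API.
public class TrashPurgeSketch {
    static class TrashItem {
        final String path;
        final long sizeBytes;
        final long deletedAtMs;
        TrashItem(String path, long sizeBytes, long deletedAtMs) {
            this.path = path;
            this.sizeBytes = sizeBytes;
            this.deletedAtMs = deletedAtMs;
        }
    }

    /**
     * Removes the oldest items, but only those older than minTimeMs,
     * until the total size is at or below maxSizeBytes. Items newer
     * than minTimeMs are never removed, so the trash may legitimately
     * stay larger than maxSizeBytes. Expects items ordered
     * oldest-first; returns the total size remaining.
     */
    static long purge(Deque<TrashItem> itemsOldestFirst, long nowMs,
                      long minTimeMs, long maxSizeBytes) {
        long total = itemsOldestFirst.stream()
                .mapToLong(i -> i.sizeBytes).sum();
        while (total > maxSizeBytes && !itemsOldestFirst.isEmpty()) {
            TrashItem oldest = itemsOldestFirst.peekFirst();
            if (nowMs - oldest.deletedAtMs <= minTimeMs) {
                break; // oldest item is still too new: stop purging
            }
            itemsOldestFirst.removeFirst();
            total -= oldest.sizeBytes;
        }
        return total;
    }

    public static void main(String[] args) {
        Deque<TrashItem> trash = new ArrayDeque<>();
        long now = 100_000;
        trash.addLast(new TrashItem("/a", 40, now - 50_000)); // old
        trash.addLast(new TrashItem("/b", 40, now - 40_000)); // old
        trash.addLast(new TrashItem("/c", 40, now - 1_000));  // too new
        // With mintime = 10s and maxsize = 50 bytes: /a and /b are
        // purged; /c is kept even though the trash (40 bytes) would
        // otherwise still be eligible for removal on size alone.
        long remaining = purge(trash, now, 10_000, 50);
        System.out.println(remaining); // prints 40
    }
}
```

Note how the loop stops as soon as the oldest remaining item is newer than mintime, which is exactly why the trash can exceed maxsize after a burst of recent deletions.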

> support undelete, snapshots, or other mechanism to recover lost files
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-432
>                 URL: http://issues.apache.org/jira/browse/HADOOP-432
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Yoram Arnon
>         Assigned To: Wendy Chien
>         Attachments: undelete.patch, undelete11.patch, undelete12.patch
>
>
> currently, once you delete a file it's gone forever.
> most file systems allow some form of recovery of deleted files.
> a simple solution would be an 'undelete' command.
> a more comprehensive solution would include snapshots, manual and automatic, 
> with scheduling options.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira