[ http://issues.apache.org/jira/browse/HADOOP-432?page=comments#action_12458270 ]

Doug Cutting commented on HADOOP-432:
-------------------------------------

I'd rather have most of the logic in the client, so that it's shared by all 
filesystem implementations.  The daemon logic should be runnable anywhere, not 
hardwired into the namenode: we could add an option to the namenode that causes 
it to run a generic FS trash-emptying daemon.  For other filesystems (e.g., S3) 
one should be able to, say, add a cron entry that runs a shell command to sweep 
old files out of the trash, or start a separate trash-emptying daemon with 
bin/hadoop-daemon.sh.

Also, will a global trash make sense if we add file protections?  Or should we 
instead have a trash directory per user?  If a user trashes something outside 
his or her home directory, it would move to that home directory's trash. 
That's the way it works on my Linux desktop.
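
A sketch of how that per-user mapping might be computed (the PerUserTrash 
class and the .Trash subdirectory name are hypothetical; getHomeDirectory 
is the generic FileSystem call):

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical helper: compute where a deleted path should land when
    // each user has a private trash under his or her home directory.
    public class PerUserTrash {
      static Path trashPathFor(FileSystem fs, Path deleted) {
        Path home = fs.getHomeDirectory();         // e.g. /user/<name>
        Path trashRoot = new Path(home, ".Trash"); // assumed per-user root
        // Re-root the deleted path under the trash, keeping its original
        // directory structure so an undelete can move it straight back.
        String relative = deleted.toUri().getPath().substring(1);
        return new Path(trashRoot, relative);
      }
    }

Deleting /data/shared/foo as user wendy would then move it to 
/user/wendy/.Trash/data/shared/foo, much like the desktop behavior above.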



> support undelete, snapshots, or other mechanism to recover lost files
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-432
>                 URL: http://issues.apache.org/jira/browse/HADOOP-432
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Yoram Arnon
>         Assigned To: Wendy Chien
>         Attachments: undelete12.patch, undelete16.patch, undelete17.patch
>
>
> Currently, once you delete a file, it's gone forever.
> Most file systems allow some form of recovery of deleted files.
> A simple solution would be an 'undelete' command.
> A more comprehensive solution would include snapshots, manual and automatic, 
> with scheduling options.
