Florin,

That's odd - the feature works just fine for me. Did you restart your
NameNode (NN) after making the change? A restart is required for most
config changes to take effect.
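
For example, on a plain Apache-style install (CDH packages ship their own
service scripts, so the exact commands may differ), a restart is roughly:

  $HADOOP_HOME/bin/hadoop-daemon.sh stop namenode
  $HADOOP_HOME/bin/hadoop-daemon.sh start namenode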

The /user/{user.name}/.Trash folder is only created after the first
file or directory delete, so it won't exist before then.
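
A quick way to verify, with a throwaway file (the user and file names here
are just an illustration):

  hadoop fs -rm /user/florin/some-test-file
  hadoop fs -ls /user/florin/.Trash

After that first delete you should see a Current directory under .Trash
holding the removed path.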

On Thu, Jun 9, 2011 at 6:24 PM, Florin P <florinp...@yahoo.com> wrote:
> Hello!
>  Thank you very much for your answers. Unfortunately, in the mentioned
> version from Cloudera (hadoop-core-0.20.2-cdh3u1), this feature didn't
> work. I "managed" to delete /user/{user.name} with its contents without
> any error being thrown. I've also looked for a ".Trash" folder for
> different users, but it doesn't exist.
>   Any ideas?
> Regards,
>  Florin
>
>
> --- On Thu, 6/9/11, Harsh J <ha...@cloudera.com> wrote:
>
>> From: Harsh J <ha...@cloudera.com>
>> Subject: Re: When rmr and rm strike
>> To: hdfs-user@hadoop.apache.org
>> Date: Thursday, June 9, 2011, 8:22 AM
>> Florin,
>>
>> Jakob's explained where the trashed files would go
>> (/user/{user.name}/.Trash/). Interestingly, this would also safeguard
>> against deleting your own home directory with an error such as:
>>
>> ➜  ~  hadoop dfs -rmr /user/harsh
>> Problem with Trash.. Consider using -skipTrash option
>> rmr: Cannot move "hdfs://localhost/user/harsh" to the trash, as it
>> contains the trash
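>>
>> If something does land in the trash, it can be moved back out with a
>> plain dfs move; the path below is only an illustration:
>>
>>   hadoop dfs -mv /user/harsh/.Trash/Current/some/path /user/harsh/some/path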
>>
>> Regarding the Cloudera bits: you're welcome to post Cloudera usage
>> questions to the cdh-u...@cloudera.org list
>> (https://groups.google.com/a/cloudera.org/group/cdh-user/topics).
>>
>> On Thu, Jun 9, 2011 at 4:35 PM, Jakob Homan <jgho...@gmail.com> wrote:
>> > Files that have been rm'ed but not yet expunged are stored in each
>> > user's .Trash folder within their home directory. This is the
>> > safeguard against accidentally deleting files; adding a prompt is a
>> > non-starter.
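>> >
>> > As a rough illustration (the user name is only an example), the trash
>> > can be listed, and emptied on demand, with the regular fs commands:
>> >
>> >   hadoop fs -ls /user/jakob/.Trash/Current
>> >   hadoop fs -expunge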
>> >
>> > On Thu, Jun 9, 2011 at 2:17 AM, Florin P <florinp...@yahoo.com> wrote:
>> >> OK... Thank you... But where are the deleted files stored? From
>> >> which directory can I recover them? Is there any property to set up
>> >> the folder where the deleted files are kept? I've read something on
>> >> the net, but since the Cloudera version differs from the Hadoop
>> >> version (in some parts), I always have to make sure that I'm doing
>> >> the right thing.
>> >>  Again, thank you for your answers... They are very helpful.
>> >>
>> >> --- On Thu, 6/9/11, Harsh J <ha...@cloudera.com> wrote:
>> >>
>> >>> From: Harsh J <ha...@cloudera.com>
>> >>> Subject: Re: When rmr and rm strike
>> >>> To: hdfs-user@hadoop.apache.org
>> >>> Date: Thursday, June 9, 2011, 3:52 AM
>> >>> Florin,
>> >>>
>> >>> In core-site.xml, simply set "fs.trash.interval" to the number of
>> >>> minutes you want the trash to retain items. A generally good value
>> >>> is 24 hours, i.e. "1440" minutes.
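>> >>>
>> >>> For example, a core-site.xml entry along these lines (the 1440 just
>> >>> mirrors the suggestion above) turns the trash on:
>> >>>
>> >>>   <property>
>> >>>     <name>fs.trash.interval</name>
>> >>>     <value>1440</value>
>> >>>   </property>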
>> >>>
>> >>> On Thu, Jun 9, 2011 at 12:07 PM, Florin P <florinp...@yahoo.com> wrote:
>> >>> > Hello!
>> >>> >  Thank you for your response. Regarding the Trash feature, how
>> >>> > can we properly configure it (config files etc.) in the mentioned
>> >>> > Hadoop version? The information that I've taken from the Guide
>> >>> > (http://hadoop.apache.org/common/docs/r0.20.2/hdfs_design.html)
>> >>> > is a little bit vague.
>> >>> >  Thank you again.
>> >>> > Regards,
>> >>> >  Florin
>> >>> >
>> >>> > --- On Wed, 6/8/11, Harsh J <ha...@cloudera.com> wrote:
>> >>> >
>> >>> >> From: Harsh J <ha...@cloudera.com>
>> >>> >> Subject: Re: When rmr and rm strike
>> >>> >> To: hdfs-user@hadoop.apache.org
>> >>> >> Date: Wednesday, June 8, 2011, 6:39 AM
>> >>> >> A question prompt option must be added, agreed.
>> >>> >>
>> >>> >> For your recovery question, did you/do you have the HDFS Trash
>> >>> >> feature enabled in your cluster?
>> >>> >>
>> >>> >> On Wed, Jun 8, 2011 at 2:20 PM, Florin P <florinp...@yahoo.com> wrote:
>> >>> >> > Hello!
>> >>> >> >   I'm using the Hadoop version from Cloudera,
>> >>> >> > hadoop-core-0.20.2-cdh3u1-SNAPSHOT.jar.
>> >>> >> >    Today I made a mistake. I deleted my user from HDFS with
>> >>> >> > the command
>> >>> >> >    hadoop fs -rmr /user/my_user
>> >>> >> >
>> >>> >> > No question was asked: "Are you sure you want to delete? Are
>> >>> >> > you really sure?" So, panic... what to do? How can I recover
>> >>> >> > my lost data?
>> >>> >> >   From the above real scenario, the following improvements and
>> >>> >> > questions arise:
>> >>> >> > 1. Add a question when you are deleting a folder or a file,
>> >>> >> > such as "Are you sure you want to delete X?"
>> >>> >> > 2. In order to automate the delete process and bypass the
>> >>> >> > above question, add an option to pass the answer to it
>> >>> >> > (sometimes you need this feature).
>> >>> >> > 3. How can I recover a deleted "user" with its associated data
>> >>> >> > (in my case "my_user")?
>> >>> >> > 4. Where does the data from a folder deleted with the "rmr"
>> >>> >> > option go?
>> >>> >> > 5. Where does the data from a folder deleted with the "rm"
>> >>> >> > option go?
>> >>> >> > 6. How can the data deleted in questions 4 and 5 be recovered
>> >>> >> > (undeleted)?
>> >>> >> >
>> >>> >> > Thank you for your answers.
>> >>> >> >
>> >>> >> > Kind regards,
>> >>> >> >  Florin
>> >>> >> >
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >> --
>> >>> >> Harsh J
>> >>> >>
>> >>> >
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Harsh J
>> >>>
>> >>
>> >
>>
>>
>>
>> --
>> Harsh J
>>
>



-- 
Harsh J
