Oops...
I executed the following command:
./hadoop dfs -rmr .
Everything on the DFS, including the trash, seems to be deleted. Is there
a way to recover my data?
Thanks,
Mathijs
--
Knowlogy
Helperpark 290 C
9723 ZA Groningen
[EMAIL PROTECTED]
+31 (0)6 15312977
http://www.knowlogy.nl
Thanks,
I stopped the namenode. How can I remove an entry from the editlog?
FYI: The following action caused the mistake:
I first copied a directory from the DFS to local:
./hadoop dfs -get segments/20070622192310 .
Then, I edited this line (from my command history) to delete the directory:
Mathijs Homminga wrote:
> Thanks,
> I stopped the namenode. How can I remove an entry from the editlog?

I wrote a tool specifically for this purpose ;) but it's not up to date
anymore - I'm not sure how much hacking is required to make it work
again. See HADOOP-915.
--
Best regards,
Andrzej
Mathijs Homminga wrote:
> Thanks,
> I stopped the namenode. How can I remove an entry from the editlog?

... I forgot to add: if you feel adventurous (or desperate) enough, you
can use a binary editor and remove this DELETE record from the file. Be
sure to carefully read FSEditLog.logDelete() and
Since Hadoop 0.12, if you configure fs.trash.interval to a non-zero
value then 'bin/hadoop dfs -rm' will move things to a trash directory
instead of immediately removing them. The Trash is periodically emptied
of older items. Perhaps we should change the default value for this to
60 (one hour).
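For reference, enabling the trash described above is a one-property change in the site configuration. A sketch (the value is in minutes; 60 here is just an example, and 0, the shipped default, disables the trash entirely):

```xml
<!-- hadoop-site.xml: move deleted files to the trash instead of removing them.
     Value is in minutes; 0 (the default) disables the trash entirely. -->
<property>
  <name>fs.trash.interval</name>
  <value>60</value>
</property>
```

With this set, files removed via 'bin/hadoop dfs -rm' land in a trash directory and can simply be moved back until the trash checkpoint expires.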
You saved the day again.
The FSEditLogTool worked like a charm, without modifications
(https://issues.apache.org/jira/browse/HADOOP-915).
Here is what I did to perform an undelete of the root directory on my
HDFS (Hadoop 0.12.2):
- first, I ran a few tests on another dfs to make sure
Mathijs Homminga wrote:
> You saved the day again.
> The FSEditLogTool worked like a charm, without modifications
> (https://issues.apache.org/jira/browse/HADOOP-915).

That's great, I was afraid it was out of sync with your version of Hadoop.

> Here is what I did to perform an undelete