Thanks. What I actually wanted to ask is what will happen if I restore the fsimage and the edits log and try to bring the Hadoop cluster back up. I mentioned corruption in the metadata only as an example of a problem in the metadata that I can't overcome; I assume there are tools to correct corruption in those files.
-----Original Message-----
From: Ravi Prakash [mailto:[email protected]]
Sent: Tuesday, August 30, 2011 4:01 PM
To: [email protected]
Subject: Re: Hadoop Backup & Restore

Hi Avi,

If you restored the metadata from Step 1, it would have no memory of what happened after that point. I guess you could try using the Offline Image Viewer and the OfflineEditsViewer tools to read the corrupted metadata and see if you can recover the blocks from there.

Cheers
Ravi

On Tue, Aug 30, 2011 at 7:05 AM, Avi Vaknin <[email protected]> wrote:
> Hi All,
>
> While discussing a Hadoop backup & restore plan with my team, I thought
> about a scenario I wanted to ask you about.
>
> I wonder what will happen following the steps below:
>
> 1. Back up the namenode's metadata (fsimage & edits log).
>
> 2. Add files to the Hadoop cluster.
>
> 3. Delete files from the Hadoop cluster.
>
> 4. Corruption occurs in the namenode's metadata (or some similar failure).
>
> 5. Restore the namenode's metadata that I backed up in step 1.
>
> How does the namenode handle the blocks written or deleted before the
> corruption? Will I still have access to them? Is there any procedure I need
> to follow in order to "fix" the gap created since the last backup?
>
> Thanks.
>
> Avi
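
For reference, a minimal sketch of how the offline viewer tools Ravi mentions are typically invoked; the hdfs entry point (older releases use the hadoop script instead), the file names, and the paths below are assumptions and vary by Hadoop version:

    # Dump the fsimage into a human-readable listing (read-only; repairs nothing)
    hdfs oiv -i /path/to/dfs/name/current/fsimage -o /tmp/fsimage.txt

    # Dump the edits log to XML for inspection
    hdfs oev -i /path/to/dfs/name/current/edits -o /tmp/edits.xml -p xml

Both tools only read the metadata; they do not correct corruption, but they can show what the namespace contained before an older fsimage/edits pair is restored.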
