If it's not possible to restart the NN daemon on the same box, then yes.

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Wed, Apr 3, 2013 at 9:30 PM, Rahul Bhattacharjee <[email protected]> wrote:

> Thanks to all of you for the precise and complete responses.
>
> So in case of failure we have to bring another backup system up with the
> fsimage and edit logs from the NFS filer. The SNN stays as is for the new NN.
>
> Thanks,
> Rahul
>
>
> On Wed, Apr 3, 2013 at 8:38 PM, Azuryy Yu <[email protected]> wrote:
>
>> For Hadoop v2 there is HA, so the SNN is not necessary.
>>
>> On Apr 3, 2013 10:41 PM, "Rahul Bhattacharjee" <[email protected]> wrote:
>>
>>> Hi all,
>>>
>>> I was reading about Hadoop and learned that there are two ways to
>>> protect against name node failures:
>>>
>>> 1) Write the metadata to an NFS mount in addition to the usual local disk.
>>> -or-
>>> 2) Use a secondary name node. In case of an NN failure, the SNN can take
>>> charge.
>>>
>>> My questions:
>>>
>>> 1) The SNN is always lagging, so when the SNN becomes primary after an
>>> NN failure, any edits that have not yet been merged into the image file
>>> would be lost, and the SNN's state would not be consistent with the NN's
>>> state before the failure.
>>>
>>> 2) I have also read that the other purpose of the SNN is to periodically
>>> merge the edit logs with the image file. If a setup goes with option #1
>>> (writing to NFS, no SNN), then who does this merging?
>>>
>>> Thanks,
>>> Rahul
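For option #1 above, the usual approach is to list more than one metadata directory, so the NN writes its fsimage and edits to each of them. A minimal hdfs-site.xml sketch (the paths are placeholders; the property is dfs.name.dir in Hadoop 1.x and dfs.namenode.name.dir in 2.x):

    <!-- hdfs-site.xml sketch: illustrative only, paths are placeholders.
         The NN writes fsimage and edits to every directory listed, so a
         local disk plus an NFS mount keeps an off-box copy of the metadata. -->
    <property>
      <name>dfs.name.dir</name>  <!-- dfs.namenode.name.dir in Hadoop 2.x -->
      <value>/data/1/dfs/nn,/mnt/nfs/dfs/nn</value>
    </property>

Note that the NFS directory only protects the metadata; it does not give automatic failover, and without an SNN (or a standby NN) doing checkpoints, the edit log simply keeps growing until the next NN restart merges it into a new fsimage.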

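On the Hadoop v2 HA point: with an active and a standby NN sharing edits (for example via the quorum journal manager), the standby also takes over the checkpointing role, so neither the SNN nor the NFS copy is required. A rough hdfs-site.xml sketch, with the nameservice and host names purely as placeholders:

    <!-- Hadoop 2.x HA sketch; "mycluster", "nn1"/"nn2" and the journal
         node hosts are placeholder names, not defaults. -->
    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2</value>
    </property>
    <property>
      <name>dfs.namenode.shared.edits.dir</name>
      <value>qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster</value>
    </property>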