The secondary namenode is not a failover for the namenode:
http://wiki.apache.org/hadoop/FAQ#7
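If the namenode machine is really gone, the recovery path the docs describe is roughly this (a sketch only, assuming the default /tmp layout from your logs and that the secondary on node e has completed at least one checkpoint into fs.checkpoint.dir):

# on node e, after pointing fs.default.name at node e
# dfs.name.dir must exist but must NOT already contain an image
rm -rf /tmp/hadoop-ithurs/dfs/name
mkdir -p /tmp/hadoop-ithurs/dfs/name
# fs.checkpoint.dir (default /tmp/hadoop-ithurs/dfs/namesecondary) must
# hold the checkpoint written by the secondary namenode
bin/hadoop namenode -importCheckpoint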

Billy




"Rakhi Khatwani" <[email protected]> wrote in message news:[email protected]...
Hi,
   I successfully set up the secondary namenode, but I am having issues
when I perform the failover.

I have 5 nodes:
node a - master
node b - slave
node c - slave
node d - slave
node e - secondary namenode

Following are my steps:

1. configuration is as follows for all the nodes:
conf/masters: node e
conf/slaves: node b
             node c
             node d
conf/hadoop-site.xml: fs.default.name: node a
                      mapred.job.tracker: node a

2. ./start-dfs.sh

3. added a couple of files to HDFS.
4. killed the namenode.

5. changed the following properties in hadoop-site for node b, node c,
node d, and node e:
                        fs.default.name: node e
                        mapred.job.tracker: node e
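For concreteness, steps 1-5 amount to roughly the following (a sketch; the hyphenated host names and the 44445 port are placeholders, 44444 is the namenode RPC port from the logs below):

# step 1/5: conf/hadoop-site.xml on every node
cat > conf/hadoop-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://node-a:44444</value>  <!-- step 5: switch to node-e -->
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>node-a:44445</value>         <!-- step 5: switch to node-e -->
  </property>
</configuration>
EOF
# step 2: start HDFS from the master
bin/start-dfs.sh
# step 3: put a couple of files into HDFS
bin/hadoop fs -put /etc/hosts /test-hosts
# step 4: kill the namenode JVM on the master (jps ships with the JDK)
kill $(jps | grep -w NameNode | awk '{print $1}')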



TRIAL 1:
6. copied the name dir from node a to node e

7. executed the following command:
./hadoop namenode -importCheckpoint

I get the following exception:
09/05/18 13:54:59 INFO metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=44444
09/05/18 13:54:59 INFO namenode.NameNode: Namenode up at: germapp/192.168.0.1:44444
09/05/18 13:54:59 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
09/05/18 13:54:59 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
09/05/18 13:54:59 INFO namenode.FSNamesystem: fsOwner=ithurs,ithurs
09/05/18 13:54:59 INFO namenode.FSNamesystem: supergroup=supergroup
09/05/18 13:54:59 INFO namenode.FSNamesystem: isPermissionEnabled=true
09/05/18 13:54:59 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
09/05/18 13:54:59 INFO namenode.FSNamesystem: Registered FSNamesystemStatusMBean
09/05/18 13:54:59 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: Cannot import image from a checkpoint.  NameNode already contains an image in /tmp/hadoop-ithurs/dfs/name
       at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:294)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:290)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:163)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:208)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:194)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:859)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:868)
09/05/18 13:54:59 INFO ipc.Server: Stopping server on 44444
09/05/18 13:54:59 ERROR namenode.NameNode: java.io.IOException: Cannot import image from a checkpoint.  NameNode already contains an image in /tmp/hadoop-ithurs/dfs/name
       at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:294)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:290)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:163)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:208)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:194)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:859)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:868)


TRIAL 2:
6. skipped copying the name dir

7. executed the following command:
./hadoop namenode -importCheckpoint

I get the following exception:
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-ithurs/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
       at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:278)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:290)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:163)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:208)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:194)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:859)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:868)
09/05/18 14:13:41 INFO ipc.Server: Stopping server on 44444
09/05/18 14:13:41 ERROR namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-ithurs/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
       at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:278)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:290)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:163)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:208)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:194)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:859)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:868)

TRIAL 3:
6. created a new, empty directory named "name" in /tmp/hadoop-ithurs/dfs/

7. executed the following command:
./hadoop namenode -importCheckpoint

I get the following exception:
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-ithurs/dfs/namesecondary is in an inconsistent state: /tmp/hadoop-ithurs/dfs/namesecondary/image does not exist.
       at org.apache.hadoop.hdfs.server.namenode.FSImage.isConversionNeeded(FSImage.java:645)
       at org.apache.hadoop.hdfs.server.common.Storage.checkConversionNeeded(Storage.java:590)
       at org.apache.hadoop.hdfs.server.common.Storage.access$000(Storage.java:61)
       at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:369)
       at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:273)
       at org.apache.hadoop.hdfs.server.namenode.FSImage.doImportCheckpoint(FSImage.java:504)
       at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:344)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:290)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:163)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:208)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:194)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:859)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:868)
09/05/18 14:15:12 INFO ipc.Server: Stopping server on 44444
09/05/18 14:15:12 ERROR namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-ithurs/dfs/namesecondary is in an inconsistent state: /tmp/hadoop-ithurs/dfs/namesecondary/image does not exist.
       at org.apache.hadoop.hdfs.server.namenode.FSImage.isConversionNeeded(FSImage.java:645)
       at org.apache.hadoop.hdfs.server.common.Storage.checkConversionNeeded(Storage.java:590)
       at org.apache.hadoop.hdfs.server.common.Storage.access$000(Storage.java:61)
       at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:369)
       at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:273)
       at org.apache.hadoop.hdfs.server.namenode.FSImage.doImportCheckpoint(FSImage.java:504)
       at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:344)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:290)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:163)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:208)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:194)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:859)
       at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:868)


Any pointers/suggestions?

Thanks,
Raakhi



On Fri, May 15, 2009 at 6:21 AM, jason hadoop <[email protected]> wrote:

The masters file only contains the secondary namenodes.
When you run start-dfs.sh or start-all.sh, the namenode, which is the
master, is started on the local machine, and a secondary namenode is
started on each host listed in conf/masters.

This now-confusing pattern is probably the result of some historical
requirement that we are unaware of.

Here are the relevant lines from bin/start-dfs.sh

# start dfs daemons
# start namenode after datanodes, to minimize time namenode is up w/o data
# note: datanodes will log connection errors until namenode starts
"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode
$nameStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode
$dataStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts masters start
secondarynamenode
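
So to start the two separately on two machines, something like this
should work (a sketch, assuming passwordless ssh from the master to the
other hosts; the hyphenated host names are placeholders):

# on the master node (the namenode), in the hadoop directory
echo "node-e" > conf/masters                      # secondary namenode host(s)
printf "node-b\nnode-c\nnode-d\n" > conf/slaves   # datanode hosts
bin/start-dfs.sh   # namenode locally, datanodes on slaves, SNN on node-e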


On Thu, May 14, 2009 at 11:36 PM, Ninad Raut <[email protected]> wrote:

> But if we have two masters in the masters file, we have the master and
> the secondary node, with *both* processes getting started on the two
> servers listed. Can't we have the master and the secondary node started
> separately on two machines?
>
> On Fri, May 15, 2009 at 9:39 AM, jason hadoop <[email protected]> wrote:
>
> > I agree with Billy. conf/masters is misleading as the place for
> > secondary namenodes.
> >
> > On Thu, May 14, 2009 at 8:38 PM, Billy Pearson <[email protected]> wrote:
> >
> > > I think the way the secondary namenode is set in the masters file
> > > in the conf folder is misleading.
> > >
> > > Billy
> > >
> > >
> > >
> > > "Rakhi Khatwani" > > > <[email protected]> wrote in > > > message
> > > news:[email protected]...
> > >
> > >> Hi,
> > >>    I want to set up a cluster of 5 nodes in such a way that:
> > >> node1 - master
> > >> node2 - secondary namenode
> > >> node3 - slave
> > >> node4 - slave
> > >> node5 - slave
> > >>
> > >>
> > >> How do we go about that?
> > >> There is no property in hadoop-env where I can set the IP address
> > >> for the secondary namenode.
> > >>
> > >> If I set node1 and node2 in masters, then when we start DFS, the
> > >> namenode and secondary namenode processes are present on both
> > >> machines, but I think only node1 is active, and my namenode
> > >> failover operation fails.
> > >>
> > >> Any suggestions?
> > >>
> > >> Regards,
> > >> Rakhi
> > >>
> > >>
> > >
> > >
> >
> >
> > --
> > Alpha Chapters of my book on Hadoop are available
> > http://www.apress.com/book/view/9781430219422
> > www.prohadoopbook.com a community for Hadoop Professionals
> >
>



--
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422
www.prohadoopbook.com a community for Hadoop Professionals



