Hi Brian,

I just tested this functionality on a small cluster at my disposal. You do not need to specify the namenode hostname in the conf/masters file; listing only the secondary namenode hostname there does the job. To put it formally:
- If your cluster is already running, shut it down.
- Modify the masters file in the conf directory so that it contains only the hostname of the machine where you want the secondary namenode to start.
- Run ./bin/start-all.sh
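The file edit in the second step can be sketched as below. "snn-host" is a placeholder for your secondary namenode machine, and I am using a scratch directory in place of a real Hadoop installation so the commands are safe to try anywhere; on your cluster you would edit conf/masters in place under your install root and then run ./bin/start-all.sh from there.

```shell
# Sketch of the conf/masters edit only. The file lists one hostname per
# line, and for this setup it should contain just the secondary namenode
# host. "snn-host" is a placeholder; a scratch dir stands in for the
# Hadoop install root.
hadoop_home=$(mktemp -d)
mkdir -p "$hadoop_home/conf"
echo "snn-host" > "$hadoop_home/conf/masters"
cat "$hadoop_home/conf/masters"
```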

Once the startup completes, you can check with jps that the namenode and secondary namenode have come up on their respective hosts.
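For reference, the jps check should look roughly like the following (the PIDs are illustrative, and other daemons such as JobTracker or DataNode may also appear depending on what each host runs):

```
$ jps            # on the primary namenode host
12344 NameNode
$ jps            # on the secondary namenode host
12346 SecondaryNameNode
```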

Hope this helps.

Pratyush

Pratyush Banerjee wrote:
Hi All,

I am facing the same confusion about setting up the secondary namenode on a separate machine. It would be very helpful if anyone could point us in the right direction.

Pratyush

Brian Karlak wrote:
Hello All --

We have a 20-node cluster running 0.17.1. Currently, the secondary namenode process is running on the same machine as the primary namenode. We would like to move it to a separate machine in the cluster, as recommended. However, I cannot seem to find much in the way of documentation on how to do this properly.

I found:

http://hadoop.apache.org/core/docs/current/hdfs_user_guide.html#Secondary+Namenode

It is usually run on a different machine than the primary Namenode since its memory requirements are on the same order as the primary namenode. The secondary namenode is started by bin/start-dfs.sh on the nodes specified in conf/masters file.

as well as the "dfs.secondary.http.address" property in the hadoop-default.xml file.

However, this seems a bit confusing. What is it in conf/masters that specifies that one hostname should be a master, and the other a secondary?

Many thanks in advance for any guidance anyone can give.

Brian

