[
https://issues.apache.org/jira/browse/HDFS-2231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13082101#comment-13082101
]
Allen Wittenauer commented on HDFS-2231:
----------------------------------------
I think the general sentiment I'm left with (and what I think Rajiv is sort of
alluding to) is that, based upon this configuration information, the
architecture isn't operationally sound and/or we're trying to do too much
without a real understanding of how this works in practice.
It makes no sense to use a failover IP *and* have the clients know where the
failover IP might fail over. Especially keep in mind HDFS-34: what will end up
happening is that ops teams will have to eat up 5 addresses for 2 HA-NNs
(1 for the failover, 1 for each of the 2 logical nodes, 1 for each of the 2
physical nodes).
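To spell out that accounting (hostnames here are made up purely for
illustration):
  nn-vip.example.com        - the failover/floating IP
  nn1.example.com           - logical address for the first NN
  nn2.example.com           - logical address for the second NN
  nn1-host.example.com      - physical address of the box running NN1
  nn2-host.example.com      - physical address of the box running NN2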
If we're trying to build both a Failover service and a Scalable service (to use
SunCluster terminology), then the configuration options for those are very,
very different. In a failover scenario, the failover IP should be the *only*
configuration option that clients need. In a Scalable scenario, the gang of
addresses should be the set that clients are configured with. In other words,
these options are mutually exclusive.
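To make the contrast concrete, here is a rough sketch of the two client-side
configurations. Hostnames are made up, and the gang-of-addresses key names just
follow the dfs.nameservices / dfs.ha.namenodes.* / dfs.namenode.rpc-address.*
pattern for illustration; none of this is meant as the final scheme.

A failover-only client needs nothing beyond the single filesystem URI:

  <!-- Failover scenario: the client only knows the floating IP -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://nn-vip.example.com:8020</value>
  </property>

A gang-of-addresses client instead carries the full set of NN addresses:

  <!-- Scalable scenario: the client is configured with every NN address -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>nn1-host.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>nn2-host.example.com:8020</value>
  </property>

Mixing the two, as the current proposal does, gives clients knowledge they
shouldn't need in the failover case and an extra address they shouldn't need in
the scalable case.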
> Configuration changes for HA namenode
> -------------------------------------
>
> Key: HDFS-2231
> URL: https://issues.apache.org/jira/browse/HDFS-2231
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Suresh Srinivas
> Assignee: Suresh Srinivas
> Fix For: HA branch (HDFS-1623)
>
>
> This jira tracks the changes required for configuring HA setup for namenodes.