My guess is to put two sets of these properties
dfs.ha.namenodes.clusterA=nn1,nn2
dfs.namenode.rpc-address.clusterA.nn1=
dfs.namenode.http-address.clusterA.nn1=
dfs.namenode.rpc-address.clusterA.nn2=
dfs.namenode.http-address.clusterA.nn2=
into the client configuration, and then access the cluster like hdfs://clusterA/tmp ...
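A minimal client-side hdfs-site.xml sketch along those lines, assuming a second nameservice named clusterB and hypothetical hostnames; note that the client also needs a dfs.client.failover.proxy.provider.* entry per nameservice so it knows how to pick the active NameNode:

```xml
<!-- Client-side hdfs-site.xml sketch; hostnames and ports are placeholders -->
<configuration>
  <!-- List every nameservice the client should know about -->
  <property>
    <name>dfs.nameservices</name>
    <value>clusterA,clusterB</value>
  </property>

  <!-- Cluster A HA pair -->
  <property>
    <name>dfs.ha.namenodes.clusterA</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.clusterA.nn1</name>
    <value>nnA1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.clusterA.nn2</name>
    <value>nnA2.example.com:8020</value>
  </property>

  <!-- Tells the client how to fail over between nn1 and nn2 -->
  <property>
    <name>dfs.client.failover.proxy.provider.clusterA</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <!-- Repeat the same block for clusterB with its own namenodes -->
</configuration>
```

With this in place, a path like hdfs://clusterA/tmp resolves through the clusterA nameservice, and hdfs://clusterB/... would resolve through the other one.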
I'm having an issue in client code where multiple clusters with HA
namenodes are involved. Example setup using Hadoop 2.3.0:
Cluster A, with the following properties defined in core-site.xml, hdfs-site.xml, etc.:
dfs.nameservices=clusterA
dfs.ha.namenodes.clusterA=nn1,nn2