But that does present a problem if you have to change the DNS address of one of the HA namenodes: it forces you to update the config on every other cluster that wants to talk to it. If you only have a few clusters, that is probably not a big deal, but it can be problematic when many different clusters talk to each other.
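For concreteness, the client-side entries a cluster needs in order to resolve another cluster's HA logical name look roughly like the sketch below (the logical names follow the thread underneath; the host names are made up). Every one of those physical rpc-address values has to be edited on every consuming cluster whenever a namenode's hostname changes:

  <property>
    <name>dfs.nameservices</name>
    <value>hadoop-cluster1-logicalname,hadoop-cluster2-logicalname</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.hadoop-cluster2-logicalname</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hadoop-cluster2-logicalname.nn1</name>
    <value>nn1.cluster2.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hadoop-cluster2-logicalname.nn2</name>
    <value>nn2.cluster2.example.com:8020</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.hadoop-cluster2-logicalname</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>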
--Bobby

On 11/4/13 4:15 PM, "lohit" <lohit.vijayar...@gmail.com> wrote:

>Thanks Suresh!
>
>
>2013/11/4 Suresh Srinivas <sur...@hortonworks.com>
>
>> Lohit,
>>
>> The option you have enumerated at the end is the current way to set up
>> a multi-cluster environment. That is, all the client side configurations
>> will include the following:
>> - Logical service names (either for federation or HA)
>> - The corresponding physical namenode addresses
>>
>> For simpler management, one could use an xml include to pull in an xml
>> document that defines all the namespaces and namenodes.
>>
>> Regards,
>> Suresh
>>
>>
>> On Mon, Nov 4, 2013 at 2:02 PM, lohit <lohit.vijayar...@gmail.com> wrote:
>>
>> > Hello Devs,
>> >
>> > With Hadoop 1.0, where there was a single namespace, one could access
>> > any HDFS cluster using any other Hadoop config. Something like this:
>> >
>> > hadoop --config /path/to/hadoop-cluster1 hdfs://hadoop-cluster2:8020/
>> >
>> > Since the NameNode host and port were passed directly as part of the
>> > URI, as long as the HDFS client versions matched, one could talk to
>> > different clusters without needing access to cluster-specific
>> > configuration.
>> >
>> > With Hadoop 2.0 in HA mode, we only specify a logical name for the
>> > namenode and rely on hdfs-site.xml to resolve that logical name to the
>> > two underlying namenode hosts.
>> >
>> > So you cannot do something like
>> > hadoop --config /path/to/hadoop-cluster1 hdfs://hadoop-cluster2-logicalname/
>> >
>> > since /path/to/hadoop-cluster1/hdfs-site.xml does not have information
>> > about hadoop-cluster2-logicalname's namenodes.
>> >
>> >
>> > One option is to add hadoop-cluster2-logicalname's namenodes to
>> > /path/to/hadoop-cluster1/hdfs-site.xml, but with many clusters this
>> > becomes a problem.
>> > Is there any other, cleaner approach to solving this?
>> >
>> > --
>> > Have a Nice Day!
>> > Lohit
>> >
>>
>>
>>
>> --
>> http://hortonworks.com/download/
>>
>
>
>
>--
>Have a Nice Day!
>Lohit
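For what it's worth, the xml include Suresh mentions can be done with XInclude, which Hadoop's Configuration loader understands. A minimal sketch, assuming a shared file (here called all-nameservices.xml, a made-up name) that carries the dfs.nameservices, dfs.ha.namenodes.* and dfs.namenode.rpc-address.* properties for every cluster:

  <!-- hdfs-site.xml of any cluster; all-nameservices.xml is a hypothetical
       shared file listing every cluster's nameservice and namenode addresses -->
  <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
    <xi:include href="/path/to/shared/all-nameservices.xml"/>
    <!-- cluster-local properties follow -->
  </configuration>

With that included from every cluster's hdfs-site.xml, something like "hadoop --config /path/to/hadoop-cluster1 hdfs://hadoop-cluster2-logicalname/" should resolve, and a namenode hostname change means updating the one shared file and redistributing it rather than hand-editing each cluster's config.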