Great, you've found the solution!

best regards
Grzegorz Grzybek

2015-05-07 17:34 GMT+02:00 Karpov, Ilya <[email protected]>:

> Finally I figured out what was wrong: it was my hdfs-site.xml config :)
> In case anybody meets this problem and finds this email, the minimal conf
> is as follows:
> 1. use hdfs2://mycluster:8020/dir?opts
> 2. put hdfs-site.xml in classpath
> 3. minimal hdfs-site.xml content for HA:
> <property>
>     <name>dfs.nameservices</name>
>     <value>mycluster</value>
> </property>
> <property>
>     <name>dfs.ha.namenodes.mycluster</name>
>     <value>nn1,nn2</value>
> </property>
> <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>     <value>nn1-host1:8020</value>
> </property>
> <property>
>     <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>     <value>nn1-host2:8020</value>
> </property>
> <property>
>     <name>dfs.client.failover.proxy.provider.mycluster</name>
>     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
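>
> (For completeness: these <property> entries go inside the <configuration>
> root element of hdfs-site.xml - a minimal sketch reusing the names above:
>
> <configuration>
>     <property>
>         <name>dfs.nameservices</name>
>         <value>mycluster</value>
>     </property>
>     <!-- ...remaining properties as listed above... -->
> </configuration>
>
> Hadoop will not pick up the properties from a file without that root
> element.)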
>
>
> 2015-05-07 17:16 GMT+03:00 Grzegorz Grzybek <[email protected]>:
> >
> > please create a JIRA issue, I'll investigate it then.
> >
> > regards
> > Grzegorz Grzybek
> >
> > 2015-05-07 14:55 GMT+02:00 Karpov, Ilya <[email protected]>:
> >
> > > Hi guys,
> > > I can't find a way to use the camel-hdfs2 component with Hadoop in HA
> > > mode. I set my HA configuration in core-site.xml and hdfs-site.xml,
> > > put them on the classpath, and set the nameservice name instead of a
> > > host in the hdfs2 endpoint URI, but no luck.
> > >
> > > Is this feature supported in any way?
> > > --
> > > *Ilya Karpov*
> > > Developer
> > >
> > > CleverDATA
> > > make your data clever
> > >
>
>
>
>
> --
> Ilya Karpov
> Developer
>
> CleverDATA
> make your data clever
>
