In $HADOOP_CONF_DIR/core-site.xml I used to have:

<property>
  <name>fs.default.name</name>
  <value>hdfs://somehostname:9000</value>
</property>
When going to HDFS HA, I removed that property and now have:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>

In accumulo-site.xml I don't have instance.volumes or instance.dfs.uri.

On Tue, Aug 5, 2014 at 1:41 PM, Keith Turner <[email protected]> wrote:

> Did your HDFS URI change, and are the errors you are seeing from connecting
> to the old HDFS URI? If so, you may need to configure
> instance.volumes.replacements to replace the old URI in Accumulo metadata.
>
>
> On Tue, Aug 5, 2014 at 1:06 PM, craig w <[email protected]> wrote:
>
>> I've set up an Accumulo 1.6 cluster with Hadoop 2.4.0 (with a secondary
>> namenode). I wanted to convert the secondary namenode to be a standby
>> (hence HDFS HA).
>>
>> After getting HDFS HA up and making sure the Hadoop configuration files
>> were accessible by Accumulo, I started up Accumulo. I noticed some reports
>> of tablet servers failing to connect; they were failing to connect
>> to HDFS over port 9000. That port is not configured/used with HDFS HA, so
>> I'm unsure why they are still trying to talk to HDFS using the old
>> configuration.
>>
>> Any thoughts/ideas? I know Accumulo 1.6 works with HDFS HA, but I'm
>> curious whether the tests have ever been run against a non-HA cluster
>> that was converted to HA (with data in it).
>>
>> --
>> https://github.com/mindscratch
>> https://www.google.com/+CraigWickesser
>> https://twitter.com/mind_scratch
>> https://twitter.com/craig_links

--
https://github.com/mindscratch
https://www.google.com/+CraigWickesser
https://twitter.com/mind_scratch
https://twitter.com/craig_links
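
For what it's worth, Keith's suggestion about instance.volumes.replacements would look roughly like the snippet below in accumulo-site.xml. This is a sketch, not a verified config: it assumes Accumulo's data directory is /accumulo under both the old and new URIs (adjust the paths to match your actual layout), and it pairs the old single-namenode URI from the earlier core-site.xml with the new HA nameservice. The value of instance.volumes.replacements is a comma-separated list of space-separated "old-volume new-volume" pairs.

```xml
<!-- Sketch only: /accumulo is an assumed directory; substitute your real paths. -->

<!-- Tell Accumulo which volume(s) to use going forward (the HA nameservice). -->
<property>
  <name>instance.volumes</name>
  <value>hdfs://mycluster/accumulo</value>
</property>

<!-- Map the old pre-HA volume URI, still recorded in Accumulo's metadata,
     to the new HA nameservice URI so tablet servers stop dialing port 9000. -->
<property>
  <name>instance.volumes.replacements</name>
  <value>hdfs://somehostname:9000/accumulo hdfs://mycluster/accumulo</value>
</property>
```

The replacement pair is needed because Accumulo stores absolute file URIs in its metadata table, so changing fs.defaultFS alone doesn't rewrite the URIs already persisted from before the HA conversion.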
