[ https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16081872#comment-16081872 ]
Luigi Di Fraia commented on HDFS-12109:
---------------------------------------
It is probably also worth mentioning that I am trying to use the HA NameNode
setup with Accumulo 1.8.1 and I am hitting the same problem there (the
nameservice being treated as if it were a hostname in a non-HA NameNode setup)
when I try to init Accumulo or list volumes, as per below:
[accumulo@namenode01 ~]$ /usr/local/accumulo/bin/accumulo admin volumes --list
2017-07-11 09:24:52,380 [start.Main] ERROR: Problem initializing the class loader
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.accumulo.start.Main.getClassLoader(Main.java:94)
        at org.apache.accumulo.start.Main.main(Main.java:47)
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: saccluster
        at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:417)
        at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:130)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:343)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:287)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:156)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2811)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2848)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2830)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:181)
        at org.apache.commons.vfs2.provider.hdfs.HdfsFileSystem.resolveFile(HdfsFileSystem.java:164)
        at org.apache.commons.vfs2.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:84)
        at org.apache.commons.vfs2.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:64)
        at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:804)
        at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:760)
        at org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:709)
        at org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader.resolve(AccumuloVFSClassLoader.java:141)
        at org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader.resolve(AccumuloVFSClassLoader.java:121)
        at org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader.getClassLoader(AccumuloVFSClassLoader.java:211)
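For what it's worth, the same resolution failure can be reproduced with a bare HDFS client, outside of Accumulo's VFS class loader. The sketch below is only illustrative (the class name is mine, not part of Accumulo or Hadoop) and assumes the HA client keys from hdfs-site.xml are not visible on the classpath; in that case FileSystem.get() takes the non-HA path and SecurityUtil.buildTokenService ends up trying to resolve "saccluster" as a plain hostname, exactly as in the trace above:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Illustrative reproduction only; "HaResolveRepro" is a made-up class name.
public class HaResolveRepro {
    public static void main(String[] args) throws Exception {
        // new Configuration() only sees whatever core-site.xml/hdfs-site.xml
        // happen to be on the classpath; if they lack dfs.nameservices and the
        // dfs.ha.* / failover proxy keys, the logical nameservice below is
        // treated as a plain host:port.
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(new URI("hdfs://saccluster"), conf)) {
            System.out.println("Connected via: " + fs.getUri());
        } catch (IllegalArgumentException e) {
            // Expected when the HA keys are missing:
            // java.lang.IllegalArgumentException: java.net.UnknownHostException: saccluster
            System.err.println("Nameservice did not resolve: " + e);
        }
    }
}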
It was because of that exception that I went back one step and tried HDFS
file-system commands directly.
The NameNode Web UI on the active NameNode
(http://namenode01:50070/dfshealth.html#tab-overview) is picking up the HA
NameNode configuration just fine and shows the Namespace as expected:
saccluster.
As a side note, without HA NameNode the setup has been working just fine for me
for quite some time, including using Accumulo with HDFS. It seems like there's
something missing in the way HA NameNode properties are exposed.
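One way I can think of to check what the client side actually sees (just a diagnostic sketch of mine, not anything from the Hadoop or Accumulo tooling; the key names simply mirror the hdfs-site.xml quoted below) is to load the files from HADOOP_CONF_DIR explicitly and dump the HA-related keys:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

// Diagnostic sketch only; the paths below match my HADOOP_CONF_DIR.
public class DumpHaClientKeys {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Load the same files the hdfs/accumulo commands are supposed to pick up.
        conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/core-site.xml"));
        conf.addResource(new Path("/usr/local/hadoop/etc/hadoop/hdfs-site.xml"));

        String[] keys = {
            "fs.defaultFS",
            "dfs.nameservices",
            "dfs.ha.namenodes.saccluster",
            "dfs.namenode.rpc-address.saccluster.namenode01",
            "dfs.namenode.rpc-address.saccluster.namenode02",
            "dfs.client.failover.proxy.provider.saccluster"
        };
        for (String key : keys) {
            // A null value here means the client never sees that HA setting.
            System.out.println(key + " = " + conf.get(key));
        }
    }
}

If any of these come back null on the node where the commands are run, the client would fall back to treating hdfs://saccluster as a plain host, and the UnknownHostException above would be expected.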
> "fs" java.net.UnknownHostException when HA NameNode is used
> -----------------------------------------------------------
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: fs
> Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
> Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of
> "fs" fails:
> [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
> -ls: java.net.UnknownHostException: saccluster
> It works if properties are defined as per below:
> /usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster
> -Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> -Ddfs.ha.namenodes.saccluster=namenode01,namenode02
> -Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020
> -Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /
> These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as
> per below:
> <property>
> <name>dfs.nameservices</name>
> <value>saccluster</value>
> </property>
> <property>
> <name>dfs.ha.namenodes.saccluster</name>
> <value>namenode01,namenode02</value>
> </property>
> <property>
> <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
> <value>namenode01:8020</value>
> </property>
> <property>
> <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
> <value>namenode02:8020</value>
> </property>
> <property>
> <name>dfs.namenode.http-address.saccluster.namenode01</name>
> <value>namenode01:50070</value>
> </property>
> <property>
> <name>dfs.namenode.http-address.saccluster.namenode02</name>
> <value>namenode02:50070</value>
> </property>
> <property>
> <name>dfs.namenode.shared.edits.dir</name>
> <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
> </property>
> <property>
> <name>dfs.client.failover.proxy.provider.mycluster</name>
> <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as
> per below:
> <property>
> <name>fs.defaultFS</name>
> <value>hdfs://saccluster</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:
> export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
> Is "fs" trying to read these properties from somewhere else, such as a
> separate client configuration file?
> Apologies if I am missing something obvious here.