[jira] [Created] (HDFS-12135) Invalid -z option used for nc in org.apache.hadoop.ha.SshFenceByTcpPort under CentOS 7

2017-07-13 Thread Luigi Di Fraia (JIRA)
Luigi Di Fraia created HDFS-12135:
-

 Summary: Invalid -z option used for nc in 
org.apache.hadoop.ha.SshFenceByTcpPort under CentOS 7
 Key: HDFS-12135
 URL: https://issues.apache.org/jira/browse/HDFS-12135
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 2.8.0
 Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[hadoop@namenode01 ~]$ uname -a
Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 
x86_64 x86_64 x86_64 GNU/Linux
[hadoop@namenode01 ~]$ java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
Reporter: Luigi Di Fraia


During a failover scenario caused by manually killing the active NameNode process, and after fuser had failed in the first instance, the following was logged:

2017-07-13 15:59:36,851 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: 
SSH_MSG_NEWKEYS sent
2017-07-13 15:59:36,851 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: 
SSH_MSG_NEWKEYS received
2017-07-13 15:59:36,860 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: 
SSH_MSG_SERVICE_REQUEST sent
2017-07-13 15:59:36,861 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: 
SSH_MSG_SERVICE_ACCEPT received
2017-07-13 15:59:36,871 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: 
Authentications that can continue: 
gssapi-with-mic,publickey,keyboard-interactive,password
2017-07-13 15:59:36,871 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Next 
authentication method: gssapi-with-mic
2017-07-13 15:59:36,876 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: 
Authentications that can continue: publickey,keyboard-interactive,password
2017-07-13 15:59:36,876 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Next 
authentication method: publickey
2017-07-13 15:59:37,048 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: 
Authentication succeeded (publickey).
2017-07-13 15:59:37,049 INFO org.apache.hadoop.ha.SshFenceByTcpPort: Connected 
to namenode02
2017-07-13 15:59:37,049 INFO org.apache.hadoop.ha.SshFenceByTcpPort: Looking 
for process running on port 8020
2017-07-13 15:59:37,502 INFO org.apache.hadoop.ha.SshFenceByTcpPort: 
Indeterminate response from trying to kill service. Verifying whether it is 
running using nc...
2017-07-13 15:59:37,556 WARN org.apache.hadoop.ha.SshFenceByTcpPort: nc -z 
namenode02 8020 via ssh: nc: invalid option -- 'z'
2017-07-13 15:59:37,556 WARN org.apache.hadoop.ha.SshFenceByTcpPort: nc -z 
namenode02 8020 via ssh: Ncat: Try `--help' or man(1) ncat for more 
information, usage options and help. QUITTING.
2017-07-13 15:59:37,557 INFO org.apache.hadoop.ha.SshFenceByTcpPort: Verified 
that the service is down.
2017-07-13 15:59:37,557 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: 
Disconnecting from namenode02 port 22

This was raised previously in HDFS-11308, which was closed as a duplicate of HDFS-3618; the latter does not appear to have been resolved itself (status: Patch Available).

Also, the use of fuser is mentioned in the documentation 
(https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html), but the use of nc as a fallback is not.
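
For reference, the liveness check that the nc fallback is trying to perform can be expressed without the -z flag at all. The sketch below is only an illustration of an equivalent check (it is not what SshFenceByTcpPort actually runs) and assumes bash with /dev/tcp support and the coreutils timeout command on the machine issuing the check; host and port are taken from the log above.

#!/usr/bin/env bash
# Hypothetical stand-in for "nc -z <host> <port>": exit 0 if something is still
# listening on the port, non-zero otherwise. It avoids nc entirely, so it works
# whether the installed nc is traditional netcat or nmap's Ncat.
HOST=namenode02   # target NameNode host, as in the log above
PORT=8020         # NameNode RPC port, as in the log above
if timeout 1 bash -c "exec 3<>/dev/tcp/${HOST}/${PORT}" 2>/dev/null; then
    echo "A process is still listening on ${HOST}:${PORT}"
else
    echo "Nothing is listening on ${HOST}:${PORT}; the service appears to be down"
    exit 1
fi

If the nc fallback cannot be fixed on CentOS 7, one possible workaround (untested on my side) is to configure dfs.ha.fencing.methods with a shell(...) fencing method whose script does its own kill-and-verify, using a port check like the one above for the verification step, instead of relying on sshfence's built-in nc check.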






[jira] [Resolved] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-12 Thread Luigi Di Fraia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luigi Di Fraia resolved HDFS-12109.
---
Resolution: Not A Bug

> "fs" java.net.UnknownHostException when HA NameNode is used
> ---
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of 
> "fs" fails:
> [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
> -ls: java.net.UnknownHostException: saccluster
> It works if properties are defined as per below:
> /usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
> -Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>  -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
> -Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
> -Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /
> These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
> per below:
> <property>
>   <name>dfs.nameservices</name>
>   <value>saccluster</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.saccluster</name>
>   <value>namenode01,namenode02</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
>   <value>namenode01:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
>   <value>namenode02:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode01</name>
>   <value>namenode01:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode02</name>
>   <value>namenode02:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir</name>
>   <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
> </property>
> <property>
>   <name>dfs.client.failover.proxy.provider.mycluster</name>
>   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
>
> In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as per below:
>
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://saccluster</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:
> export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
> Is "fs" trying to read these properties from somewhere else, such as a 
> separate client configuration file?
> Apologies if I am missing something obvious here.






[jira] [Comment Edited] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-12 Thread Luigi Di Fraia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083520#comment-16083520
 ] 

Luigi Di Fraia edited comment on HDFS-12109 at 7/12/17 6:42 AM:


Thanks [~surendrasingh]. Appreciate your help with this. Indeed, it was the 
property name that was using the wrong namespace. Oddly enough, the property I 
was passing on the command line was correctly defined, which somehow masked 
the hdfs-site.xml configuration issue.
I am resolving the issue as "Not a bug".
Thanks again.


was (Author: luigidifraia):
Thanks [~surendrasingh]. Appreciate your help with this. Indeed it was the 
property name that was using the wrong namespace.

> "fs" java.net.UnknownHostException when HA NameNode is used
> ---
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of 
> "fs" fails:
> [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
> -ls: java.net.UnknownHostException: saccluster
> It works if properties are defined as per below:
> /usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
> -Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>  -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
> -Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
> -Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /
> These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
> per below:
> <property>
>   <name>dfs.nameservices</name>
>   <value>saccluster</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.saccluster</name>
>   <value>namenode01,namenode02</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
>   <value>namenode01:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
>   <value>namenode02:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode01</name>
>   <value>namenode01:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode02</name>
>   <value>namenode02:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir</name>
>   <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
> </property>
> <property>
>   <name>dfs.client.failover.proxy.provider.mycluster</name>
>   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
>
> In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as per below:
>
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://saccluster</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:
> export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
> Is "fs" trying to read these properties from somewhere else, such as a 
> separate client configuration file?
> Apologies if I am missing something obvious here.






[jira] [Commented] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-12 Thread Luigi Di Fraia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16083520#comment-16083520
 ] 

Luigi Di Fraia commented on HDFS-12109:
---

Thanks [~surendrasingh]. Appreciate your help with this. Indeed it was the 
property name that was using the wrong namespace.
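
For anyone landing on this issue with the same symptom: in the hdfs-site.xml quoted below, the failover proxy provider was defined as dfs.client.failover.proxy.provider.mycluster while the nameservice actually in use is saccluster, so the client had no failover proxy provider for hdfs://saccluster. A quick way to confirm which keys the client resolves (a suggested check, assuming HADOOP_CONF_DIR points at the intended configuration directory) is:

# All three should print non-empty values once the provider key is suffixed with
# the nameservice in use (saccluster), not a leftover name such as "mycluster".
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.ha.namenodes.saccluster
hdfs getconf -confKey dfs.client.failover.proxy.provider.saccluster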

> "fs" java.net.UnknownHostException when HA NameNode is used
> ---
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of 
> "fs" fails:
> [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
> -ls: java.net.UnknownHostException: saccluster
> It works if properties are defined as per below:
> /usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
> -Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>  -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
> -Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
> -Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /
> These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
> per below:
> <property>
>   <name>dfs.nameservices</name>
>   <value>saccluster</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.saccluster</name>
>   <value>namenode01,namenode02</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
>   <value>namenode01:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
>   <value>namenode02:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode01</name>
>   <value>namenode01:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode02</name>
>   <value>namenode02:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir</name>
>   <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
> </property>
> <property>
>   <name>dfs.client.failover.proxy.provider.mycluster</name>
>   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
>
> In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as per below:
>
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://saccluster</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:
> export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
> Is "fs" trying to read these properties from somewhere else, such as a 
> separate client configuration file?
> Apologies if I am missing something obvious here.






[jira] [Commented] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-11 Thread Luigi Di Fraia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081872#comment-16081872
 ] 

Luigi Di Fraia commented on HDFS-12109:
---

It's also probably worth mentioning that I am trying to use the HA NameNode 
setup with Accumulo 1.8.1 and I am hitting the same problem there (the 
nameservice being treated as if it were a hostname, as in a non-HA NameNode 
setup) when I try to init Accumulo or show volumes, as per below:

[accumulo@namenode01 ~]$ /usr/local/accumulo/bin/accumulo admin volumes --list
2017-07-11 09:24:52,380 [start.Main] ERROR: Problem initializing the class 
loader
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.accumulo.start.Main.getClassLoader(Main.java:94)
at org.apache.accumulo.start.Main.main(Main.java:47)
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: 
saccluster
at 
org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:417)
at 
org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:130)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:343)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:287)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:156)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2811)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2848)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2830)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:181)
at 
org.apache.commons.vfs2.provider.hdfs.HdfsFileSystem.resolveFile(HdfsFileSystem.java:164)
at 
org.apache.commons.vfs2.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:84)
at 
org.apache.commons.vfs2.provider.AbstractOriginatingFileProvider.findFile(AbstractOriginatingFileProvider.java:64)
at 
org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:804)
at 
org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:760)
at 
org.apache.commons.vfs2.impl.DefaultFileSystemManager.resolveFile(DefaultFileSystemManager.java:709)
at 
org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader.resolve(AccumuloVFSClassLoader.java:141)
at 
org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader.resolve(AccumuloVFSClassLoader.java:121)
at 
org.apache.accumulo.start.classloader.vfs.AccumuloVFSClassLoader.getClassLoader(AccumuloVFSClassLoader.java:211)

It was due to the above exception that I then went back one step and tried 
file-system commands for HDFS directly.

The NameNode Web UI on the active NameNode 
(http://namenode01:50070/dfshealth.html#tab-overview) is picking up the HA 
NameNode configuration just fine and shows the Namespace (saccluster) as 
expected.

As a side note, without an HA NameNode this setup has been working just fine for 
me for quite some time, including using Accumulo with HDFS. It seems like 
something is missing in the way the HA NameNode properties are exposed to clients.

> "fs" java.net.UnknownHostException when HA NameNode is used
> ---
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of 
> "fs" fails:
> [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
> -ls: java.net.UnknownHostException: saccluster
> It works if properties are defined as per below:
> /usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
> 

[jira] [Commented] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-11 Thread Luigi Di Fraia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081740#comment-16081740
 ] 

Luigi Di Fraia commented on HDFS-12109:
---

Thanks for your reply [~aw]. I exported the variables as per below for testing 
purposes:

[hadoop@namenode01 ~]$ export HADOOP_PREFIX=/usr/local/hadoop
[hadoop@namenode01 ~]$ export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop

However the issue persists. What I'd like to underline is that part of the 
configuration seems to be visible to file-system tools, based on the exception 
I get:

[hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
-ls: java.net.UnknownHostException: saccluster

Indeed "saccluster" is the nameservice I had configured and the default FS.

> "fs" java.net.UnknownHostException when HA NameNode is used
> ---
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of 
> "fs" fails:
> [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
> -ls: java.net.UnknownHostException: saccluster
> It works if properties are defined as per below:
> /usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
> -Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
>  -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
> -Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
> -Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /
> These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
> per below:
> <property>
>   <name>dfs.nameservices</name>
>   <value>saccluster</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.saccluster</name>
>   <value>namenode01,namenode02</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
>   <value>namenode01:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
>   <value>namenode02:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode01</name>
>   <value>namenode01:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.http-address.saccluster.namenode02</name>
>   <value>namenode02:50070</value>
> </property>
> <property>
>   <name>dfs.namenode.shared.edits.dir</name>
>   <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
> </property>
> <property>
>   <name>dfs.client.failover.proxy.provider.mycluster</name>
>   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
>
> In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as per below:
>
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://saccluster</value>
> </property>
> In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:
> export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"
> Is "fs" trying to read these properties from somewhere else, such as a 
> separate client configuration file?
> Apologies if I am missing something obvious here.






[jira] [Updated] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-10 Thread Luigi Di Fraia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luigi Di Fraia updated HDFS-12109:
--
Description: 
After setting up an HA NameNode configuration, the following invocation of "fs" 
fails:

[hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
-ls: java.net.UnknownHostException: saccluster

It works if properties are defined as per below:

/usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
-Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
 -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
-Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
-Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /

These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
per below:


<property>
  <name>dfs.nameservices</name>
  <value>saccluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.saccluster</name>
  <value>namenode01,namenode02</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
  <value>namenode01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
  <value>namenode02:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode01</name>
  <value>namenode01:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode02</name>
  <value>namenode02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as per below:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://saccluster</value>
</property>

In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:

export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"

Is "fs" trying to read these properties from somewhere else, such as a separate 
client configuration file?

Apologies if I am missing something obvious here.

  was:
After setting up an HA NameNode configuration, the following invocation of "fs" 
fails:

[hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
-ls: java.net.UnknownHostException: saccluster

It works if properties are defined as per below:

/usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
-Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
 -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
-Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
-Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /

These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
per below:


<property>
  <name>dfs.nameservices</name>
  <value>saccluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.saccluster</name>
  <value>namenode01,namenode02</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
  <value>namenode01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
  <value>namenode02:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode01</name>
  <value>namenode01:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode02</name>
  <value>namenode02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as per below:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://saccluster</value>
</property>

In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:

export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"

Is "fs" trying to read these properties from somewhere else, such as a separate 
client configuration file?

Apologies if I am missing something obvious here.


> "fs" java.net.UnknownHostException when HA NameNode is used
> ---
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of 
> "fs" 

[jira] [Updated] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-10 Thread Luigi Di Fraia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luigi Di Fraia updated HDFS-12109:
--
Description: 
After setting up an HA NameNode configuration, the following invocation of "fs" 
fails:

[hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
-ls: java.net.UnknownHostException: saccluster

It works if properties are defined as per below:

/usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
-Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
 -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
-Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
-Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /

These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
per below:


<property>
  <name>dfs.nameservices</name>
  <value>saccluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.saccluster</name>
  <value>namenode01,namenode02</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
  <value>namenode01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
  <value>namenode02:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode01</name>
  <value>namenode01:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode02</name>
  <value>namenode02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

In /usr/local/hadoop/etc/hadoop/core-site.xml the default FS is defined as per below:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://saccluster</value>
</property>

In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:

export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"

Is "fs" trying to read these properties from somewhere else, such as a separate 
client configuration file?

Apologies if I am missing something obvious here.

  was:
After setting up an HA NameNode configuration, the following invocation of "fs" 
fails:

[hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
-ls: java.net.UnknownHostException: saccluster

It works if properties are defined as per below:

/usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
-Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
 -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
-Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
-Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /

These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
per below:


<property>
  <name>dfs.nameservices</name>
  <value>saccluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.saccluster</name>
  <value>namenode01,namenode02</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
  <value>namenode01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
  <value>namenode02:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode01</name>
  <value>namenode01:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode02</name>
  <value>namenode02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:

export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"

Is "fs" trying to read these properties from somewhere else, such as a separate 
client configuration file?

Apologies if I am missing something obvious here.


> "fs" java.net.UnknownHostException when HA NameNode is used
> ---
>
> Key: HDFS-12109
> URL: https://issues.apache.org/jira/browse/HDFS-12109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0
> Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
> [hadoop@namenode01 ~]$ uname -a
> Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> [hadoop@namenode01 ~]$ java -version
> java version "1.8.0_131"
> Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
> Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
>Reporter: Luigi Di Fraia
>
> After setting up an HA NameNode configuration, the following invocation of 
> "fs" fails:
> [hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
> -ls: java.net.UnknownHostException: saccluster
> It works if properties are 

[jira] [Created] (HDFS-12109) "fs" java.net.UnknownHostException when HA NameNode is used

2017-07-10 Thread Luigi Di Fraia (JIRA)
Luigi Di Fraia created HDFS-12109:
-

 Summary: "fs" java.net.UnknownHostException when HA NameNode is 
used
 Key: HDFS-12109
 URL: https://issues.apache.org/jira/browse/HDFS-12109
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs
Affects Versions: 2.8.0
 Environment: [hadoop@namenode01 ~]$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[hadoop@namenode01 ~]$ uname -a
Linux namenode01 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 
x86_64 x86_64 x86_64 GNU/Linux
[hadoop@namenode01 ~]$ java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
Reporter: Luigi Di Fraia


After setting up an HA NameNode configuration, the following invocation of "fs" 
fails:

[hadoop@namenode01 ~]$ /usr/local/hadoop/bin/hdfs dfs -ls /
-ls: java.net.UnknownHostException: saccluster

It works if properties are defined as per below:

/usr/local/hadoop/bin/hdfs dfs -Ddfs.nameservices=saccluster 
-Ddfs.client.failover.proxy.provider.saccluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
 -Ddfs.ha.namenodes.saccluster=namenode01,namenode02 
-Ddfs.namenode.rpc-address.saccluster.namenode01=namenode01:8020 
-Ddfs.namenode.rpc-address.saccluster.namenode02=namenode02:8020 -ls /

These properties are defined in /usr/local/hadoop/etc/hadoop/hdfs-site.xml as 
per below:


<property>
  <name>dfs.nameservices</name>
  <value>saccluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.saccluster</name>
  <value>namenode01,namenode02</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode01</name>
  <value>namenode01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.saccluster.namenode02</name>
  <value>namenode02:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode01</name>
  <value>namenode01:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.saccluster.namenode02</name>
  <value>namenode02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485/saccluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

In /usr/local/hadoop/etc/hadoop/hadoop-env.sh the following export is defined:

export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"

Is "fs" trying to read these properties from somewhere else, such as a separate 
client configuration file?

Apologies if I am missing something obvious here.


