[
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372050#comment-16372050
]
Greg Senia commented on HADOOP-15250:
-------------------------------------
Interface information. The interface *eno33559296* is the cluster interface, which is
non-routable. Interface *eno16780032* is the server network's publicly
accessible interface; no Hadoop traffic flows over it unless that traffic
originates or terminates outside of the Hadoop cluster. I've also included the
routing table below and a network trace showing the wrong IP being used,
because Client.java binds outbound calls to whatever address the hostname
resolves to in DNS or /etc/hosts.
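For reference, here is a stripped-down sketch of the two binding strategies at
issue. The class and method names are mine, purely for illustration; the real
logic lives in Client.setupConnection(), which is quoted further down.

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class OutboundBindSketch {

  // Roughly what the current IPC client does for kerberized connections:
  // resolve the host name and pin the socket's source address to whatever
  // it resolves to, regardless of where the destination is routed. On this
  // node the name resolves to 10.70.49.1, the non-routable cluster NIC.
  static Socket bindToResolvedHostname(String host) throws Exception {
    Socket s = new Socket();
    s.setReuseAddress(true);
    InetAddress addr = InetAddress.getByName(host);
    s.bind(new InetSocketAddress(addr, 0));
    return s;
  }

  // The alternative argued for in this issue: an anonymous bind leaves the
  // choice of source address (and therefore interface) to the kernel
  // routing table at connect() time.
  static Socket bindAnonymously() throws Exception {
    Socket s = new Socket();
    s.setReuseAddress(true);
    s.bind(null);
    return s;
  }
}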
[root@ha21d52mn yarn]# ifconfig -a
eno16780032: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.69.81.1 netmask 255.255.240.0 broadcast 10.69.95.255
inet6 fe80::250:56ff:fe82:4934 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:82:49:34 txqueuelen 1000 (Ethernet)
RX packets 84514982 bytes 25768335637 (23.9 GiB)
RX errors 0 dropped 6172 overruns 0 frame 0
TX packets 83332181 bytes 18600190794 (17.3 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eno33559296: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.70.49.1 netmask 255.255.240.0 broadcast 10.70.63.255
inet6 fe80::250:56ff:fe82:379c prefixlen 64 scopeid 0x20<link>
ether 00:50:56:82:37:9c txqueuelen 1000 (Ethernet)
RX packets 1649741562 bytes 868670646085 (809.0 GiB)
RX errors 0 dropped 5052 overruns 0 frame 0
TX packets 1248707764 bytes 1782972383010 (1.6 TiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@ha21d52mn yarn]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         gateway         0.0.0.0         UG    0      0        0 eno16780032
10.69.80.0      0.0.0.0         255.255.240.0   U     0      0        0 eno16780032
10.70.48.0      0.0.0.0         255.255.240.0   U     0      0        0 eno33559296
link-local      0.0.0.0         255.255.0.0     U     1002   0        0 eno16780032
link-local      0.0.0.0         255.255.0.0     U     1003   0        0 eno33559296
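Given that routing table, a quick way to confirm which source address the
kernel would pick on its own is a connected UDP socket probe. This is a
hypothetical snippet, not part of Hadoop or the patch; connect() on a
DatagramSocket sends no packets, it only forces a route lookup.

import java.net.DatagramSocket;
import java.net.InetAddress;

public class RouteProbe {
  public static void main(String[] args) throws Exception {
    DatagramSocket probe = new DatagramSocket();
    // Remote NameNode address taken from the trace and logs below.
    probe.connect(InetAddress.getByName("10.69.49.7"), 8020);
    // 10.69.49.7 falls under the default route, so this should print the
    // server-network address 10.69.81.1 (eno16780032), not 10.70.49.1.
    System.out.println(probe.getLocalAddress());
    probe.close();
  }
}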
Here is an example of the attempt to send traffic over the wrong interface
before the patch:
[root@ha21d52mn yarn]# tcpdump -s0 -i eno16780032 -nn host ha21t51nn.tech.hdp
or host ha21t52nn.tech.hdp and port 8020
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eno16780032, link-type EN10MB (Ethernet), capture size 65535 bytes
16:27:35.656967 IP 10.70.49.1.34065 > 10.69.49.7.8020: Flags [S], seq 5653335,
win 29200, options [mss 1460,sackOK,TS val 3923129189 ecr 0,nop,wscale 7],
length 0
16:27:36.659542 IP 10.70.49.1.34065 > 10.69.49.7.8020: Flags [S], seq 5653335,
win 29200, options [mss 1460,sackOK,TS val 3923130192 ecr 0,nop,wscale 7],
length 0
16:27:38.663551 IP 10.70.49.1.34065 > 10.69.49.7.8020: Flags [S], seq
5653335, win 29200, options [mss 1460,sackOK,TS val 3923132196 ecr 0,nop,wscale
7], length 0
16:27:42.675539 IP 10.70.49.1.34065 > 10.69.49.7.8020: Flags [S], seq 5653335,
win 29200, options [mss 1460,sackOK,TS val 3923136208 ecr 0,nop,wscale 7],
length 0
^C
2018-02-21 16:23:55,075 INFO retry.RetryInvocationHandler
(RetryInvocationHandler.java:log(267)) - Exception while invoking
ClientNamenodeProtocolTranslatorPB.renewDelegationToken over
ha21t52nn.tech.hdp.example.com/10.69.49.7:8020 after 1 failover attempts.
Trying to failover after sleeping for 1290ms.
org.apache.hadoop.net.ConnectTimeoutException: Call From
ha21d52mn.unit.hdp.example.com/10.70.49.1 to
ha21t52nn.tech.hdp.example.com:8020 failed on socket timeout exception:
org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while
waiting for channel to be ready for connect. ch :
java.nio.channels.SocketChannel[connection-pending local=/10.70.49.1:35231
remote=ha21t52nn.tech.hdp.example.com/10.69.49.7:8020]; For more details see:
http://wiki.apache.org/hadoop/SocketTimeout
at sun.reflect.GeneratedConstructorAccessor205.newInstance(Unknown Source)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:751)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1556)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy93.renewDelegationToken(Unknown Source)
at
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewDelegationToken(ClientNamenodeProtocolTranslatorPB.java:993)
at sun.reflect.GeneratedMethodAccessor82.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy94.renewDelegationToken(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$Renewer.renew(DFSClient.java:1141)
at org.apache.hadoop.security.token.Token.renew(Token.java:414)
at
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:597)
at
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:594)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1740)
at
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.renewToken(DelegationTokenRenewer.java:592)
at
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:461)
at
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$800(DelegationTokenRenewer.java:78)
at
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:904)
at
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:881)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout
while waiting for channel to be ready for connect. ch :
java.nio.channels.SocketChannel[connection-pending local=/10.70.49.1:35231
remote=ha21t52nn.tech.hdp.example.com/10.69.49.7:8020]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:650)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:745)
at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1618)
at org.apache.hadoop.ipc.Client.call(Client.java:1449)
... 26 more
2018-02-21 16:24:16,387 INFO retry.RetryInvocationHandler
(RetryInvocationHandler.java:log(267)) - Exception while invoking
ClientNamenodeProtocolTranslatorPB.renewDelegationToken over
ha21t51nn.tech.hdp.example.com/10.69.49.6:8020 after 2 failover attempts.
Trying to failover after sleeping for 2615ms.
org.apache.hadoop.net.ConnectTimeoutException: Call From
ha21d52mn.unit.hdp.example.com/10.70.49.1 to
ha21t51nn.tech.hdp.example.com:8020 failed on socket timeout exception:
org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while
waiting for channel to be ready for connect. ch :
java.nio.channels.SocketChannel[connection-pending local=/10.70.49.1:37292
remote=ha21t51nn.tech.hdp.example.com/10.69.49.6:8020]; For more details see:
http://wiki.apache.org/hadoop/SocketTimeout
at sun.reflect.GeneratedConstructorAccessor205.newInstance(Unknown Source)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:751)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1556)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy93.renewDelegationToken(Unknown Source)
at
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewDelegationToken(ClientNamenodeProtocolTranslatorPB.java:993)
at sun.reflect.GeneratedMethodAccessor82.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy94.renewDelegationToken(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$Renewer.renew(DFSClient.java:1141)
at org.apache.hadoop.security.token.Token.renew(Token.java:414)
at
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:597)
at
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:594)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1740)
at
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.renewToken(DelegationTokenRenewer.java:592)
at
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:461)
at
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$800(DelegationTokenRenewer.java:78)
at
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:904)
at
org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:881)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout
while waiting for channel to be ready for connect. ch :
java.nio.channels.SocketChannel[connection-pending local=/10.70.49.1:37292
remote=ha21t51nn.tech.hdp.example.com/10.69.49.6:8020]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:650)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:745)
at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1618)
at org.apache.hadoop.ipc.Client.call(Client.java:1449)
> MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> --------------------------------------------------------------------
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
> Issue Type: Improvement
> Components: ipc, net
> Affects Versions: 2.7.3, 2.9.0, 3.0.0
> Reporter: Greg Senia
> Priority: Critical
>
> We run our Hadoop clusters with two networks attached to each node. The first
> is a server network that is firewalled with firewalld, allowing inbound
> traffic only for SSH and services such as Knox, HiveServer2, and the HTTP
> endpoints of the YARN RM/ATS and the MR History Server. The second is the
> cluster network on the second network interface; it uses jumbo frames, has no
> restrictions, and carries all cluster traffic between nodes.
>
> To resolve DNS within the Hadoop cluster we use DNS views via BIND: if the
> traffic originates from nodes on the cluster network, we return the internal
> DNS record for those nodes. This all works fine with the multi-homing features
> added in Hadoop 2.x.
> Some logic around views:
> a. The internal view is used by cluster machines when performing lookups, so
> hosts on the cluster network should get answers from the internal view in DNS.
> b. The external view is used by non-local-cluster machines when performing
> lookups, so hosts not on the cluster network should get answers from the
> external view in DNS.
>
> So this brings me to our problem. We created some firewall rules to allow
> inbound traffic from each cluster's server network so that distcp could occur.
> But we noticed almost immediately that when YARN attempted to talk to the
> remote cluster, it bound outgoing traffic to the cluster network interface,
> which IS NOT routable. After researching the code we noticed the following in
> NetUtils.java and Client.java.
> Basically, Client.java takes the hostname and attempts to bind to whatever
> address that hostname resolves to. This is not valid in a multi-homed network
> with one routable interface and one non-routable interface. After reading
> through the java.net.Socket documentation, it is valid to perform
> socket.bind(null), which lets the OS routing table and DNS send the traffic
> out the correct interface. I will also attach the network traces and a test
> patch for the 2.7.x and 3.x code bases. I have this test fix running in my
> Hadoop test cluster.
> Client.java:
>
> /*
>  * Bind the socket to the host specified in the principal name of the
>  * client, to ensure Server matching address of the client connection
>  * to host name in principal passed.
>  */
> InetSocketAddress bindAddr = null;
> if (ticket != null && ticket.hasKerberosCredentials()) {
>   KerberosInfo krbInfo =
>     remoteId.getProtocol().getAnnotation(KerberosInfo.class);
>   if (krbInfo != null) {
>     String principal = ticket.getUserName();
>     String host = SecurityUtil.getHostFromPrincipal(principal);
>     // If host name is a valid local address then bind socket to it
>     InetAddress localAddr = NetUtils.getLocalInetAddress(host);
>     if (localAddr != null) {
>       this.socket.setReuseAddress(true);
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("Binding " + principal + " to " + localAddr);
>       }
>       bindAddr = new InetSocketAddress(localAddr, 0);
>     }
>   }
> }
>
> So in my Hadoop 2.7.x cluster I made the following changes, and traffic now
> flows out the correct interfaces:
>
> diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> index e1be271..c5b4a42 100644
> --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> @@ -305,6 +305,9 @@
>    public static final String IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_KEY
>      = "ipc.client.fallback-to-simple-auth-allowed";
>    public static final boolean IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_DEFAULT = false;
>
> +  public static final String IPC_CLIENT_NO_BIND_LOCAL_ADDR_KEY = "ipc.client.nobind.local.addr";
> +  public static final boolean IPC_CLIENT_NO_BIND_LOCAL_ADDR_DEFAULT = false;
> +
>    public static final String IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY =
>      "ipc.client.connect.max.retries.on.sasl";
>    public static final int IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_DEFAULT = 5;
> diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
> index a6f4eb6..7bfddb7 100644
> --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
> +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
> @@ -129,7 +129,9 @@ public static void setCallIdAndRetryCount(int cid, int rc) {
>
>    private final int connectionTimeout;
>
> +
>    private final boolean fallbackAllowed;
> +  private final boolean noBindLocalAddr;
>    private final byte[] clientId;
>
>    final static int CONNECTION_CONTEXT_CALL_ID = -3;
> @@ -642,7 +644,11 @@ private synchronized void setupConnection() throws IOException {
>            InetAddress localAddr = NetUtils.getLocalInetAddress(host);
>            if (localAddr != null) {
>              this.socket.setReuseAddress(true);
> -            this.socket.bind(new InetSocketAddress(localAddr, 0));
> +            if (noBindLocalAddr) {
> +              this.socket.bind(null);
> +            } else {
> +              this.socket.bind(new InetSocketAddress(localAddr, 0));
> +            }
>            }
>          }
>        }
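One thing the diff above does not show is where noBindLocalAddr is initialized.
Presumably the Client constructor reads the new key, roughly along these lines
(my guess at the missing hunk, not taken from the attached patch):

  // Assumed initialization in the Client(...) constructor, not shown in the diff:
  this.noBindLocalAddr = conf.getBoolean(
      CommonConfigurationKeys.IPC_CLIENT_NO_BIND_LOCAL_ADDR_KEY,
      CommonConfigurationKeys.IPC_CLIENT_NO_BIND_LOCAL_ADDR_DEFAULT);

With that in place, an operator opts in by setting ipc.client.nobind.local.addr
to true in core-site.xml on the multi-homed hosts; the default of false
preserves today's behavior.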