[hadoop] 31/50: HDFS-14017. [SBN read] ObserverReadProxyProviderWithIPFailover should work with HA configuration. Contributed by Chen Liang.

2019-07-25 Thread cliang
This is an automated email from the ASF dual-hosted git repository.

cliang pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 96cdd13de58c4b4bbb57751642547f53405fda9e
Author: Chen Liang 
AuthorDate: Fri Nov 16 17:30:29 2018 -0800

HDFS-14017. [SBN read] ObserverReadProxyProviderWithIPFailover should work with HA configuration. Contributed by Chen Liang.
---
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |  3 +
 .../ObserverReadProxyProviderWithIPFailover.java   | 97 +++---
 2 files changed, 89 insertions(+), 11 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
index 52a7cd0..00fb12d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
@@ -181,6 +181,9 @@ public interface HdfsClientConfigKeys {
   String DFS_NAMENODE_SNAPSHOT_CAPTURE_OPENFILES =
   "dfs.namenode.snapshot.capture.openfiles";
   boolean DFS_NAMENODE_SNAPSHOT_CAPTURE_OPENFILES_DEFAULT = false;
+  
+  String DFS_CLIENT_FAILOVER_IPFAILOVER_VIRTUAL_ADDRESS =
+  Failover.PREFIX + "ipfailover.virtual-address";
 
   /**
* These are deprecated config keys to client code.
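
For reference, Failover.PREFIX resolves to "dfs.client.failover.", so the new
key is dfs.client.failover.ipfailover.virtual-address, read per nameservice by
appending the nameservice id (as the Javadoc example further down shows). A
minimal lookup sketch, illustrative only and not part of this patch:

    Configuration conf = new Configuration();
    String nsId = "mycluster";  // example nameservice id, not from this patch
    String virtualAddr = conf.get(
        HdfsClientConfigKeys.DFS_CLIENT_FAILOVER_IPFAILOVER_VIRTUAL_ADDRESS
            + "." + nsId);      // e.g. "nn01.com:8020"
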
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProviderWithIPFailover.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProviderWithIPFailover.java
index 1dbd02c..751bc3b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProviderWithIPFailover.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProviderWithIPFailover.java
@@ -17,24 +17,99 @@
  */
 package org.apache.hadoop.hdfs.server.namenode.ha;
 
-import java.io.IOException;
 import java.net.URI;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_FAILOVER_IPFAILOVER_VIRTUAL_ADDRESS;
 
 /**
- * ObserverReadProxyProvider with IPFailoverProxyProvider
- * as the failover method.
+ * Extends {@link ObserverReadProxyProvider} to support NameNode IP failover.
+ *
+ * For Observer reads a client needs to know the physical addresses of all
+ * NameNodes, so that it can switch between active and observer nodes
+ * for write and read requests.
+ *
+ * The traditional {@link IPFailoverProxyProvider} works with a virtual
+ * address of the NameNode. If the active NameNode fails, the virtual
+ * address is reassigned to the standby NameNode, so IPFailoverProxyProvider,
+ * which keeps talking to the same virtual address, in fact connects to
+ * the new physical server.
+ *
+ * To combine these behaviors ObserverReadProxyProviderWithIPFailover
+ * should both:
+ * <ol>
+ * <li>maintain all physical addresses of NameNodes in order to allow
+ * observer reads, and</li>
+ * <li>rely on the virtual address of the NameNode in order to
+ * perform failover, by assuming that the virtual address always points
+ * to the active NameNode.</li>
+ * </ol>
+ *
+ * An example configuration that leverages
+ * ObserverReadProxyProviderWithIPFailover
+ * includes the following values:
+ * <pre>{@code
+ * fs.defaultFS = hdfs://mycluster
+ * dfs.nameservices = mycluster
+ * dfs.ha.namenodes.mycluster = ha1,ha2
+ * dfs.namenode.rpc-address.mycluster.ha1 = nn01-ha1.com:8020
+ * dfs.namenode.rpc-address.mycluster.ha2 = nn01-ha2.com:8020
+ * dfs.client.failover.ipfailover.virtual-address.mycluster = nn01.com:8020
+ * dfs.client.failover.proxy.provider.mycluster =
+ * org.apache...ObserverReadProxyProviderWithIPFailover
+ * }</pre>
+ * Here {@code nn01.com:8020} is the virtual address of the active NameNode,
+ * while {@code nn01-ha1.com:8020} and {@code nn01-ha2.com:8020}
+ * are the physical addresses of the two NameNodes.
+ *
+ * With this configuration, the client will use
+ * ObserverReadProxyProviderWithIPFailover, which creates proxies for both
+ * nn01-ha1 and nn01-ha2 for read and write RPC calls, while relying on
+ * the virtual address nn01.com for failover.
  */
-public class
-ObserverReadProxyProviderWithIPFailover<T extends ClientProtocol>
-extends ObserverReadProxyProvider<T> {

+public class ObserverReadProxyProviderWithIPFailover<T extends ClientProtocol>
+    extends ObserverReadProxyProvider<T> {
+  private static final Logger LOG = LoggerFactory.getLogger(
+  ObserverReadProxyProviderWithIPFailover.class);
+
+  /**
+   * By default ObserverReadProxyProviderWithIPFailover
+   * uses {@link IPFailoverProxyProvider} for failover.
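
A rough sketch of the failover-address resolution the Javadoc describes; the
helper name and URI handling are illustrative assumptions, not the committed
code:

    // Keep the multi-NameNode HA configuration for observer reads, but
    // derive the failover target from the configured virtual address.
    private static URI getFailoverUri(Configuration conf, URI nameNodeUri) {
      String nsId = nameNodeUri.getHost();           // e.g. "mycluster"
      String virtualAddr = conf.get(
          DFS_CLIENT_FAILOVER_IPFAILOVER_VIRTUAL_ADDRESS + "." + nsId);
      if (virtualAddr == null) {
        throw new IllegalArgumentException(
            "No virtual address configured for nameservice " + nsId);
      }
      return URI.create("hdfs://" + virtualAddr);    // e.g. hdfs://nn01.com:8020
    }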

[hadoop] 31/50: HDFS-14017. [SBN read] ObserverReadProxyProviderWithIPFailover should work with HA configuration. Contributed by Chen Liang.

2019-06-28 Thread cliang
This is an automated email from the ASF dual-hosted git repository.

cliang pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 03a2c60a4b6d14a9789c398d2cc0d902a4274621
Author: Chen Liang 
AuthorDate: Fri Nov 16 17:30:29 2018 -0800

HDFS-14017. [SBN read] ObserverReadProxyProviderWithIPFailover should work with HA configuration. Contributed by Chen Liang.
---
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |  3 +
 .../ObserverReadProxyProviderWithIPFailover.java   | 97 +++---
 2 files changed, 89 insertions(+), 11 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
index a812670..20ea776 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java
@@ -187,6 +187,9 @@ public interface HdfsClientConfigKeys {
   String DFS_PROVIDED_ALIASMAP_INMEMORY_RPC_ADDRESS =
   "dfs.provided.aliasmap.inmemory.dnrpc-address";
 
+  String DFS_CLIENT_FAILOVER_IPFAILOVER_VIRTUAL_ADDRESS =
+  Failover.PREFIX + "ipfailover.virtual-address";
+
   /**
* These are deprecated config keys to client code.
*/
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProviderWithIPFailover.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProviderWithIPFailover.java
index 1dbd02c..751bc3b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProviderWithIPFailover.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProviderWithIPFailover.java
@@ -17,24 +17,99 @@
  */
 package org.apache.hadoop.hdfs.server.namenode.ha;
 
-import java.io.IOException;
 import java.net.URI;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_CLIENT_FAILOVER_IPFAILOVER_VIRTUAL_ADDRESS;
 
 /**
- * ObserverReadProxyProvider with IPFailoverProxyProvider
- * as the failover method.
+ * Extends {@link ObserverReadProxyProvider} to support NameNode IP failover.
+ *
+ * For Observer reads a client needs to know the physical addresses of all
+ * NameNodes, so that it can switch between active and observer nodes
+ * for write and read requests.
+ *
+ * The traditional {@link IPFailoverProxyProvider} works with a virtual
+ * address of the NameNode. If the active NameNode fails, the virtual
+ * address is reassigned to the standby NameNode, so IPFailoverProxyProvider,
+ * which keeps talking to the same virtual address, in fact connects to
+ * the new physical server.
+ *
+ * To combine these behaviors ObserverReadProxyProviderWithIPFailover
+ * should both:
+ * <ol>
+ * <li>maintain all physical addresses of NameNodes in order to allow
+ * observer reads, and</li>
+ * <li>rely on the virtual address of the NameNode in order to
+ * perform failover, by assuming that the virtual address always points
+ * to the active NameNode.</li>
+ * </ol>
+ *
+ * An example configuration that leverages
+ * ObserverReadProxyProviderWithIPFailover
+ * includes the following values:
+ * <pre>{@code
+ * fs.defaultFS = hdfs://mycluster
+ * dfs.nameservices = mycluster
+ * dfs.ha.namenodes.mycluster = ha1,ha2
+ * dfs.namenode.rpc-address.mycluster.ha1 = nn01-ha1.com:8020
+ * dfs.namenode.rpc-address.mycluster.ha2 = nn01-ha2.com:8020
+ * dfs.client.failover.ipfailover.virtual-address.mycluster = nn01.com:8020
+ * dfs.client.failover.proxy.provider.mycluster =
+ * org.apache...ObserverReadProxyProviderWithIPFailover
+ * }</pre>
+ * Here {@code nn01.com:8020} is the virtual address of the active NameNode,
+ * while {@code nn01-ha1.com:8020} and {@code nn01-ha2.com:8020}
+ * are the physical addresses of the two NameNodes.
+ *
+ * With this configuration, the client will use
+ * ObserverReadProxyProviderWithIPFailover, which creates proxies for both
+ * nn01-ha1 and nn01-ha2 for read and write RPC calls, while relying on
+ * the virtual address nn01.com for failover.
  */
-public class
-ObserverReadProxyProviderWithIPFailover<T extends ClientProtocol>
-extends ObserverReadProxyProvider<T> {

+public class ObserverReadProxyProviderWithIPFailover<T extends ClientProtocol>
+    extends ObserverReadProxyProvider<T> {
+  private static final Logger LOG = LoggerFactory.getLogger(
+  ObserverReadProxyProviderWithIPFailover.class);
+
+  /**
+   * By default ObserverReadProxyProviderWithIPFailover
+   * uses {@link IPFailoverProxyProvider} for failover.
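
Under the example configuration from the Javadoc, application code needs no
changes, since the provider is selected through
dfs.client.failover.proxy.provider.mycluster. A usage sketch (the path is
illustrative):

    Configuration conf = new Configuration();  // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf);
    // Read RPCs may be served by an observer NameNode; writes go to the active.
    FileStatus status = fs.getFileStatus(new Path("/tmp"));
    fs.close();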