[hadoop] branch branch-2.8 updated: HADOOP-15813. Enable more reliable SSL connection reuse. Contributed by Daryn Sharp.

2019-02-20 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-2.8
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.8 by this push:
 new fa0f593  HADOOP-15813. Enable more reliable SSL connection reuse. 
Contributed by Daryn Sharp.
fa0f593 is described below

commit fa0f5933d30d07633548bfd08aa17ca553b8e1b8
Author: Daryn Sharp 
AuthorDate: Wed Feb 20 18:13:53 2019 -0800

HADOOP-15813. Enable more reliable SSL connection reuse. Contributed by 
Daryn Sharp.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit a87e458432609b7a35a2abd6410b02e8a2ffc974)
(cherry picked from commit ae8839e6e8cc3e8f8d5e50525d3302038ada484b)
(cherry picked from commit 704330a616c17256b3e39370f28ba1c463e6)
(cherry picked from commit 4eccf2a3cc6b1468085f48ee267b2093b4f5be9d)
(cherry picked from commit 665cad03f30b1bc400a1991ccfd5053de6d86f6f)
(cherry picked from commit bee718488525e0af013149760e2bac9016f6363c)
---
 .../src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
index 45532bc..8d6cb83 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
@@ -85,6 +85,10 @@ public class SSLFactory implements ConnectionConfigurator {
   private Mode mode;
   private boolean requireClientCert;
   private SSLContext context;
+  // the java keep-alive cache relies on instance equivalence of the SSL socket
+  // factory.  in many java versions, SSLContext#getSocketFactory always
+  // returns a new instance which completely breaks the cache...
+  private SSLSocketFactory socketFactory;
   private HostnameVerifier hostnameVerifier;
   private KeyStoresFactory keystoresFactory;
 
@@ -150,6 +154,9 @@ public class SSLFactory implements ConnectionConfigurator {
 context.init(keystoresFactory.getKeyManagers(),
  keystoresFactory.getTrustManagers(), null);
 context.getDefaultSSLParameters().setProtocols(enabledProtocols);
+if (mode == Mode.CLIENT) {
+  socketFactory = context.getSocketFactory();
+}
 hostnameVerifier = getHostnameVerifier(conf);
   }
 
@@ -268,7 +275,7 @@ public class SSLFactory implements ConnectionConfigurator {
 if (mode != Mode.CLIENT) {
   throw new IllegalStateException("Factory is in CLIENT mode");
 }
-return context.getSocketFactory();
+return socketFactory;
   }
 
   /**

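The comment added in the diff is the heart of the fix: the JDK's HTTP keep-alive cache can only match a pooled connection when later requests present the same SSLSocketFactory instance, so asking the SSLContext for a fresh factory per connection silently disables reuse. A minimal standalone sketch of the pattern the commit adopts (not part of the patch; the URL and class name are illustrative):

    import java.net.URL;
    import javax.net.ssl.HttpsURLConnection;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLSocketFactory;

    public class KeepAliveSketch {
      public static void main(String[] args) throws Exception {
        SSLContext context = SSLContext.getDefault();
        // On many JDKs each call returns a distinct object, which is what
        // breaks keep-alive matching:
        System.out.println(
            context.getSocketFactory() == context.getSocketFactory());

        // Cache one factory, as SSLFactory now does in CLIENT mode, and
        // hand the same instance to every connection:
        SSLSocketFactory factory = context.getSocketFactory();
        for (int i = 0; i < 2; i++) {
          HttpsURLConnection conn = (HttpsURLConnection)
              new URL("https://example.org/").openConnection();
          conn.setSSLSocketFactory(factory);
          conn.getInputStream().close();  // 2nd request may reuse the socket
        }
      }
    }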




[hadoop] branch branch-3.1 updated: HADOOP-15813. Enable more reliable SSL connection reuse. Contributed by Daryn Sharp.

2019-02-20 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 704330a  HADOOP-15813. Enable more reliable SSL connection reuse. 
Contributed by Daryn Sharp.
704330a is described below

commit 704330a616c17256b3e39370f28ba1c463e6
Author: Daryn Sharp 
AuthorDate: Wed Feb 20 18:13:53 2019 -0800

HADOOP-15813. Enable more reliable SSL connection reuse. Contributed by 
Daryn Sharp.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit a87e458432609b7a35a2abd6410b02e8a2ffc974)
(cherry picked from commit ae8839e6e8cc3e8f8d5e50525d3302038ada484b)
---
 .../src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
index f05274a..8e8421b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
@@ -108,6 +108,10 @@ public class SSLFactory implements ConnectionConfigurator {
   private Mode mode;
   private boolean requireClientCert;
   private SSLContext context;
+  // the java keep-alive cache relies on instance equivalence of the SSL socket
+  // factory.  in many java versions, SSLContext#getSocketFactory always
+  // returns a new instance which completely breaks the cache...
+  private SSLSocketFactory socketFactory;
   private HostnameVerifier hostnameVerifier;
   private KeyStoresFactory keystoresFactory;
 
@@ -178,6 +182,9 @@ public class SSLFactory implements ConnectionConfigurator {
 context.init(keystoresFactory.getKeyManagers(),
  keystoresFactory.getTrustManagers(), null);
 context.getDefaultSSLParameters().setProtocols(enabledProtocols);
+if (mode == Mode.CLIENT) {
+  socketFactory = context.getSocketFactory();
+}
 hostnameVerifier = getHostnameVerifier(conf);
   }
 
@@ -298,7 +305,7 @@ public class SSLFactory implements ConnectionConfigurator {
   throw new IllegalStateException(
   "Factory is not in CLIENT mode. Actual mode is " + mode.toString());
 }
-return context.getSocketFactory();
+return socketFactory;
   }
 
   /**





[hadoop] branch branch-2.9 updated: HADOOP-15813. Enable more reliable SSL connection reuse. Contributed by Daryn Sharp.

2019-02-20 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-2.9
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.9 by this push:
 new bee7184  HADOOP-15813. Enable more reliable SSL connection reuse. 
Contributed by Daryn Sharp.
bee7184 is described below

commit bee718488525e0af013149760e2bac9016f6363c
Author: Daryn Sharp 
AuthorDate: Wed Feb 20 18:13:53 2019 -0800

HADOOP-15813. Enable more reliable SSL connection reuse. Contributed by 
Daryn Sharp.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit a87e458432609b7a35a2abd6410b02e8a2ffc974)
(cherry picked from commit ae8839e6e8cc3e8f8d5e50525d3302038ada484b)
(cherry picked from commit 704330a616c17256b3e39370f28ba1c463e6)
(cherry picked from commit 4eccf2a3cc6b1468085f48ee267b2093b4f5be9d)
(cherry picked from commit 665cad03f30b1bc400a1991ccfd5053de6d86f6f)
---
 .../src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
index 16b6784..8825965 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
@@ -85,6 +85,10 @@ public class SSLFactory implements ConnectionConfigurator {
   private Mode mode;
   private boolean requireClientCert;
   private SSLContext context;
+  // the java keep-alive cache relies on instance equivalence of the SSL socket
+  // factory.  in many java versions, SSLContext#getSocketFactory always
+  // returns a new instance which completely breaks the cache...
+  private SSLSocketFactory socketFactory;
   private HostnameVerifier hostnameVerifier;
   private KeyStoresFactory keystoresFactory;
 
@@ -150,6 +154,9 @@ public class SSLFactory implements ConnectionConfigurator {
 context.init(keystoresFactory.getKeyManagers(),
  keystoresFactory.getTrustManagers(), null);
 context.getDefaultSSLParameters().setProtocols(enabledProtocols);
+if (mode == Mode.CLIENT) {
+  socketFactory = context.getSocketFactory();
+}
 hostnameVerifier = getHostnameVerifier(conf);
   }
 
@@ -270,7 +277,7 @@ public class SSLFactory implements ConnectionConfigurator {
   throw new IllegalStateException(
   "Factory is not in CLIENT mode. Actual mode is " + mode.toString());
 }
-return context.getSocketFactory();
+return socketFactory;
   }
 
   /**





[hadoop] branch branch-2 updated: HADOOP-15813. Enable more reliable SSL connection reuse. Contributed by Daryn Sharp.

2019-02-20 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 665cad0  HADOOP-15813. Enable more reliable SSL connection reuse. 
Contributed by Daryn Sharp.
665cad0 is described below

commit 665cad03f30b1bc400a1991ccfd5053de6d86f6f
Author: Daryn Sharp 
AuthorDate: Wed Feb 20 18:13:53 2019 -0800

HADOOP-15813. Enable more reliable SSL connection reuse. Contributed by 
Daryn Sharp.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit a87e458432609b7a35a2abd6410b02e8a2ffc974)
(cherry picked from commit ae8839e6e8cc3e8f8d5e50525d3302038ada484b)
(cherry picked from commit 704330a616c17256b3e39370f28ba1c463e6)
(cherry picked from commit 4eccf2a3cc6b1468085f48ee267b2093b4f5be9d)
---
 .../src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
index 16b6784..8825965 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
@@ -85,6 +85,10 @@ public class SSLFactory implements ConnectionConfigurator {
   private Mode mode;
   private boolean requireClientCert;
   private SSLContext context;
+  // the java keep-alive cache relies on instance equivalence of the SSL socket
+  // factory.  in many java versions, SSLContext#getSocketFactory always
+  // returns a new instance which completely breaks the cache...
+  private SSLSocketFactory socketFactory;
   private HostnameVerifier hostnameVerifier;
   private KeyStoresFactory keystoresFactory;
 
@@ -150,6 +154,9 @@ public class SSLFactory implements ConnectionConfigurator {
 context.init(keystoresFactory.getKeyManagers(),
  keystoresFactory.getTrustManagers(), null);
 context.getDefaultSSLParameters().setProtocols(enabledProtocols);
+if (mode == Mode.CLIENT) {
+  socketFactory = context.getSocketFactory();
+}
 hostnameVerifier = getHostnameVerifier(conf);
   }
 
@@ -270,7 +277,7 @@ public class SSLFactory implements ConnectionConfigurator {
   throw new IllegalStateException(
   "Factory is not in CLIENT mode. Actual mode is " + mode.toString());
 }
-return context.getSocketFactory();
+return socketFactory;
   }
 
   /**





[hadoop] branch branch-3.0 updated: HADOOP-15813. Enable more reliable SSL connection reuse. Contributed by Daryn Sharp.

2019-02-20 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new 4eccf2a  HADOOP-15813. Enable more reliable SSL connection reuse. 
Contributed by Daryn Sharp.
4eccf2a is described below

commit 4eccf2a3cc6b1468085f48ee267b2093b4f5be9d
Author: Daryn Sharp 
AuthorDate: Wed Feb 20 18:13:53 2019 -0800

HADOOP-15813. Enable more reliable SSL connection reuse. Contributed by 
Daryn Sharp.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit a87e458432609b7a35a2abd6410b02e8a2ffc974)
(cherry picked from commit ae8839e6e8cc3e8f8d5e50525d3302038ada484b)
(cherry picked from commit 704330a616c17256b3e39370f28ba1c463e6)
---
 .../src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
index f05274a..8e8421b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
@@ -108,6 +108,10 @@ public class SSLFactory implements ConnectionConfigurator {
   private Mode mode;
   private boolean requireClientCert;
   private SSLContext context;
+  // the java keep-alive cache relies on instance equivalence of the SSL socket
+  // factory.  in many java versions, SSLContext#getSocketFactory always
+  // returns a new instance which completely breaks the cache...
+  private SSLSocketFactory socketFactory;
   private HostnameVerifier hostnameVerifier;
   private KeyStoresFactory keystoresFactory;
 
@@ -178,6 +182,9 @@ public class SSLFactory implements ConnectionConfigurator {
 context.init(keystoresFactory.getKeyManagers(),
  keystoresFactory.getTrustManagers(), null);
 context.getDefaultSSLParameters().setProtocols(enabledProtocols);
+if (mode == Mode.CLIENT) {
+  socketFactory = context.getSocketFactory();
+}
 hostnameVerifier = getHostnameVerifier(conf);
   }
 
@@ -298,7 +305,7 @@ public class SSLFactory implements ConnectionConfigurator {
   throw new IllegalStateException(
   "Factory is not in CLIENT mode. Actual mode is " + mode.toString());
 }
-return context.getSocketFactory();
+return socketFactory;
   }
 
   /**





[hadoop] branch branch-3.2 updated: HADOOP-15813. Enable more reliable SSL connection reuse. Contributed by Daryn Sharp.

2019-02-20 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new ae8839e  HADOOP-15813. Enable more reliable SSL connection reuse. 
Contributed by Daryn Sharp.
ae8839e is described below

commit ae8839e6e8cc3e8f8d5e50525d3302038ada484b
Author: Daryn Sharp 
AuthorDate: Wed Feb 20 18:13:53 2019 -0800

HADOOP-15813. Enable more reliable SSL connection reuse. Contributed by 
Daryn Sharp.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit a87e458432609b7a35a2abd6410b02e8a2ffc974)
---
 .../src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
index f05274a..8e8421b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
@@ -108,6 +108,10 @@ public class SSLFactory implements ConnectionConfigurator {
   private Mode mode;
   private boolean requireClientCert;
   private SSLContext context;
+  // the java keep-alive cache relies on instance equivalence of the SSL socket
+  // factory.  in many java versions, SSLContext#getSocketFactory always
+  // returns a new instance which completely breaks the cache...
+  private SSLSocketFactory socketFactory;
   private HostnameVerifier hostnameVerifier;
   private KeyStoresFactory keystoresFactory;
 
@@ -178,6 +182,9 @@ public class SSLFactory implements ConnectionConfigurator {
 context.init(keystoresFactory.getKeyManagers(),
  keystoresFactory.getTrustManagers(), null);
 context.getDefaultSSLParameters().setProtocols(enabledProtocols);
+if (mode == Mode.CLIENT) {
+  socketFactory = context.getSocketFactory();
+}
 hostnameVerifier = getHostnameVerifier(conf);
   }
 
@@ -298,7 +305,7 @@ public class SSLFactory implements ConnectionConfigurator {
   throw new IllegalStateException(
   "Factory is not in CLIENT mode. Actual mode is " + mode.toString());
 }
-return context.getSocketFactory();
+return socketFactory;
   }
 
   /**





[hadoop] branch trunk updated: HADOOP-15813. Enable more reliable SSL connection reuse. Contributed by Daryn Sharp.

2019-02-20 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a87e458  HADOOP-15813. Enable more reliable SSL connection reuse. 
Contributed by Daryn Sharp.
a87e458 is described below

commit a87e458432609b7a35a2abd6410b02e8a2ffc974
Author: Daryn Sharp 
AuthorDate: Wed Feb 20 18:13:53 2019 -0800

HADOOP-15813. Enable more reliable SSL connection reuse. Contributed by 
Daryn Sharp.

Signed-off-by: Wei-Chiu Chuang 
---
 .../src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
index 3189b44..a7548aa 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
@@ -108,6 +108,10 @@ public class SSLFactory implements ConnectionConfigurator {
   private Mode mode;
   private boolean requireClientCert;
   private SSLContext context;
+  // the java keep-alive cache relies on instance equivalence of the SSL socket
+  // factory.  in many java versions, SSLContext#getSocketFactory always
+  // returns a new instance which completely breaks the cache...
+  private SSLSocketFactory socketFactory;
   private HostnameVerifier hostnameVerifier;
   private KeyStoresFactory keystoresFactory;
 
@@ -178,6 +182,9 @@ public class SSLFactory implements ConnectionConfigurator {
 context.init(keystoresFactory.getKeyManagers(),
  keystoresFactory.getTrustManagers(), null);
 context.getDefaultSSLParameters().setProtocols(enabledProtocols);
+if (mode == Mode.CLIENT) {
+  socketFactory = context.getSocketFactory();
+}
 hostnameVerifier = getHostnameVerifier(conf);
   }
 
@@ -298,7 +305,7 @@ public class SSLFactory implements ConnectionConfigurator {
   throw new IllegalStateException(
   "Factory is not in CLIENT mode. Actual mode is " + mode.toString());
 }
-return context.getSocketFactory();
+return socketFactory;
   }
 
   /**





[hadoop] branch trunk updated: HDFS-14273. Fix checkstyle issues in BlockLocation's method javadoc (Contributed by Shweta Yakkali via Daniel Templeton)

2019-02-20 Thread templedf
This is an automated email from the ASF dual-hosted git repository.

templedf pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 371a6db  HDFS-14273. Fix checkstyle issues in BlockLocation's method 
javadoc (Contributed by Shweta Yakkali via Daniel Templeton)
371a6db is described below

commit 371a6db59ad8891cf0b5101fadee74d31ea2a895
Author: Shweta Yakkali 
AuthorDate: Wed Feb 20 15:36:14 2019 -0800

HDFS-14273. Fix checkstyle issues in BlockLocation's method javadoc
(Contributed by Shweta Yakkali via Daniel Templeton)

Change-Id: I546aa4a0fe7f83b53735acd9925f366b2f1a00e2
---
 .../java/org/apache/hadoop/fs/BlockLocation.java   | 36 +++---
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockLocation.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockLocation.java
index 37f0309..c6dde52 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockLocation.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BlockLocation.java
@@ -77,14 +77,14 @@ public class BlockLocation implements Serializable {
   new StorageType[0];
 
   /**
-   * Default Constructor
+   * Default Constructor.
*/
   public BlockLocation() {
 this(EMPTY_STR_ARRAY, EMPTY_STR_ARRAY, 0L, 0L);
   }
 
   /**
-   * Copy constructor
+   * Copy constructor.
*/
   public BlockLocation(BlockLocation that) {
 this.hosts = that.hosts;
@@ -99,7 +99,7 @@ public class BlockLocation implements Serializable {
   }
 
   /**
-   * Constructor with host, name, offset and length
+   * Constructor with host, name, offset and length.
*/
   public BlockLocation(String[] names, String[] hosts, long offset, 
long length) {
@@ -107,7 +107,7 @@ public class BlockLocation implements Serializable {
   }
 
   /**
-   * Constructor with host, name, offset, length and corrupt flag
+   * Constructor with host, name, offset, length and corrupt flag.
*/
   public BlockLocation(String[] names, String[] hosts, long offset, 
long length, boolean corrupt) {
@@ -115,7 +115,7 @@ public class BlockLocation implements Serializable {
   }
 
   /**
-   * Constructor with host, name, network topology, offset and length
+   * Constructor with host, name, network topology, offset and length.
*/
   public BlockLocation(String[] names, String[] hosts, String[] topologyPaths,
long offset, long length) {
@@ -124,7 +124,7 @@ public class BlockLocation implements Serializable {
 
   /**
* Constructor with host, name, network topology, offset, length 
-   * and corrupt flag
+   * and corrupt flag.
*/
   public BlockLocation(String[] names, String[] hosts, String[] topologyPaths,
long offset, long length, boolean corrupt) {
@@ -176,21 +176,21 @@ public class BlockLocation implements Serializable {
   }
 
   /**
-   * Get the list of hosts (hostname) hosting this block
+   * Get the list of hosts (hostname) hosting this block.
*/
   public String[] getHosts() throws IOException {
 return hosts;
   }
 
   /**
-   * Get the list of hosts (hostname) hosting a cached replica of the block
+   * Get the list of hosts (hostname) hosting a cached replica of the block.
*/
   public String[] getCachedHosts() {
-   return cachedHosts;
+return cachedHosts;
   }
 
   /**
-   * Get the list of names (IP:xferPort) hosting this block
+   * Get the list of names (IP:xferPort) hosting this block.
*/
   public String[] getNames() throws IOException {
 return names;
@@ -219,14 +219,14 @@ public class BlockLocation implements Serializable {
   }
 
   /**
-   * Get the start offset of file associated with this block
+   * Get the start offset of file associated with this block.
*/
   public long getOffset() {
 return offset;
   }
   
   /**
-   * Get the length of the block
+   * Get the length of the block.
*/
   public long getLength() {
 return length;
@@ -247,14 +247,14 @@ public class BlockLocation implements Serializable {
   }
 
   /**
-   * Set the start offset of file associated with this block
+   * Set the start offset of file associated with this block.
*/
   public void setOffset(long offset) {
 this.offset = offset;
   }
 
   /**
-   * Set the length of block
+   * Set the length of block.
*/
   public void setLength(long length) {
 this.length = length;
@@ -268,7 +268,7 @@ public class BlockLocation implements Serializable {
   }
 
   /**
-   * Set the hosts hosting this block
+   * Set the hosts hosting this block.
*/
   public void setHosts(String[] hosts) throws IOException {
 if (hosts == null) {
@@ -279,7 +279,7 @@ public class BlockLocation implements Serializable {
   

[hadoop] branch branch-2.8 updated: HDFS-14219. ConcurrentModificationException occurs in datanode occasionally. Contributed by Tao Jie.

2019-02-20 Thread arp
This is an automated email from the ASF dual-hosted git repository.

arp pushed a commit to branch branch-2.8
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.8 by this push:
 new 3c145a7  HDFS-14219. ConcurrentModificationException occurs in 
datanode occasionally. Contributed by Tao Jie.
3c145a7 is described below

commit 3c145a7ef0171ef248d7916c366778de95bc0d4b
Author: Arpit Agarwal 
AuthorDate: Wed Feb 20 15:04:33 2019 -0800

HDFS-14219. ConcurrentModificationException occurs in datanode 
occasionally. Contributed by Tao Jie.
---
 .../hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index 36d68d0..b14f9e9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -1876,7 +1876,7 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 new HashMap<String, BlockListAsLongs.Builder>();
 
 List<FsVolumeImpl> curVolumes = null;
-synchronized(this) {
+try (AutoCloseableLock lock = datasetLock.acquire()) {
   curVolumes = volumes.getVolumes();
   for (FsVolumeSpi v : curVolumes) {
 builders.put(v.getStorageID(), 
BlockListAsLongs.builder(maxDataLength));

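The change swaps a synchronized(this) block for Hadoop's AutoCloseableLock so that block-report assembly holds the same datasetLock as the mutating paths it previously raced against. A sketch of the try-with-resources idiom, assuming only that AutoCloseableLock behaves as in hadoop-common (the surrounding class and method body are illustrative):

    import org.apache.hadoop.util.AutoCloseableLock;

    class DatasetLockSketch {
      private final AutoCloseableLock datasetLock = new AutoCloseableLock();

      void scanVolumes() {
        // acquire() takes the lock and returns it; the implicit finally of
        // try-with-resources releases it even if the body throws.
        try (AutoCloseableLock lock = datasetLock.acquire()) {
          // Iterate volume state here under the same lock the writers use,
          // avoiding the ConcurrentModificationException seen when readers
          // synchronized on a different monitor.
        }
      }
    }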




[hadoop] branch trunk updated: HDDS-1109. Setup Failover Proxy Provider for OM client.

2019-02-20 Thread hanishakoneru
This is an automated email from the ASF dual-hosted git repository.

hanishakoneru pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b1397ff  HDDS-1109. Setup Failover Proxy Provider for OM client.
b1397ff is described below

commit b1397ff9e4717a3397ba8606b505d7b8e36c2eb2
Author: Hanisha Koneru 
AuthorDate: Wed Feb 20 14:49:59 2019 -0800

HDDS-1109. Setup Failover Proxy Provider for OM client.
---
 .../apache/hadoop/ozone/client/ObjectStore.java|   5 +
 .../ozone/client/protocol/ClientProtocol.java  |   5 +
 .../hadoop/ozone/client/rest/RestClient.java   |   6 +
 .../apache/hadoop/ozone/client/rpc/RpcClient.java  |  23 ++-
 .../hadoop/ozone/client/rpc/ha/OMProxyInfo.java|  49 ++
 .../ozone/client/rpc/ha/OMProxyProvider.java   | 177 +
 .../hadoop/ozone/client/rpc/ha/package-info.java   |  23 +++
 .../client/rpc/TestOzoneRpcClientAbstract.java |  19 +++
 .../apache/hadoop/ozone/om/TestOzoneManagerHA.java |  44 -
 9 files changed, 336 insertions(+), 15 deletions(-)

diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
index a6fb818..aa7cb4f 100644
--- 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
+++ 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
@@ -72,6 +72,11 @@ public class ObjectStore {
 proxy = null;
   }
 
+  @VisibleForTesting
+  public ClientProtocol getClientProxy() {
+return proxy;
+  }
+
   /**
* Creates the volume with default values.
* @param volumeName Name of the volume to be created.
diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
index ef710d5..494afae 100644
--- 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
+++ 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.ozone.client.protocol;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.hdds.protocol.StorageType;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.io.Text;
@@ -28,6 +29,7 @@ import org.apache.hadoop.hdds.client.ReplicationFactor;
 import org.apache.hadoop.hdds.client.ReplicationType;
 import org.apache.hadoop.ozone.client.io.OzoneInputStream;
 import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
+import org.apache.hadoop.ozone.client.rpc.ha.OMProxyProvider;
 import org.apache.hadoop.ozone.om.helpers.OmMultipartInfo;
 import org.apache.hadoop.ozone.om.helpers.OmMultipartUploadCompleteInfo;
 
@@ -506,4 +508,7 @@ public interface ClientProtocol {
* @throws IOException
*/
   S3SecretValue getS3Secret(String kerberosID) throws IOException;
+
+  @VisibleForTesting
+  OMProxyProvider getOMProxyProvider();
 }
diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/RestClient.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/RestClient.java
index ba21ca7..b69d972 100644
--- 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/RestClient.java
+++ 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/RestClient.java
@@ -42,6 +42,7 @@ import org.apache.hadoop.ozone.client.rest.headers.Header;
 import org.apache.hadoop.ozone.client.rest.response.BucketInfo;
 import org.apache.hadoop.ozone.client.rest.response.KeyInfoDetails;
 import org.apache.hadoop.ozone.client.rest.response.VolumeInfo;
+import org.apache.hadoop.ozone.client.rpc.ha.OMProxyProvider;
 import org.apache.hadoop.ozone.om.OMConfigKeys;
 import org.apache.hadoop.ozone.om.helpers.OmMultipartInfo;
 import org.apache.hadoop.ozone.om.helpers.OmMultipartUploadCompleteInfo;
@@ -724,6 +725,11 @@ public class RestClient implements ClientProtocol {
   }
 
   @Override
+  public OMProxyProvider getOMProxyProvider() {
+return null;
+  }
+
+  @Override
   public OzoneInputStream getKey(
   String volumeName, String bucketName, String keyName)
   throws IOException {
diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
index fec0530..2c38569 100644
--- 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
+++ 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.ozone.client.rpc;
 
+import com.google.common.annotations.VisibleForTesting;
 import 

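The new OMProxyProvider follows Hadoop's generic client-failover pattern: a FailoverProxyProvider hands out the proxy for the OM currently being tried, and the retry machinery calls performFailover to advance to the next one. A compressed sketch of that pattern (the interface is Hadoop's org.apache.hadoop.io.retry.FailoverProxyProvider; the round-robin details here are illustrative, not the committed OM code):

    import java.io.IOException;
    import java.util.List;
    import org.apache.hadoop.io.retry.FailoverProxyProvider;

    abstract class RoundRobinProxyProvider<T>
        implements FailoverProxyProvider<T> {
      private final List<ProxyInfo<T>> proxies;
      private int current = 0;

      RoundRobinProxyProvider(List<ProxyInfo<T>> proxies) {
        this.proxies = proxies;
      }

      @Override
      public ProxyInfo<T> getProxy() {
        return proxies.get(current);   // proxy for the OM currently tried
      }

      @Override
      public void performFailover(T currentProxy) {
        current = (current + 1) % proxies.size();  // advance to the next OM
      }

      @Override
      public void close() throws IOException {
        // release RPC resources for each cached proxy here
      }
    }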
[hadoop] branch trunk updated: HDFS-14081. hdfs dfsadmin -metasave metasave_test results NPE. Contributed by Shweta Yakkali.

2019-02-20 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 1bea785  HDFS-14081. hdfs dfsadmin -metasave metasave_test results 
NPE. Contributed by Shweta Yakkali.
1bea785 is described below

commit 1bea785020a538115b3e08f41ff88167033d2775
Author: Shweta Yakkali 
AuthorDate: Wed Feb 20 14:28:37 2019 -0800

HDFS-14081. hdfs dfsadmin -metasave metasave_test results NPE. Contributed 
by Shweta Yakkali.

Signed-off-by: Wei-Chiu Chuang 
---
 .../hdfs/server/blockmanagement/BlockManager.java  | 13 +---
 .../hadoop/hdfs/server/namenode/FSNamesystem.java  |  4 +--
 .../org/apache/hadoop/hdfs/tools/DFSAdmin.java | 14 +++--
 .../server/blockmanagement/TestBlockManager.java   | 35 ++
 .../hadoop/hdfs/tools/TestDFSAdminWithHA.java  | 12 ++--
 5 files changed, 67 insertions(+), 11 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 740f9ca..6d142f9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -839,8 +839,13 @@ public class BlockManager implements BlockStatsMXBean {
   new ArrayList();
 
 NumberReplicas numReplicas = new NumberReplicas();
+BlockInfo blockInfo = getStoredBlock(block);
+if (blockInfo == null) {
+  out.println("Block "+ block + " is Null");
+  return;
+}
 // source node returned is not used
-chooseSourceDatanodes(getStoredBlock(block), containingNodes,
+chooseSourceDatanodes(blockInfo, containingNodes,
 containingLiveReplicasNodes, numReplicas,
 new ArrayList(), LowRedundancyBlocks.LEVEL);
 
@@ -849,7 +854,7 @@ public class BlockManager implements BlockStatsMXBean {
 assert containingLiveReplicasNodes.size() >= numReplicas.liveReplicas();
 int usableReplicas = numReplicas.liveReplicas() +
  numReplicas.decommissionedAndDecommissioning();
-
+
 if (block instanceof BlockInfo) {
   BlockCollection bc = getBlockCollection((BlockInfo)block);
   String fileName = (bc == null) ? "[orphaned]" : bc.getName();
@@ -1765,8 +1770,8 @@ public class BlockManager implements BlockStatsMXBean {
 this.shouldPostponeBlocksFromFuture  = postpone;
   }
 
-
-  private void postponeBlock(Block blk) {
+  @VisibleForTesting
+  void postponeBlock(Block blk) {
 postponedMisreplicatedBlocks.add(blk);
   }
   
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 8659ea4..d0fdbac 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -1768,10 +1768,10 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   void metaSave(String filename) throws IOException {
 String operationName = "metaSave";
 checkSuperuserPrivilege(operationName);
-checkOperation(OperationCategory.UNCHECKED);
+checkOperation(OperationCategory.READ);
 writeLock();
 try {
-  checkOperation(OperationCategory.UNCHECKED);
+  checkOperation(OperationCategory.READ);
   File file = new File(System.getProperty("hadoop.log.dir"), filename);
   PrintWriter out = new PrintWriter(new BufferedWriter(
   new OutputStreamWriter(new FileOutputStream(file), Charsets.UTF_8)));
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
index aa67e72..d1f8362 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
@@ -94,6 +94,7 @@ import org.apache.hadoop.ipc.RefreshResponse;
 import org.apache.hadoop.ipc.RemoteException;
 import 
org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolClientSideTranslatorPB;
 import org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolPB;
+import org.apache.hadoop.ipc.StandbyException;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.RefreshUserMappingsProtocol;
 import org.apache.hadoop.security.SecurityUtil;
@@ -1537,11 +1538,20 @@ public class DFSAdmin extends FsShell {
   

[hadoop] 11/41: HDFS-14089. RBF: Failed to specify server's Kerberos pricipal name in NamenodeHeartbeatService. Contributed by Ranith Sardar.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 3f12355bb3a7a89901b966bcab0556a1d6bf9e23
Author: Brahma Reddy Battula 
AuthorDate: Thu Nov 22 08:26:22 2018 +0530

HDFS-14089. RBF: Failed to specify server's Kerberos pricipal name in 
NamenodeHeartbeatService. Contributed by Ranith Sardar.
---
 .../hdfs/server/federation/router/NamenodeHeartbeatService.java | 3 ++-
 .../java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java | 6 --
 2 files changed, 2 insertions(+), 7 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
index 1349aa3..871ebaf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
@@ -38,6 +38,7 @@ import 
org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.NamenodeStatusReport;
 import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol;
 import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;
+import org.apache.hadoop.hdfs.tools.DFSHAAdmin;
 import org.apache.hadoop.hdfs.tools.NNHAServiceTarget;
 import org.codehaus.jettison.json.JSONArray;
 import org.codehaus.jettison.json.JSONObject;
@@ -108,7 +109,7 @@ public class NamenodeHeartbeatService extends 
PeriodicService {
   @Override
   protected void serviceInit(Configuration configuration) throws Exception {
 
-this.conf = configuration;
+this.conf = DFSHAAdmin.addSecurityConfiguration(configuration);
 
 String nnDesc = nameserviceId;
 if (this.namenodeId != null && !this.namenodeId.isEmpty()) {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java
index deb6ace..100313e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/SecurityConfUtil.java
@@ -14,8 +14,6 @@
 
 package org.apache.hadoop.fs.contract.router;
 
-import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION;
-import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_SERVICE_USER_NAME_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_ACCESS_TOKEN_ENABLE_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_CLIENT_HTTPS_KEYSTORE_RESOURCE_KEY;
 import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_HTTPS_ADDRESS_KEY;
@@ -109,10 +107,6 @@ public final class SecurityConfUtil {
 spnegoPrincipal =
 SPNEGO_USER_NAME + "/" + krbInstance + "@" + kdc.getRealm();
 
-// Set auth configuration for mini DFS
-conf.set(HADOOP_SECURITY_AUTHENTICATION, "kerberos");
-conf.set(HADOOP_SECURITY_SERVICE_USER_NAME_KEY, routerPrincipal);
-
 // Setup principles and keytabs for dfs
 conf.set(DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, routerPrincipal);
 conf.set(DFS_NAMENODE_KEYTAB_FILE_KEY, keytab);

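The one-line fix wraps the heartbeat service's configuration with DFSHAAdmin.addSecurityConfiguration(...), which copies the NameNode's Kerberos principal into the key the RPC client checks when validating the server's principal. A sketch of the effect, assuming a secured HDFS configuration (the principal value is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.tools.DFSHAAdmin;

    public class HeartbeatConfSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("dfs.namenode.kerberos.principal", "nn/_HOST@EXAMPLE.COM");

        // Without this step the heartbeat RPC fails with "Failed to specify
        // server's Kerberos principal name"; with it, the client knows which
        // server principal to expect.
        Configuration secured = DFSHAAdmin.addSecurityConfiguration(conf);
        System.out.println(
            secured.get("hadoop.security.service.user.name.key"));
      }
    }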




[hadoop] 17/41: HDFS-13869. RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics. Contributed by Ranith Sardar.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit ce9351ab83fc22db184850da57f219e646f1b0a9
Author: Yiqun Lin 
AuthorDate: Mon Dec 17 12:35:07 2018 +0800

HDFS-13869. RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics. 
Contributed by Ranith Sardar.
---
 .../federation/metrics/NamenodeBeanMetrics.java| 149 ++---
 .../hdfs/server/federation/router/Router.java  |   8 +-
 .../hdfs/server/federation/router/TestRouter.java  |  14 ++
 3 files changed, 147 insertions(+), 24 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
index 64df10c..25ec27c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
@@ -168,8 +168,12 @@ public class NamenodeBeanMetrics
 }
   }
 
-  private FederationMetrics getFederationMetrics() {
-return this.router.getMetrics();
+  private FederationMetrics getFederationMetrics() throws IOException {
+FederationMetrics metrics = getRouter().getMetrics();
+if (metrics == null) {
+  throw new IOException("Federated metrics is not initialized");
+}
+return metrics;
   }
 
   /
@@ -188,22 +192,42 @@ public class NamenodeBeanMetrics
 
   @Override
   public long getUsed() {
-return getFederationMetrics().getUsedCapacity();
+try {
+  return getFederationMetrics().getUsedCapacity();
+} catch (IOException e) {
+  LOG.debug("Failed to get the used capacity", e.getMessage());
+}
+return 0;
   }
 
   @Override
   public long getFree() {
-return getFederationMetrics().getRemainingCapacity();
+try {
+  return getFederationMetrics().getRemainingCapacity();
+} catch (IOException e) {
+  LOG.debug("Failed to get remaining capacity", e.getMessage());
+}
+return 0;
   }
 
   @Override
   public long getTotal() {
-return getFederationMetrics().getTotalCapacity();
+try {
+  return getFederationMetrics().getTotalCapacity();
+} catch (IOException e) {
+  LOG.debug("Failed to Get total capacity", e.getMessage());
+}
+return 0;
   }
 
   @Override
   public long getProvidedCapacity() {
-return getFederationMetrics().getProvidedSpace();
+try {
+  return getFederationMetrics().getProvidedSpace();
+} catch (IOException e) {
+  LOG.debug("Failed to get provided capacity", e.getMessage());
+}
+return 0;
   }
 
   @Override
@@ -261,39 +285,79 @@ public class NamenodeBeanMetrics
 
   @Override
   public long getTotalBlocks() {
-return getFederationMetrics().getNumBlocks();
+try {
+  return getFederationMetrics().getNumBlocks();
+} catch (IOException e) {
+  LOG.debug("Failed to get number of blocks", e.getMessage());
+}
+return 0;
   }
 
   @Override
   public long getNumberOfMissingBlocks() {
-return getFederationMetrics().getNumOfMissingBlocks();
+try {
+  return getFederationMetrics().getNumOfMissingBlocks();
+} catch (IOException e) {
+  LOG.debug("Failed to get number of missing blocks", e.getMessage());
+}
+return 0;
   }
 
   @Override
   @Deprecated
   public long getPendingReplicationBlocks() {
-return getFederationMetrics().getNumOfBlocksPendingReplication();
+try {
+  return getFederationMetrics().getNumOfBlocksPendingReplication();
+} catch (IOException e) {
+  LOG.debug("Failed to get number of blocks pending replica",
+  e.getMessage());
+}
+return 0;
   }
 
   @Override
   public long getPendingReconstructionBlocks() {
-return getFederationMetrics().getNumOfBlocksPendingReplication();
+try {
+  return getFederationMetrics().getNumOfBlocksPendingReplication();
+} catch (IOException e) {
+  LOG.debug("Failed to get number of blocks pending replica",
+  e.getMessage());
+}
+return 0;
   }
 
   @Override
   @Deprecated
   public long getUnderReplicatedBlocks() {
-return getFederationMetrics().getNumOfBlocksUnderReplicated();
+try {
+  return getFederationMetrics().getNumOfBlocksUnderReplicated();
+} catch (IOException e) {
+  LOG.debug("Failed to get number of blocks under replicated",
+  e.getMessage());
+}
+return 0;
   }
 
   @Override
   public long getLowRedundancyBlocks() {
-return getFederationMetrics().getNumOfBlocksUnderReplicated();
+try {
+  return getFederationMetrics().getNumOfBlocksUnderReplicated();
+} catch (IOException e) {

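Each bean getter now degrades to a default value instead of letting an unready Router break the whole MBean. The repeated try/catch could be factored through a small helper; a hypothetical sketch (not in the patch):

    import java.io.IOException;

    class MetricGuardSketch {
      @FunctionalInterface
      interface MetricSupplier {
        long get() throws IOException;
      }

      // Run one metric lookup, falling back to 0 while the Router's
      // federation metrics are not yet initialized.
      static long getOrZero(MetricSupplier supplier) {
        try {
          return supplier.get();
        } catch (IOException e) {
          return 0;
        }
      }
    }

Each override would then collapse to a single line, e.g. return getOrZero(() -> getFederationMetrics().getUsedCapacity());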
[hadoop] 10/41: HDFS-13776. RBF: Add Storage policies related ClientProtocol APIs. Contributed by Dibyendu Karmakar.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 53b69da61041afb6807f1a79b2f9fa4bd6901c38
Author: Brahma Reddy Battula 
AuthorDate: Thu Nov 22 00:34:08 2018 +0530

HDFS-13776. RBF: Add Storage policies related ClientProtocol APIs. 
Contributed by Dibyendu Karmakar.
---
 .../federation/router/RouterClientProtocol.java|  24 ++--
 .../federation/router/RouterStoragePolicy.java |  98 ++
 .../server/federation/MiniRouterDFSCluster.java|  13 ++
 .../server/federation/router/TestRouterRpc.java|  57 
 .../TestRouterRpcStoragePolicySatisfier.java   | 149 +
 5 files changed, 325 insertions(+), 16 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 6c44362..81717ca 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -121,6 +121,8 @@ public class RouterClientProtocol implements ClientProtocol 
{
   private final String superGroup;
   /** Erasure coding calls. */
   private final ErasureCoding erasureCoding;
+  /** StoragePolicy calls. **/
+  private final RouterStoragePolicy storagePolicy;
 
   RouterClientProtocol(Configuration conf, RouterRpcServer rpcServer) {
 this.rpcServer = rpcServer;
@@ -138,6 +140,7 @@ public class RouterClientProtocol implements ClientProtocol 
{
 DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROUP_KEY,
 DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROUP_DEFAULT);
 this.erasureCoding = new ErasureCoding(rpcServer);
+this.storagePolicy = new RouterStoragePolicy(rpcServer);
   }
 
   @Override
@@ -272,22 +275,12 @@ public class RouterClientProtocol implements 
ClientProtocol {
   @Override
   public void setStoragePolicy(String src, String policyName)
   throws IOException {
-rpcServer.checkOperation(NameNode.OperationCategory.WRITE);
-
-List locations = rpcServer.getLocationsForPath(src, true);
-RemoteMethod method = new RemoteMethod("setStoragePolicy",
-new Class[] {String.class, String.class},
-new RemoteParam(), policyName);
-rpcClient.invokeSequential(locations, method, null, null);
+storagePolicy.setStoragePolicy(src, policyName);
   }
 
   @Override
   public BlockStoragePolicy[] getStoragePolicies() throws IOException {
-rpcServer.checkOperation(NameNode.OperationCategory.READ);
-
-RemoteMethod method = new RemoteMethod("getStoragePolicies");
-String ns = subclusterResolver.getDefaultNamespace();
-return (BlockStoragePolicy[]) rpcClient.invokeSingle(ns, method);
+return storagePolicy.getStoragePolicies();
   }
 
   @Override
@@ -1457,13 +1450,12 @@ public class RouterClientProtocol implements 
ClientProtocol {
 
   @Override
   public void unsetStoragePolicy(String src) throws IOException {
-rpcServer.checkOperation(NameNode.OperationCategory.WRITE, false);
+storagePolicy.unsetStoragePolicy(src);
   }
 
   @Override
   public BlockStoragePolicy getStoragePolicy(String path) throws IOException {
-rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
-return null;
+return storagePolicy.getStoragePolicy(path);
   }
 
   @Override
@@ -1551,7 +1543,7 @@ public class RouterClientProtocol implements 
ClientProtocol {
 
   @Override
   public void satisfyStoragePolicy(String path) throws IOException {
-rpcServer.checkOperation(NameNode.OperationCategory.WRITE, false);
+storagePolicy.satisfyStoragePolicy(path);
   }
 
   @Override
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStoragePolicy.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStoragePolicy.java
new file mode 100644
index 000..7145940
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStoragePolicy.java
@@ -0,0 +1,98 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, 

[hadoop] 29/41: HDFS-14156. RBF: rollEdit() command fails with Router. Contributed by Shubham Dewan.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 11210e76b2bc4538cd2d8432493b46050f809f69
Author: Inigo Goiri 
AuthorDate: Sat Jan 19 15:23:15 2019 -0800

HDFS-14156. RBF: rollEdit() command fails with Router. Contributed by 
Shubham Dewan.
---
 .../federation/router/RouterClientProtocol.java|   2 +-
 .../server/federation/router/RouterRpcClient.java  |   4 +-
 .../server/federation/router/TestRouterRpc.java|  27 +++
 .../federation/router/TestRouterRpcSingleNS.java   | 211 +
 4 files changed, 241 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index c41959e..09f7e5f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -869,7 +869,7 @@ public class RouterClientProtocol implements ClientProtocol 
{
 rpcServer.checkOperation(NameNode.OperationCategory.UNCHECKED);
 
 RemoteMethod method = new RemoteMethod("saveNamespace",
-new Class<?>[] {Long.class, Long.class}, timeWindow, txGap);
+new Class<?>[] {long.class, long.class}, timeWindow, txGap);
 final Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
 Map<FederationNamespaceInfo, Boolean> ret =
 rpcClient.invokeConcurrent(nss, method, true, false, boolean.class);
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
index c4d3a20..0b15333 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
@@ -1045,7 +1045,7 @@ public class RouterRpcClient {
  Class<?> proto = method.getProtocol();
   Object[] paramList = method.getParams(location);
   Object result = invokeMethod(ugi, namenodes, proto, m, paramList);
-  return Collections.singletonMap(location, clazz.cast(result));
+  return Collections.singletonMap(location, (R) result);
 }
 
 List<T> orderedLocations = new LinkedList<>();
@@ -1103,7 +1103,7 @@ public class RouterRpcClient {
 try {
  Future<Object> future = futures.get(i);
   Object result = future.get();
-  results.put(location, clazz.cast(result));
+  results.put(location, (R) result);
 } catch (CancellationException ce) {
   T loc = orderedLocations.get(i);
   String msg = "Invocation to \"" + loc + "\" for \""
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
index 8632203..760d755 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
@@ -111,6 +111,8 @@ import com.google.common.collect.Maps;
 /**
  * The the RPC interface of the {@link Router} implemented by
  * {@link RouterRpcServer}.
+ * Tests covering the functionality of RouterRPCServer with
+ * multi nameServices.
  */
 public class TestRouterRpc {
 
@@ -1256,6 +1258,31 @@ public class TestRouterRpc {
   }
 
   @Test
+  public void testGetCurrentTXIDandRollEdits() throws IOException {
+Long rollEdits = routerProtocol.rollEdits();
+Long currentTXID = routerProtocol.getCurrentEditLogTxid();
+
+assertEquals(rollEdits, currentTXID);
+  }
+
+  @Test
+  public void testSaveNamespace() throws IOException {
+cluster.getCluster().getFileSystem(0)
+.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_ENTER);
+cluster.getCluster().getFileSystem(1)
+.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_ENTER);
+
+Boolean saveNamespace = routerProtocol.saveNamespace(0, 0);
+
+assertTrue(saveNamespace);
+
+cluster.getCluster().getFileSystem(0)
+.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_LEAVE);
+cluster.getCluster().getFileSystem(1)
+.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_LEAVE);
+  }
+
+  @Test
   public void testNamenodeMetrics() throws Exception {
 final NamenodeBeanMetrics metrics =
 

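The RouterClientProtocol fix above is a single token, but it matters because reflective method lookups match parameter types exactly: a method declared with primitive long parameters is found only via long.class, never the boxed Long.class. A standalone illustration:

    import java.lang.reflect.Method;

    public class PrimitiveLookupSketch {
      static class Target {
        boolean saveNamespace(long timeWindow, long txGap) {
          return true;
        }
      }

      public static void main(String[] args) throws Exception {
        // Succeeds: the declaration uses the primitive type.
        Method m = Target.class.getDeclaredMethod(
            "saveNamespace", long.class, long.class);
        System.out.println(m);

        // Throws NoSuchMethodException: boxed types don't match primitives.
        Target.class.getDeclaredMethod(
            "saveNamespace", Long.class, Long.class);
      }
    }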
[hadoop] 30/41: HDFS-14209. RBF: setQuota() through router is working for only the mount Points under the Source column in MountTable. Contributed by Shubham Dewan.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 30a5fba34cd621dff0f6ec65f2263b6209dafec3
Author: Yiqun Lin 
AuthorDate: Wed Jan 23 22:59:43 2019 +0800

HDFS-14209. RBF: setQuota() through router is working for only the mount 
Points under the Source column in MountTable. Contributed by Shubham Dewan.
---
 .../hdfs/server/federation/router/Quota.java   |  7 -
 .../server/federation/router/TestRouterQuota.java  | 32 +-
 2 files changed, 37 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
index cfb538f..a6f5bab 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
@@ -216,6 +216,11 @@ public class Quota {
 locations.addAll(rpcServer.getLocationsForPath(childPath, true, 
false));
   }
 }
-return locations;
+if (locations.size() >= 1) {
+  return locations;
+} else {
+  locations.addAll(rpcServer.getLocationsForPath(path, true, false));
+  return locations;
+}
   }
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
index 656b401..034023c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterQuota.java
@@ -755,4 +755,34 @@ public class TestRouterQuota {
 assertEquals(HdfsConstants.QUOTA_RESET, subClusterQuota.getQuota());
 assertEquals(HdfsConstants.QUOTA_RESET, subClusterQuota.getSpaceQuota());
   }
-}
\ No newline at end of file
+
+  @Test
+  public void testSetQuotaNotMountTable() throws Exception {
+long nsQuota = 5;
+long ssQuota = 100;
+final FileSystem nnFs1 = nnContext1.getFileSystem();
+
+// setQuota should run for any directory
+MountTable mountTable1 = MountTable.newInstance("/setquotanmt",
+Collections.singletonMap("ns0", "/testdir16"));
+
+addMountTable(mountTable1);
+
+// Add a directory not present in mount table.
+nnFs1.mkdirs(new Path("/testdir16/testdir17"));
+
+routerContext.getRouter().getRpcServer().setQuota("/setquotanmt/testdir17",
+nsQuota, ssQuota, null);
+
+RouterQuotaUpdateService updateService = routerContext.getRouter()
+.getQuotaCacheUpdateService();
+// ensure setQuota RPC call was invoked
+updateService.periodicInvoke();
+
+ClientProtocol client1 = nnContext1.getClient().getNamenode();
+final QuotaUsage quota1 = client1.getQuotaUsage("/testdir16/testdir17");
+
+assertEquals(nsQuota, quota1.getQuota());
+assertEquals(ssQuota, quota1.getSpaceQuota());
+  }
+}
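
A usage-level sketch of what this fix enables from a plain client going through the router: before the patch, setQuota() only succeeded when the path was itself a mount table entry; now any directory under a mount point works. The path, quota values, and router endpoint mirror the test and are illustrative.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class SetQuotaUnderMountPoint {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://router:8888"), conf)) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      // /setquotanmt is the mount point; testdir17 is an ordinary
      // subdirectory, not a mount table entry of its own.
      dfs.setQuota(new Path("/setquotanmt/testdir17"), 5, 100);
    }
  }
}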


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 41/41: HDFS-14249. RBF: Tooling to identify the subcluster location of a file. Contributed by Inigo Goiri.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f476bb1ee946d8d3ed1824d0858de2b1fef60b67
Author: Giovanni Matteo Fumarola 
AuthorDate: Wed Feb 20 11:08:55 2019 -0800

HDFS-14249. RBF: Tooling to identify the subcluster location of a file. 
Contributed by Inigo Goiri.
---
 .../RouterAdminProtocolServerSideTranslatorPB.java |  22 
 .../RouterAdminProtocolTranslatorPB.java   |  21 +++
 .../metrics/FederationRPCPerformanceMonitor.java   |   8 +-
 .../federation/resolver/MountTableManager.java |  12 ++
 .../federation/router/RouterAdminServer.java   |  36 ++
 .../federation/store/impl/MountTableStoreImpl.java |   7 +
 .../store/protocol/GetDestinationRequest.java  |  57 
 .../store/protocol/GetDestinationResponse.java |  59 +
 .../impl/pb/GetDestinationRequestPBImpl.java   |  73 +++
 .../impl/pb/GetDestinationResponsePBImpl.java  |  83 
 .../hadoop/hdfs/tools/federation/RouterAdmin.java  |  28 +++-
 .../src/main/proto/FederationProtocol.proto|   8 ++
 .../src/main/proto/RouterProtocol.proto|   5 +
 .../src/site/markdown/HDFSRouterFederation.md  |   4 +
 .../federation/router/TestRouterAdminCLI.java  |  64 -
 ...erRPCMultipleDestinationMountTableResolver.java | 144 +
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md  |   2 +
 17 files changed, 628 insertions(+), 5 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
index a31c46d..6f6724e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
@@ -31,6 +31,8 @@ import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProt
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.EnterSafeModeResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDisabledNameservicesRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDisabledNameservicesResponseProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationRequestProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeRequestProto;
@@ -54,6 +56,8 @@ import 
org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeRequ
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeResponse;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesRequest;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesResponse;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeRequest;
@@ -76,6 +80,8 @@ import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.EnterSafe
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.EnterSafeModeResponsePBImpl;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDisabledNameservicesRequestPBImpl;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDisabledNameservicesResponsePBImpl;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDestinationRequestPBImpl;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDestinationResponsePBImpl;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetMountTableEntriesRequestPBImpl;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetMountTableEntriesResponsePBImpl;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetSafeModeRequestPBImpl;
@@ -298,4 +304,20 @@ public 

[hadoop] 07/41: HDFS-13852. RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys. Contributed by yanghuafeng.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 9f362fa9b52ca3f64e0ce96c97ab9d947df43793
Author: Inigo Goiri 
AuthorDate: Tue Nov 13 10:14:35 2018 -0800

HDFS-13852. RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should 
be configured in RBFConfigKeys. Contributed by yanghuafeng.
---
 .../federation/metrics/FederationMetrics.java  | 12 ++--
 .../federation/metrics/NamenodeBeanMetrics.java| 22 --
 .../server/federation/router/RBFConfigKeys.java|  7 +++
 .../src/main/resources/hdfs-rbf-default.xml| 17 +
 .../router/TestRouterRPCClientRetries.java |  2 +-
 .../server/federation/router/TestRouterRpc.java|  2 +-
 6 files changed, 40 insertions(+), 22 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
index 23f62b6..6a0a46e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
@@ -47,12 +47,14 @@ import javax.management.NotCompliantMBeanException;
 import javax.management.ObjectName;
 import javax.management.StandardMBean;
 
+import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
 import 
org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import 
org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeContext;
 import 
org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;
 import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
+import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
 import org.apache.hadoop.hdfs.server.federation.router.Router;
 import org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer;
 import org.apache.hadoop.hdfs.server.federation.store.MembershipStore;
@@ -95,7 +97,7 @@ public class FederationMetrics implements FederationMBean {
   private static final String DATE_FORMAT = "yyyy/MM/dd HH:mm:ss";
 
   /** Prevent holding the page from loading too long. */
-  private static final long TIME_OUT = TimeUnit.SECONDS.toMillis(1);
+  private final long timeOut;
 
 
   /** Router interface. */
@@ -143,6 +145,12 @@ public class FederationMetrics implements FederationMBean {
   this.routerStore = stateStore.getRegisteredRecordStore(
   RouterStore.class);
 }
+
+// Initialize the cache for the DN reports
+Configuration conf = router.getConfig();
+this.timeOut = conf.getTimeDuration(RBFConfigKeys.DN_REPORT_TIME_OUT,
+RBFConfigKeys.DN_REPORT_TIME_OUT_MS_DEFAULT, TimeUnit.MILLISECONDS);
+
   }
 
   /**
@@ -434,7 +442,7 @@ public class FederationMetrics implements FederationMBean {
 try {
   RouterRpcServer rpcServer = this.router.getRpcServer();
   DatanodeInfo[] live = rpcServer.getDatanodeReport(
-  DatanodeReportType.LIVE, false, TIME_OUT);
+  DatanodeReportType.LIVE, false, timeOut);
 
   if (live.length > 0) {
 float totalDfsUsed = 0;
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
index 0ca5f73..64df10c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
@@ -74,21 +74,6 @@ public class NamenodeBeanMetrics
   private static final Logger LOG =
   LoggerFactory.getLogger(NamenodeBeanMetrics.class);
 
-  /** Prevent holding the page from loading too long. */
-  private static final String DN_REPORT_TIME_OUT =
-  RBFConfigKeys.FEDERATION_ROUTER_PREFIX + "dn-report.time-out";
-  /** We only wait for 1 second. */
-  private static final long DN_REPORT_TIME_OUT_DEFAULT =
-  TimeUnit.SECONDS.toMillis(1);
-
-  /** Time to cache the DN information. */
-  public static final String DN_REPORT_CACHE_EXPIRE =
-  RBFConfigKeys.FEDERATION_ROUTER_PREFIX + "dn-report.cache-expire";
-  /** We cache the DN information for 10 seconds by default. */
-  public static final long DN_REPORT_CACHE_EXPIRE_DEFAULT =
-  TimeUnit.SECONDS.toMillis(10);
-
-
   /** Instance of the Router being monitored. */
   private final Router router;
 
@@ 
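
The message is truncated above; in short, the two formerly hard-coded values become ordinary router settings. A sketch of tuning them, assuming the literal key strings expand from FEDERATION_ROUTER_PREFIX ("dfs.federation.router.") plus the suffixes shown in the removed constants:

import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

public class DnReportTuning {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Wait up to 2 seconds for datanode reports (default: 1 second).
    conf.setTimeDuration("dfs.federation.router.dn-report.time-out",
        2, TimeUnit.SECONDS);
    // Cache datanode information for 30 seconds (default: 10 seconds).
    conf.setTimeDuration("dfs.federation.router.dn-report.cache-expire",
        30, TimeUnit.SECONDS);
    System.out.println(conf.getTimeDuration(
        "dfs.federation.router.dn-report.time-out",
        1000, TimeUnit.MILLISECONDS) + " ms");
  }
}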

[hadoop] 31/41: HDFS-14223. RBF: Add configuration documents for using multiple sub-clusters. Contributed by Takanobu Asanuma.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f40da42ea732cb18e3c50c539dece7168330d6f4
Author: Brahma Reddy Battula 
AuthorDate: Fri Jan 25 11:28:48 2019 +0530

HDFS-14223. RBF: Add configuration documents for using multiple 
sub-clusters. Contributed by Takanobu Asanuma.
---
 .../hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml| 3 ++-
 .../hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md  | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
index 20ae778..afe3ad1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
@@ -275,7 +275,8 @@
     <name>dfs.federation.router.file.resolver.client.class</name>
     <value>org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver</value>
     <description>
-      Class to resolve files to subclusters.
+      Class to resolve files to subclusters. To enable multiple subclusters for a mount point,
+      set to org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver.
     </description>
   </property>
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
index bcf8fa9..2ae0c2b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
@@ -404,7 +404,7 @@ Forwarding client requests to the right subcluster.
 
 | Property | Default | Description |
 |:---- |:---- |:---- |
-| dfs.federation.router.file.resolver.client.class | `org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver` | Class to resolve files to subclusters. |
+| dfs.federation.router.file.resolver.client.class | `org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver` | Class to resolve files to subclusters. To enable multiple subclusters for a mount point, set to org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver. |
 | dfs.federation.router.namenode.resolver.client.class | `org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver` | Class to resolve the namenode for a subcluster. |
 
 ### Namenode monitoring
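
A programmatic equivalent of the documented setting, useful in tests or an embedded router; the key and resolver class names are taken verbatim from the diff above.

import org.apache.hadoop.conf.Configuration;

public class EnableMultipleDestinations {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("dfs.federation.router.file.resolver.client.class",
        "org.apache.hadoop.hdfs.server.federation.resolver."
            + "MultipleDestinationMountTableResolver");
    System.out.println(conf.get(
        "dfs.federation.router.file.resolver.client.class"));
  }
}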


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 35/41: HDFS-14225. RBF : MiniRouterDFSCluster should configure the failover proxy provider for namespace. Contributed by Ranith Sardar.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit bc8317f7dc84db57962d31ffc1b1bcb116e052a6
Author: Surendra Singh Lilhore 
AuthorDate: Tue Feb 5 10:03:04 2019 +0530

HDFS-14225. RBF : MiniRouterDFSCluster should configure the failover proxy 
provider for namespace. Contributed by Ranith Sardar.
---
 .../apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java   | 5 +
 1 file changed, 5 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
index 2df883c..f0bf271 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
@@ -78,6 +78,7 @@ import org.apache.hadoop.hdfs.MiniDFSCluster.NameNodeInfo;
 import org.apache.hadoop.hdfs.MiniDFSNNTopology;
 import org.apache.hadoop.hdfs.MiniDFSNNTopology.NNConf;
 import org.apache.hadoop.hdfs.MiniDFSNNTopology.NSConf;
+import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
 import 
org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import 
org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeServiceState;
 import 
org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;
@@ -87,6 +88,7 @@ import org.apache.hadoop.hdfs.server.federation.router.Router;
 import org.apache.hadoop.hdfs.server.federation.router.RouterClient;
 import org.apache.hadoop.hdfs.server.namenode.FSImage;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
+import 
org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider;
 import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;
 import org.apache.hadoop.http.HttpConfig;
 import org.apache.hadoop.net.NetUtils;
@@ -489,6 +491,9 @@ public class MiniRouterDFSCluster {
 "0.0.0.0");
 conf.set(DFS_NAMENODE_HTTPS_ADDRESS_KEY + "." + suffix,
 "127.0.0.1:" + context.httpsPort);
+conf.set(
+HdfsClientConfigKeys.Failover.PROXY_PROVIDER_KEY_PREFIX + "." + ns,
+ConfiguredFailoverProxyProvider.class.getName());
 
 // If the service port is enabled by default, we need to set them up
 boolean servicePortEnabled = false;
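
For a nameservice named ns0, the conf.set(...) above expands to the standard HA client key. A sketch of the equivalent literal configuration; the key prefix is assumed from HdfsClientConfigKeys.Failover.PROXY_PROVIDER_KEY_PREFIX.

import org.apache.hadoop.conf.Configuration;

public class FailoverProviderConfig {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // PROXY_PROVIDER_KEY_PREFIX + "." + ns, for ns = "ns0".
    conf.set("dfs.client.failover.proxy.provider.ns0",
        "org.apache.hadoop.hdfs.server.namenode.ha."
            + "ConfiguredFailoverProxyProvider");
  }
}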


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 03/41: HDFS-13845. RBF: The default MountTableResolver should fail resolving multi-destination paths. Contributed by yanghuafeng.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f61a816d35863f524d5d5885f9b8a4dd17daeb77
Author: Brahma Reddy Battula 
AuthorDate: Tue Oct 30 11:21:08 2018 +0530

HDFS-13845. RBF: The default MountTableResolver should fail resolving 
multi-destination paths. Contributed by yanghuafeng.
---
 .../federation/resolver/MountTableResolver.java| 15 ++--
 .../resolver/TestMountTableResolver.java   | 45 +-
 .../federation/router/TestDisableNameservices.java | 36 ++---
 3 files changed, 70 insertions(+), 26 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
index 121469f..9e69840 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
@@ -539,21 +539,28 @@ public class MountTableResolver
* @param entry Mount table entry.
* @return PathLocation containing the namespace, local path.
*/
-  private static PathLocation buildLocation(
-  final String path, final MountTable entry) {
-
+  private PathLocation buildLocation(
+  final String path, final MountTable entry) throws IOException {
 String srcPath = entry.getSourcePath();
 if (!path.startsWith(srcPath)) {
   LOG.error("Cannot build location, {} not a child of {}", path, srcPath);
   return null;
 }
+
+List<RemoteLocation> dests = entry.getDestinations();
+if (getClass() == MountTableResolver.class && dests.size() > 1) {
+  throw new IOException("Cannnot build location, "
+  + getClass().getSimpleName()
+  + " should not resolve multiple destinations for " + path);
+}
+
 String remainingPath = path.substring(srcPath.length());
 if (remainingPath.startsWith(Path.SEPARATOR)) {
   remainingPath = remainingPath.substring(1);
 }
 
 List<RemoteLocation> locations = new LinkedList<>();
-for (RemoteLocation oneDst : entry.getDestinations()) {
+for (RemoteLocation oneDst : dests) {
   String nsId = oneDst.getNameserviceId();
   String dest = oneDst.getDest();
   String newPath = dest;
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
index 5e3b861..14ccb61 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
@@ -79,6 +79,8 @@ public class TestMountTableResolver {
* __usr
* bin -> 2:/bin
* __readonly -> 2:/tmp
+   * __multi -> 5:/dest1
+   *6:/dest2
*
* @throws IOException If it cannot set the mount table.
*/
@@ -126,6 +128,12 @@ public class TestMountTableResolver {
 MountTable readOnlyEntry = MountTable.newInstance("/readonly", map);
 readOnlyEntry.setReadOnly(true);
 mountTable.addEntry(readOnlyEntry);
+
+// /multi
+map = getMountTableEntry("5", "/dest1");
+map.put("6", "/dest2");
+MountTable multiEntry = MountTable.newInstance("/multi", map);
+mountTable.addEntry(multiEntry);
   }
 
   @Before
@@ -201,6 +209,17 @@ public class TestMountTableResolver {
 }
   }
 
+  @Test
+  public void testMultipleDestinations() throws IOException {
+try {
+  mountTable.getDestinationForPath("/multi");
+  fail("The getDestinationForPath call should fail.");
+} catch (IOException ioe) {
+  GenericTestUtils.assertExceptionContains(
+  "MountTableResolver should not resolve multiple destinations", ioe);
+}
+  }
+
   private void compareLists(List<String> list1, String[] list2) {
 assertEquals(list1.size(), list2.length);
 for (String item : list2) {
@@ -236,8 +255,9 @@ public class TestMountTableResolver {
 
 // Check getting all mount points (virtual and real) beneath a path
 List mounts = mountTable.getMountPoints("/");
-assertEquals(4, mounts.size());
-compareLists(mounts, new String[] {"tmp", "user", "usr", "readonly"});
+assertEquals(5, mounts.size());
+compareLists(mounts, new String[] {"tmp", "user", "usr",
+"readonly", "multi"});
 
 mounts = mountTable.getMountPoints("/user");
 assertEquals(2, mounts.size());
@@ -263,6 +283,9 @@ public class TestMountTableResolver {
 
 mounts = 
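
The email is cut off above; the remaining hunks continue in the same vein as the multiple-destinations test. A hedged sketch of what callers of the default resolver now see (types as in the diff; building and populating the resolver is elided, as the real suite does it through its test harness):

import java.io.IOException;

import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
import org.apache.hadoop.hdfs.server.federation.resolver.PathLocation;

public class ResolveMultiDestination {
  /** The default resolver now fails fast on multi-destination mounts. */
  static PathLocation resolveOrNull(MountTableResolver mountTable, String path) {
    try {
      return mountTable.getDestinationForPath(path);
    } catch (IOException ioe) {
      // Thrown for entries with more than one destination, e.g.
      // "... MountTableResolver should not resolve multiple destinations ..."
      return null;
    }
  }
}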

[hadoop] 05/41: HDFS-12284. RBF: Support for Kerberos authentication. Contributed by Sherwood Zheng and Inigo Goiri.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 0367198276d273ea21923544b382bb531da70a14
Author: Brahma Reddy Battula 
AuthorDate: Wed Nov 7 07:33:37 2018 +0530

HDFS-12284. RBF: Support for Kerberos authentication. Contributed by 
Sherwood Zheng and Inigo Goiri.
---
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml|  10 ++
 .../server/federation/router/RBFConfigKeys.java|  11 ++
 .../hdfs/server/federation/router/Router.java  |  28 
 .../federation/router/RouterAdminServer.java   |   7 +
 .../server/federation/router/RouterHttpServer.java |   5 +-
 .../server/federation/router/RouterRpcClient.java  |   9 +-
 .../server/federation/router/RouterRpcServer.java  |  12 ++
 .../src/main/resources/hdfs-rbf-default.xml|  47 +++
 .../fs/contract/router/RouterHDFSContract.java |   9 +-
 .../fs/contract/router/SecurityConfUtil.java   | 156 +
 .../router/TestRouterHDFSContractAppendSecure.java |  46 ++
 .../router/TestRouterHDFSContractConcatSecure.java |  51 +++
 .../router/TestRouterHDFSContractCreateSecure.java |  48 +++
 .../router/TestRouterHDFSContractDeleteSecure.java |  46 ++
 .../TestRouterHDFSContractGetFileStatusSecure.java |  47 +++
 .../router/TestRouterHDFSContractMkdirSecure.java  |  48 +++
 .../router/TestRouterHDFSContractOpenSecure.java   |  47 +++
 .../router/TestRouterHDFSContractRenameSecure.java |  48 +++
 .../TestRouterHDFSContractRootDirectorySecure.java |  63 +
 .../router/TestRouterHDFSContractSeekSecure.java   |  48 +++
 .../TestRouterHDFSContractSetTimesSecure.java  |  48 +++
 .../server/federation/MiniRouterDFSCluster.java|  58 +++-
 22 files changed, 879 insertions(+), 13 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
index 6886f00..f38205a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
@@ -35,6 +35,16 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
 
   <dependencies>
     <dependency>
+      <groupId>org.bouncycastle</groupId>
+      <artifactId>bcprov-jdk16</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-minikdc</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-common</artifactId>
       <scope>provided</scope>
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
index bbd4250..fa474f4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
@@ -242,4 +242,15 @@ public class RBFConfigKeys extends 
CommonConfigurationKeysPublic {
   FEDERATION_ROUTER_PREFIX + "quota-cache.update.interval";
   public static final long DFS_ROUTER_QUOTA_CACHE_UPATE_INTERVAL_DEFAULT =
   6;
+
+  // HDFS Router security
+  public static final String DFS_ROUTER_KEYTAB_FILE_KEY =
+  FEDERATION_ROUTER_PREFIX + "keytab.file";
+  public static final String DFS_ROUTER_KERBEROS_PRINCIPAL_KEY =
+  FEDERATION_ROUTER_PREFIX + "kerberos.principal";
+  public static final String DFS_ROUTER_KERBEROS_PRINCIPAL_HOSTNAME_KEY =
+  FEDERATION_ROUTER_PREFIX + "kerberos.principal.hostname";
+
+  public static final String DFS_ROUTER_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY 
=
+  FEDERATION_ROUTER_PREFIX + "kerberos.internal.spnego.principal";
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
index 5ddc129..3288273 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
@@ -17,6 +17,10 @@
  */
 package org.apache.hadoop.hdfs.server.federation.router;
 
+import static 
org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KERBEROS_PRINCIPAL_HOSTNAME_KEY;
+import static 
org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KERBEROS_PRINCIPAL_KEY;
+import static 
org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_KEYTAB_FILE_KEY;
+
 import static 
org.apache.hadoop.hdfs.server.federation.router.FederationUtil.newActiveNamenodeResolver;
 import static 
org.apache.hadoop.hdfs.server.federation.router.FederationUtil.newFileSubclusterResolver;
 
@@ -41,6 +45,8 @@ import 
org.apache.hadoop.hdfs.server.federation.store.RouterStore;
 import 

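
The message is truncated, but all of the new keys appear in the RBFConfigKeys hunk. An illustrative secure-router configuration; the keytab path, principals, and realm are placeholders:

import org.apache.hadoop.conf.Configuration;

public class SecureRouterConfig {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");
    conf.set("dfs.federation.router.keytab.file",
        "/etc/security/keytabs/router.service.keytab"); // placeholder path
    conf.set("dfs.federation.router.kerberos.principal",
        "router/_HOST@EXAMPLE.COM"); // placeholder realm
    conf.set("dfs.federation.router.kerberos.internal.spnego.principal",
        "HTTP/_HOST@EXAMPLE.COM"); // placeholder realm
  }
}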
[hadoop] 32/41: HDFS-14224. RBF: NPE in getContentSummary() for getEcPolicy() in case of multiple destinations. Contributed by Ayush Saxena.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 2f928256feb88488f1c1010da7a682ed71cb4253
Author: Brahma Reddy Battula 
AuthorDate: Mon Jan 28 09:03:32 2019 +0530

HDFS-14224. RBF: NPE in getContentSummary() for getEcPolicy() in case of 
multiple destinations. Contributed by Ayush Saxena.
---
 .../server/federation/router/RouterClientProtocol.java   |  7 +++
 .../federation/router/TestRouterRpcMultiDestination.java | 16 
 2 files changed, 23 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 09f7e5f..485c103 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -1629,6 +1629,7 @@ public class RouterClientProtocol implements 
ClientProtocol {
 long quota = 0;
 long spaceConsumed = 0;
 long spaceQuota = 0;
+String ecPolicy = "";
 
 for (ContentSummary summary : summaries) {
   length += summary.getLength();
@@ -1637,6 +1638,11 @@ public class RouterClientProtocol implements 
ClientProtocol {
   quota += summary.getQuota();
   spaceConsumed += summary.getSpaceConsumed();
   spaceQuota += summary.getSpaceQuota();
+  // We return from the first response as we assume that the EC policy
+  // of each sub-cluster is same.
+  if (ecPolicy.isEmpty()) {
+ecPolicy = summary.getErasureCodingPolicy();
+  }
 }
 
 ContentSummary ret = new ContentSummary.Builder()
@@ -1646,6 +1652,7 @@ public class RouterClientProtocol implements 
ClientProtocol {
 .quota(quota)
 .spaceConsumed(spaceConsumed)
 .spaceQuota(spaceQuota)
+.erasureCodingPolicy(ecPolicy)
 .build();
 return ret;
   }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
index 3101748..3d941bb 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
@@ -41,6 +41,7 @@ import java.util.TreeSet;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
 import org.apache.hadoop.hdfs.protocol.DirectoryListing;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
@@ -230,6 +231,21 @@ public class TestRouterRpcMultiDestination extends 
TestRouterRpc {
   }
 
   @Test
+  public void testGetContentSummaryEc() throws Exception {
+DistributedFileSystem routerDFS =
+(DistributedFileSystem) getRouterFileSystem();
+Path dir = new Path("/");
+String expectedECPolicy = "RS-6-3-1024k";
+try {
+  routerDFS.setErasureCodingPolicy(dir, expectedECPolicy);
+  assertEquals(expectedECPolicy,
+  routerDFS.getContentSummary(dir).getErasureCodingPolicy());
+} finally {
+  routerDFS.unsetErasureCodingPolicy(dir);
+}
+  }
+
+  @Test
   public void testSubclusterDown() throws Exception {
 final int totalFiles = 6;
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 38/41: HDFS-13358. RBF: Support for Delegation Token (RPC). Contributed by CR Hota.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 5f5ba94c80270c97e762da2cecf9e150cf7e4527
Author: Brahma Reddy Battula 
AuthorDate: Thu Feb 14 08:16:45 2019 +0530

HDFS-13358. RBF: Support for Delegation Token (RPC). Contributed by CR Hota.
---
 .../server/federation/router/RBFConfigKeys.java|   9 +
 .../federation/router/RouterClientProtocol.java|  16 +-
 .../server/federation/router/RouterRpcServer.java  |  21 +-
 .../router/security/RouterSecurityManager.java | 239 +
 .../federation/router/security/package-info.java   |  28 +++
 .../token/ZKDelegationTokenSecretManagerImpl.java  |  56 +
 .../router/security/token/package-info.java|  29 +++
 .../src/main/resources/hdfs-rbf-default.xml|  11 +-
 .../fs/contract/router/SecurityConfUtil.java   |   4 +
 .../TestRouterHDFSContractDelegationToken.java | 101 +
 .../security/MockDelegationTokenSecretManager.java |  52 +
 .../security/TestRouterSecurityManager.java|  93 
 12 files changed, 652 insertions(+), 7 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
index 5e907c8..657b6cf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
@@ -28,6 +28,8 @@ import 
org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
 import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
 import 
org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializerPBImpl;
 import 
org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl;
+import 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager;
+import 
org.apache.hadoop.hdfs.server.federation.router.security.token.ZKDelegationTokenSecretManagerImpl;
 
 import java.util.concurrent.TimeUnit;
 
@@ -294,4 +296,11 @@ public class RBFConfigKeys extends 
CommonConfigurationKeysPublic {
 
   public static final String DFS_ROUTER_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY 
=
   FEDERATION_ROUTER_PREFIX + "kerberos.internal.spnego.principal";
+
+  // HDFS Router secret manager for delegation token
+  public static final String DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS =
+  FEDERATION_ROUTER_PREFIX + "secret.manager.class";
+  public static final Class<? extends AbstractDelegationTokenSecretManager>
+  DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS_DEFAULT =
+  ZKDelegationTokenSecretManagerImpl.class;
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index f20b4b6..5383a7d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -77,6 +77,7 @@ import 
org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo
 import 
org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
+import 
org.apache.hadoop.hdfs.server.federation.router.security.RouterSecurityManager;
 import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport;
@@ -124,6 +125,8 @@ public class RouterClientProtocol implements ClientProtocol 
{
   private final ErasureCoding erasureCoding;
   /** StoragePolicy calls. **/
   private final RouterStoragePolicy storagePolicy;
+  /** Router security manager to handle token operations. */
+  private RouterSecurityManager securityManager = null;
 
   RouterClientProtocol(Configuration conf, RouterRpcServer rpcServer) {
 this.rpcServer = rpcServer;
@@ -142,13 +145,14 @@ public class RouterClientProtocol implements 
ClientProtocol {
 DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROUP_DEFAULT);
 this.erasureCoding = new ErasureCoding(rpcServer);
 this.storagePolicy = new RouterStoragePolicy(rpcServer);
+this.securityManager = rpcServer.getRouterSecurityManager();
   }
 
   @Override
   public Token<DelegationTokenIdentifier> getDelegationToken(Text renewer)
   throws IOException {
-
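
The quoted method is cut off at this point. One practical takeaway from the RBFConfigKeys hunk is the pluggable secret-manager driver; a sketch of selecting the default implementation explicitly, with the key string assumed to expand from FEDERATION_ROUTER_PREFIX:

import org.apache.hadoop.conf.Configuration;

public class RouterSecretManagerConfig {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Default per the diff: ZKDelegationTokenSecretManagerImpl.
    conf.set("dfs.federation.router.secret.manager.class",
        "org.apache.hadoop.hdfs.server.federation.router.security.token."
            + "ZKDelegationTokenSecretManagerImpl");
  }
}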

[hadoop] 25/41: HDFS-14206. RBF: Cleanup quota modules. Contributed by Inigo Goiri.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit b6b8d14f99317c9a2eeb650db6651ed9e70f690a
Author: Yiqun Lin 
AuthorDate: Tue Jan 15 14:21:33 2019 +0800

HDFS-14206. RBF: Cleanup quota modules. Contributed by Inigo Goiri.
---
 .../hdfs/server/federation/router/Quota.java   |  6 ++--
 .../federation/router/RouterClientProtocol.java| 22 +++---
 .../federation/router/RouterQuotaManager.java  |  2 +-
 .../router/RouterQuotaUpdateService.java   |  6 ++--
 .../server/federation/router/RouterQuotaUsage.java | 35 --
 5 files changed, 38 insertions(+), 33 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
index 5d0309f..cfb538f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Quota.java
@@ -163,7 +163,7 @@ public class Quota {
 long ssCount = 0;
 long nsQuota = HdfsConstants.QUOTA_RESET;
 long ssQuota = HdfsConstants.QUOTA_RESET;
-boolean hasQuotaUnSet = false;
+boolean hasQuotaUnset = false;
 
 for (Map.Entry<RemoteLocation, QuotaUsage> entry : results.entrySet()) {
   RemoteLocation loc = entry.getKey();
@@ -172,7 +172,7 @@ public class Quota {
 // If quota is not set in real FileSystem, the usage
 // value will return -1.
 if (usage.getQuota() == -1 && usage.getSpaceQuota() == -1) {
-  hasQuotaUnSet = true;
+  hasQuotaUnset = true;
 }
 nsQuota = usage.getQuota();
 ssQuota = usage.getSpaceQuota();
@@ -189,7 +189,7 @@ public class Quota {
 
 QuotaUsage.Builder builder = new QuotaUsage.Builder()
 .fileAndDirectoryCount(nsCount).spaceConsumed(ssCount);
-if (hasQuotaUnSet) {
+if (hasQuotaUnset) {
   builder.quota(HdfsConstants.QUOTA_RESET)
   .spaceQuota(HdfsConstants.QUOTA_RESET);
 } else {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 2089c57..c41959e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -20,7 +20,7 @@ package org.apache.hadoop.hdfs.server.federation.router;
 import static 
org.apache.hadoop.hdfs.server.federation.router.FederationUtil.updateMountPointStatus;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.crypto.CryptoProtocolVersion;
-import org.apache.hadoop.fs.BatchedRemoteIterator;
+import org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries;
 import org.apache.hadoop.fs.CacheFlag;
 import org.apache.hadoop.fs.ContentSummary;
 import org.apache.hadoop.fs.CreateFlag;
@@ -1141,7 +1141,7 @@ public class RouterClientProtocol implements 
ClientProtocol {
   }
 
   @Override
-  public BatchedRemoteIterator.BatchedEntries<CacheDirectiveEntry> listCacheDirectives(
+  public BatchedEntries<CacheDirectiveEntry> listCacheDirectives(
   long prevId, CacheDirectiveInfo filter) throws IOException {
 rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
 return null;
@@ -1163,7 +1163,7 @@ public class RouterClientProtocol implements 
ClientProtocol {
   }
 
   @Override
-  public BatchedRemoteIterator.BatchedEntries<CachePoolEntry> listCachePools(String prevKey)
+  public BatchedEntries<CachePoolEntry> listCachePools(String prevKey)
   throws IOException {
 rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
 return null;
@@ -1274,7 +1274,7 @@ public class RouterClientProtocol implements 
ClientProtocol {
   }
 
   @Override
-  public BatchedRemoteIterator.BatchedEntries<EncryptionZone> listEncryptionZones(long prevId)
+  public BatchedEntries<EncryptionZone> listEncryptionZones(long prevId)
   throws IOException {
 rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
 return null;
@@ -1287,7 +1287,7 @@ public class RouterClientProtocol implements 
ClientProtocol {
   }
 
   @Override
-  public BatchedRemoteIterator.BatchedEntries<ZoneReencryptionStatus> listReencryptionStatus(
+  public BatchedEntries<ZoneReencryptionStatus> listReencryptionStatus(
   long prevId) throws IOException {
 rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
 return null;
@@ -1523,15 +1523,17 @@ public class RouterClientProtocol implements 
ClientProtocol {
 
   @Deprecated
   @Override
-  public BatchedRemoteIterator.BatchedEntries<OpenFileEntry> listOpenFiles(long prevId)

[hadoop] 14/41: Revert "HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui."

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f659c2784a7d8b4c423756f42f2e3505d0ba83ea
Author: Yiqun Lin 
AuthorDate: Tue Dec 4 22:16:00 2018 +0800

Revert "HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. 
Contributed by Fei Hui."

This reverts commit 7c0d6f65fde12ead91ed7c706521ad1d3dc995f8.
---
 .../federation/router/ConnectionManager.java   | 20 -
 .../server/federation/router/ConnectionPool.java   | 14 +-
 .../server/federation/router/RBFConfigKeys.java|  5 ---
 .../src/main/resources/hdfs-rbf-default.xml|  8 
 .../federation/router/TestConnectionManager.java   | 51 +++---
 5 files changed, 15 insertions(+), 83 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index 745..fa2bf94 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -49,6 +49,10 @@ public class ConnectionManager {
   private static final Logger LOG =
   LoggerFactory.getLogger(ConnectionManager.class);
 
+  /** Minimum amount of active connections: 50%. */
+  protected static final float MIN_ACTIVE_RATIO = 0.5f;
+
+
   /** Configuration for the connection manager, pool and sockets. */
   private final Configuration conf;
 
@@ -56,8 +60,6 @@ public class ConnectionManager {
   private final int minSize = 1;
   /** Max number of connections per user + nn. */
   private final int maxSize;
-  /** Min ratio of active connections per user + nn. */
-  private final float minActiveRatio;
 
   /** How often we close a pool for a particular user + nn. */
   private final long poolCleanupPeriodMs;
@@ -94,13 +96,10 @@ public class ConnectionManager {
   public ConnectionManager(Configuration config) {
 this.conf = config;
 
-// Configure minimum, maximum and active connection pools
+// Configure minimum and maximum connection pools
 this.maxSize = this.conf.getInt(
 RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE,
 RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE_DEFAULT);
-this.minActiveRatio = this.conf.getFloat(
-RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO,
-RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO_DEFAULT);
 
 // Map with the connections indexed by UGI and Namenode
 this.pools = new HashMap<>();
@@ -204,8 +203,7 @@ public class ConnectionManager {
 pool = this.pools.get(connectionId);
 if (pool == null) {
   pool = new ConnectionPool(
-  this.conf, nnAddress, ugi, this.minSize, this.maxSize,
-  this.minActiveRatio, protocol);
+  this.conf, nnAddress, ugi, this.minSize, this.maxSize, protocol);
   this.pools.put(connectionId, pool);
 }
   } finally {
@@ -328,9 +326,8 @@ public class ConnectionManager {
   long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
   int total = pool.getNumConnections();
   int active = pool.getNumActiveConnections();
-  float poolMinActiveRatio = pool.getMinActiveRatio();
   if (timeSinceLastActive > connectionCleanupPeriodMs ||
-  active < poolMinActiveRatio * total) {
+  active < MIN_ACTIVE_RATIO * total) {
 // Remove and close 1 connection
 List<ConnectionContext> conns = pool.removeConnections(1);
 for (ConnectionContext conn : conns) {
@@ -415,9 +412,8 @@ public class ConnectionManager {
   try {
 int total = pool.getNumConnections();
 int active = pool.getNumActiveConnections();
-float poolMinActiveRatio = pool.getMinActiveRatio();
 if (pool.getNumConnections() < pool.getMaxSize() &&
-active >= poolMinActiveRatio * total) {
+active >= MIN_ACTIVE_RATIO * total) {
   ConnectionContext conn = pool.newConnection();
   pool.addConnection(conn);
 } else {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
index f868521..fab3b81 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
@@ -91,8 +91,6 @@ public class 

[hadoop] 15/41: HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f945456092884d51d5e7efe020193641399f3a29
Author: Yiqun Lin 
AuthorDate: Wed Dec 5 11:44:38 2018 +0800

HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by 
Fei Hui.
---
 .../federation/router/ConnectionManager.java   | 20 
 .../server/federation/router/ConnectionPool.java   | 14 +-
 .../server/federation/router/RBFConfigKeys.java|  5 ++
 .../src/main/resources/hdfs-rbf-default.xml|  8 
 .../federation/router/TestConnectionManager.java   | 55 ++
 5 files changed, 85 insertions(+), 17 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index fa2bf94..745 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -49,10 +49,6 @@ public class ConnectionManager {
   private static final Logger LOG =
   LoggerFactory.getLogger(ConnectionManager.class);
 
-  /** Minimum amount of active connections: 50%. */
-  protected static final float MIN_ACTIVE_RATIO = 0.5f;
-
-
   /** Configuration for the connection manager, pool and sockets. */
   private final Configuration conf;
 
@@ -60,6 +56,8 @@ public class ConnectionManager {
   private final int minSize = 1;
   /** Max number of connections per user + nn. */
   private final int maxSize;
+  /** Min ratio of active connections per user + nn. */
+  private final float minActiveRatio;
 
   /** How often we close a pool for a particular user + nn. */
   private final long poolCleanupPeriodMs;
@@ -96,10 +94,13 @@ public class ConnectionManager {
   public ConnectionManager(Configuration config) {
 this.conf = config;
 
-// Configure minimum and maximum connection pools
+// Configure minimum, maximum and active connection pools
 this.maxSize = this.conf.getInt(
 RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE,
 RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE_DEFAULT);
+this.minActiveRatio = this.conf.getFloat(
+RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO,
+RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO_DEFAULT);
 
 // Map with the connections indexed by UGI and Namenode
 this.pools = new HashMap<>();
@@ -203,7 +204,8 @@ public class ConnectionManager {
 pool = this.pools.get(connectionId);
 if (pool == null) {
   pool = new ConnectionPool(
-  this.conf, nnAddress, ugi, this.minSize, this.maxSize, protocol);
+  this.conf, nnAddress, ugi, this.minSize, this.maxSize,
+  this.minActiveRatio, protocol);
   this.pools.put(connectionId, pool);
 }
   } finally {
@@ -326,8 +328,9 @@ public class ConnectionManager {
   long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
   int total = pool.getNumConnections();
   int active = pool.getNumActiveConnections();
+  float poolMinActiveRatio = pool.getMinActiveRatio();
   if (timeSinceLastActive > connectionCleanupPeriodMs ||
-  active < MIN_ACTIVE_RATIO * total) {
+  active < poolMinActiveRatio * total) {
 // Remove and close 1 connection
 List<ConnectionContext> conns = pool.removeConnections(1);
 for (ConnectionContext conn : conns) {
@@ -412,8 +415,9 @@ public class ConnectionManager {
   try {
 int total = pool.getNumConnections();
 int active = pool.getNumActiveConnections();
+float poolMinActiveRatio = pool.getMinActiveRatio();
 if (pool.getNumConnections() < pool.getMaxSize() &&
-active >= MIN_ACTIVE_RATIO * total) {
+active >= poolMinActiveRatio * total) {
   ConnectionContext conn = pool.newConnection();
   pool.addConnection(conn);
 } else {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
index fab3b81..f868521 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
@@ -91,6 +91,8 @@ public class ConnectionPool {
   private final int minSize;
   /** Max number of connections per 
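
Taking the revert and this re-apply together, the net change makes the formerly hard-coded 0.5f ratio configurable. A tuning sketch; the literal key name is an assumption consistent with the RBFConfigKeys constant DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO, and the default mirrors the removed MIN_ACTIVE_RATIO:

import org.apache.hadoop.conf.Configuration;

public class ConnectionPoolTuning {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Grow the pool only while at least 60% of connections are active
    // (assumed key name; default 0.5f per the old constant).
    conf.setFloat("dfs.federation.router.connection.min-active-ratio", 0.6f);
  }
}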

[hadoop] 01/41: HDFS-13906. RBF: Add multiple paths for dfsrouteradmin 'rm' and 'clrquota' commands. Contributed by Ayush Saxena.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 8bc2ad2d874fc0e6a681aa587518cb699f3f7b75
Author: Vinayakumar B 
AuthorDate: Fri Oct 12 17:19:55 2018 +0530

HDFS-13906. RBF: Add multiple paths for dfsrouteradmin 'rm' and 'clrquota' 
commands. Contributed by Ayush Saxena.
---
 .../hadoop/hdfs/tools/federation/RouterAdmin.java  | 102 +++--
 .../federation/router/TestRouterAdminCLI.java  |  82 ++---
 2 files changed, 122 insertions(+), 62 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
index 1aefe4f..4a9cc7a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
@@ -151,17 +151,7 @@ public class RouterAdmin extends Configured implements 
Tool {
* @param arg List of command line parameters.
*/
   private void validateMax(String[] arg) {
-if (arg[0].equals("-rm")) {
-  if (arg.length > 2) {
-throw new IllegalArgumentException(
-"Too many arguments, Max=1 argument allowed");
-  }
-} else if (arg[0].equals("-ls")) {
-  if (arg.length > 2) {
-throw new IllegalArgumentException(
-"Too many arguments, Max=1 argument allowed");
-  }
-} else if (arg[0].equals("-clrQuota")) {
+if (arg[0].equals("-ls")) {
   if (arg.length > 2) {
 throw new IllegalArgumentException(
 "Too many arguments, Max=1 argument allowed");
@@ -183,63 +173,63 @@ public class RouterAdmin extends Configured implements 
Tool {
 }
   }
 
-  @Override
-  public int run(String[] argv) throws Exception {
-if (argv.length < 1) {
-  System.err.println("Not enough parameters specified");
-  printUsage();
-  return -1;
-}
-
-int exitCode = -1;
-int i = 0;
-String cmd = argv[i++];
-
-// Verify that we have enough command line parameters
+  /**
+   * Validates the minimum number of arguments for a command.
+   * @param argv List of command line parameters.
+   * @return true if the number of arguments is valid for the command, else false.
+   */
+  private boolean validateMin(String[] argv) {
+String cmd = argv[0];
 if ("-add".equals(cmd)) {
   if (argv.length < 4) {
-System.err.println("Not enough parameters specified for cmd " + cmd);
-printUsage(cmd);
-return exitCode;
+return false;
   }
 } else if ("-update".equals(cmd)) {
   if (argv.length < 4) {
-System.err.println("Not enough parameters specified for cmd " + cmd);
-printUsage(cmd);
-return exitCode;
+return false;
   }
 } else if ("-rm".equals(cmd)) {
   if (argv.length < 2) {
-System.err.println("Not enough parameters specified for cmd " + cmd);
-printUsage(cmd);
-return exitCode;
+return false;
   }
 } else if ("-setQuota".equals(cmd)) {
   if (argv.length < 4) {
-System.err.println("Not enough parameters specified for cmd " + cmd);
-printUsage(cmd);
-return exitCode;
+return false;
   }
 } else if ("-clrQuota".equals(cmd)) {
   if (argv.length < 2) {
-System.err.println("Not enough parameters specified for cmd " + cmd);
-printUsage(cmd);
-return exitCode;
+return false;
   }
 } else if ("-safemode".equals(cmd)) {
   if (argv.length < 2) {
-System.err.println("Not enough parameters specified for cmd " + cmd);
-printUsage(cmd);
-return exitCode;
+return false;
   }
 } else if ("-nameservice".equals(cmd)) {
   if (argv.length < 3) {
-System.err.println("Not enough parameters specificed for cmd " + cmd);
-printUsage(cmd);
-return exitCode;
+return false;
   }
 }
+return true;
+  }
+
+  @Override
+  public int run(String[] argv) throws Exception {
+if (argv.length < 1) {
+  System.err.println("Not enough parameters specified");
+  printUsage();
+  return -1;
+}
+
+int exitCode = -1;
+int i = 0;
+String cmd = argv[i++];
 
+// Verify that we have enough command line parameters
+if (!validateMin(argv)) {
+  System.err.println("Not enough parameters specified for cmd " + cmd);
+  printUsage(cmd);
+  return exitCode;
+}
 // Initialize RouterClient
 try {
   String address = getConf().getTrimmed(
@@ -273,8 +263,17 @@ public class RouterAdmin extends Configured implements 
Tool {
   exitCode = -1;
 }
   } else if 
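
The run() refactor is truncated above; at the usage level, the change lets -rm and -clrQuota take several mount entries per invocation. A sketch via ToolRunner; the RouterAdmin(Configuration) constructor is assumed from the class extending Configured and implementing Tool:

import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.tools.federation.RouterAdmin;
import org.apache.hadoop.util.ToolRunner;

public class RemoveMultipleMounts {
  public static void main(String[] args) throws Exception {
    // Equivalent to: hdfs dfsrouteradmin -rm /mnt/a /mnt/b
    int rc = ToolRunner.run(new RouterAdmin(new HdfsConfiguration()),
        new String[] {"-rm", "/mnt/a", "/mnt/b"});
    System.exit(rc);
  }
}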

[hadoop] 13/41: HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by Fei Hui.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 0ffeac3ae0c37acd1679ab91335a0746081b5cb7
Author: Yiqun Lin 
AuthorDate: Tue Dec 4 19:58:38 2018 +0800

HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. Contributed by 
Fei Hui.
---
 .../federation/router/ConnectionManager.java   | 20 +
 .../server/federation/router/ConnectionPool.java   | 14 +-
 .../server/federation/router/RBFConfigKeys.java|  5 +++
 .../src/main/resources/hdfs-rbf-default.xml|  8 
 .../federation/router/TestConnectionManager.java   | 51 +++---
 5 files changed, 83 insertions(+), 15 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index fa2bf94..745 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -49,10 +49,6 @@ public class ConnectionManager {
   private static final Logger LOG =
   LoggerFactory.getLogger(ConnectionManager.class);
 
-  /** Minimum amount of active connections: 50%. */
-  protected static final float MIN_ACTIVE_RATIO = 0.5f;
-
-
   /** Configuration for the connection manager, pool and sockets. */
   private final Configuration conf;
 
@@ -60,6 +56,8 @@ public class ConnectionManager {
   private final int minSize = 1;
   /** Max number of connections per user + nn. */
   private final int maxSize;
+  /** Min ratio of active connections per user + nn. */
+  private final float minActiveRatio;
 
   /** How often we close a pool for a particular user + nn. */
   private final long poolCleanupPeriodMs;
@@ -96,10 +94,13 @@ public class ConnectionManager {
   public ConnectionManager(Configuration config) {
 this.conf = config;
 
-// Configure minimum and maximum connection pools
+// Configure minimum, maximum and active connection pools
 this.maxSize = this.conf.getInt(
 RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE,
 RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE_DEFAULT);
+this.minActiveRatio = this.conf.getFloat(
+RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO,
+RBFConfigKeys.DFS_ROUTER_NAMENODE_CONNECTION_MIN_ACTIVE_RATIO_DEFAULT);
 
 // Map with the connections indexed by UGI and Namenode
 this.pools = new HashMap<>();
@@ -203,7 +204,8 @@ public class ConnectionManager {
 pool = this.pools.get(connectionId);
 if (pool == null) {
   pool = new ConnectionPool(
-  this.conf, nnAddress, ugi, this.minSize, this.maxSize, protocol);
+  this.conf, nnAddress, ugi, this.minSize, this.maxSize,
+  this.minActiveRatio, protocol);
   this.pools.put(connectionId, pool);
 }
   } finally {
@@ -326,8 +328,9 @@ public class ConnectionManager {
   long timeSinceLastActive = Time.now() - pool.getLastActiveTime();
   int total = pool.getNumConnections();
   int active = pool.getNumActiveConnections();
+  float poolMinActiveRatio = pool.getMinActiveRatio();
   if (timeSinceLastActive > connectionCleanupPeriodMs ||
-  active < MIN_ACTIVE_RATIO * total) {
+  active < poolMinActiveRatio * total) {
 // Remove and close 1 connection
List<ConnectionContext> conns = pool.removeConnections(1);
 for (ConnectionContext conn : conns) {
@@ -412,8 +415,9 @@ public class ConnectionManager {
   try {
 int total = pool.getNumConnections();
 int active = pool.getNumActiveConnections();
+float poolMinActiveRatio = pool.getMinActiveRatio();
 if (pool.getNumConnections() < pool.getMaxSize() &&
-active >= MIN_ACTIVE_RATIO * total) {
+active >= poolMinActiveRatio * total) {
   ConnectionContext conn = pool.newConnection();
   pool.addConnection(conn);
 } else {
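Both hunks above apply one rule: a pool shrinks when its active connections
fall below minActiveRatio of the total, and grows when utilization is at or
above the ratio and the pool still has headroom. A minimal standalone sketch
of that decision follows; the names are illustrative, not the ConnectionManager
API.

// Sketch of the minActiveRatio decision used above; not the real API.
final class PoolSizer {
  enum Action { GROW, SHRINK, KEEP }

  static Action decide(int total, int active, int maxSize,
      float minActiveRatio) {
    if (total < maxSize && active >= minActiveRatio * total) {
      return Action.GROW;   // busy pool with headroom: add a connection
    }
    if (active < minActiveRatio * total) {
      return Action.SHRINK; // mostly idle pool: close a connection
    }
    return Action.KEEP;
  }
}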
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
index fab3b81..f868521 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionPool.java
@@ -91,6 +91,8 @@ public class ConnectionPool {
   private final int minSize;
   /** Max number of connections per 

[hadoop] 22/41: HDFS-14150. RBF: Quotas of the sub-cluster should be removed when removing the mount point. Contributed by Takanobu Asanuma.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit c74f7e12ac1a77e07920e89c4ec8fef93a42885d
Author: Yiqun Lin 
AuthorDate: Wed Jan 9 17:18:43 2019 +0800

HDFS-14150. RBF: Quotas of the sub-cluster should be removed when removing 
the mount point. Contributed by Takanobu Asanuma.
---
 .../federation/router/RouterAdminServer.java   | 23 +++
 .../src/main/resources/hdfs-rbf-default.xml|  4 +-
 .../src/site/markdown/HDFSRouterFederation.md  |  4 +-
 .../server/federation/router/TestRouterQuota.java  | 48 +-
 4 files changed, 67 insertions(+), 12 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
index 5bb7751..18c19e0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
@@ -250,23 +250,25 @@ public class RouterAdminServer extends AbstractService
 
 MountTable mountTable = request.getEntry();
 if (mountTable != null && router.isQuotaEnabled()) {
-  synchronizeQuota(mountTable);
+  synchronizeQuota(mountTable.getSourcePath(),
+  mountTable.getQuota().getQuota(),
+  mountTable.getQuota().getSpaceQuota());
 }
 return response;
   }
 
   /**
* Synchronize the quota value across mount table and subclusters.
-   * @param mountTable Quota set in given mount table.
+   * @param path Source path in given mount table.
+   * @param nsQuota Name quota definition in given mount table.
+   * @param ssQuota Space quota definition in given mount table.
* @throws IOException
*/
-  private void synchronizeQuota(MountTable mountTable) throws IOException {
-String path = mountTable.getSourcePath();
-long nsQuota = mountTable.getQuota().getQuota();
-long ssQuota = mountTable.getQuota().getSpaceQuota();
-
-if (nsQuota != HdfsConstants.QUOTA_DONT_SET
-|| ssQuota != HdfsConstants.QUOTA_DONT_SET) {
+  private void synchronizeQuota(String path, long nsQuota, long ssQuota)
+  throws IOException {
+if (router.isQuotaEnabled() &&
+(nsQuota != HdfsConstants.QUOTA_DONT_SET
+|| ssQuota != HdfsConstants.QUOTA_DONT_SET)) {
   HdfsFileStatus ret = this.router.getRpcServer().getFileInfo(path);
   if (ret != null) {
 this.router.getRpcServer().getQuotaModule().setQuota(path, nsQuota,
@@ -278,6 +280,9 @@ public class RouterAdminServer extends AbstractService
   @Override
   public RemoveMountTableEntryResponse removeMountTableEntry(
   RemoveMountTableEntryRequest request) throws IOException {
+// clear sub-cluster's quota definition
+synchronizeQuota(request.getSrcPath(), HdfsConstants.QUOTA_RESET,
+HdfsConstants.QUOTA_RESET);
 return getMountTableStore().removeMountTableEntry(request);
   }
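The removal path above reuses synchronizeQuota with HdfsConstants.QUOTA_RESET
for both the name and space quota, which clears the quota on the sub-cluster.
A short sketch of the sentinel semantics, as I read HdfsConstants; the wrapper
class is illustrative.

import org.apache.hadoop.hdfs.protocol.HdfsConstants;

// Sketch: QUOTA_DONT_SET (Long.MAX_VALUE) means "leave the quota untouched",
// while QUOTA_RESET (-1) clears any existing quota, which is why the removal
// path passes QUOTA_RESET for both values.
final class QuotaSentinels {
  static boolean shouldPushQuota(long nsQuota, long ssQuota) {
    // Mirrors the guard in synchronizeQuota: only touch the sub-cluster
    // when at least one value is being set or cleared.
    return nsQuota != HdfsConstants.QUOTA_DONT_SET
        || ssQuota != HdfsConstants.QUOTA_DONT_SET;
  }
}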
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
index 72f6c2f..20ae778 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
@@ -447,7 +447,9 @@
 dfs.federation.router.quota.enable
 false
 
-  Set to true to enable quota system in Router.
+  Set to true to enable quota system in Router. When it's enabled, setting
+  or clearing sub-cluster's quota directly is not recommended since Router
+  Admin server will override sub-cluster's quota with global quota.
 
   
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
index adc4383..959cd63 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
@@ -143,6 +143,8 @@ For performance reasons, the Router caches the quota usage 
and updates it period
 will be used for quota-verification during each WRITE RPC call invoked in 
RouterRPCServer. See [HDFS Quotas Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html)
 for the quota detail.
 
+Note: When global quota is enabled, setting or clearing sub-cluster's quota 
directly is not recommended since Router Admin server will override 
sub-cluster's quota with global quota.
+
 ### State Store
 The (logically centralized, but physically distributed) State Store maintains:
 
@@ -421,7 +423,7 @@ Global quota supported in federation.
 
 | 

[hadoop] 20/41: HDFS-14167. RBF: Add stale nodes to federation metrics. Contributed by Inigo Goiri.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 3b971fe4d1e63fbf262be403a9df93e771b19c44
Author: Inigo Goiri 
AuthorDate: Wed Jan 2 10:38:33 2019 -0800

HDFS-14167. RBF: Add stale nodes to federation metrics. Contributed by 
Inigo Goiri.
---
 .../server/federation/metrics/FederationMBean.java |  6 ++
 .../server/federation/metrics/FederationMetrics.java   |  6 ++
 .../server/federation/metrics/NamenodeBeanMetrics.java |  7 ++-
 .../resolver/MembershipNamenodeResolver.java   |  1 +
 .../federation/resolver/NamenodeStatusReport.java  | 18 +++---
 .../federation/router/NamenodeHeartbeatService.java|  1 +
 .../federation/store/records/MembershipStats.java  |  4 
 .../store/records/impl/pb/MembershipStatsPBImpl.java   | 10 ++
 .../src/main/proto/FederationProtocol.proto|  1 +
 .../federation/metrics/TestFederationMetrics.java  |  7 +++
 .../federation/store/records/TestMembershipState.java  |  3 +++
 11 files changed, 60 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java
index 79fb3e4..b37f5ef 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java
@@ -107,6 +107,12 @@ public interface FederationMBean {
   int getNumDeadNodes();
 
   /**
+   * Get the number of stale datanodes.
+   * @return Number of stale datanodes.
+   */
+  int getNumStaleNodes();
+
+  /**
* Get the number of decommissioning datanodes.
* @return Number of decommissioning datanodes.
*/
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
index 6a0a46e..b3fe6cc 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
@@ -414,6 +414,12 @@ public class FederationMetrics implements FederationMBean {
   }
 
   @Override
+  public int getNumStaleNodes() {
+return getNameserviceAggregatedInt(
+MembershipStats::getNumOfStaleDatanodes);
+  }
+
+  @Override
   public int getNumDecommissioningNodes() {
 return getNameserviceAggregatedInt(
 MembershipStats::getNumOfDecommissioningDatanodes);
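getNumStaleNodes follows the same pattern as the neighboring counters: sum one
MembershipStats field across all nameservices. A self-contained sketch of that
aggregation; the Stats type here is a stand-in, not the real record class.

import java.util.Arrays;
import java.util.List;
import java.util.function.ToIntFunction;

// Sketch of the getNameserviceAggregatedInt pattern used above.
final class StatsAggregator {
  // Stand-in for the per-nameservice MembershipStats record.
  static final class Stats {
    private final int staleDatanodes;
    Stats(int staleDatanodes) { this.staleDatanodes = staleDatanodes; }
    int getNumOfStaleDatanodes() { return staleDatanodes; }
  }

  static int aggregate(List<Stats> perNameservice,
      ToIntFunction<Stats> field) {
    return perNameservice.stream().mapToInt(field).sum();
  }

  public static void main(String[] args) {
    List<Stats> stats = Arrays.asList(new Stats(2), new Stats(0), new Stats(1));
    // Prints 3: the federated total, as the router's MBean would report it.
    System.out.println(aggregate(stats, Stats::getNumOfStaleDatanodes));
  }
}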
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
index 25ec27c..5e95606 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
@@ -626,7 +626,12 @@ public class NamenodeBeanMetrics
 
   @Override
   public int getNumStaleDataNodes() {
-return -1;
+try {
+  return getFederationMetrics().getNumStaleNodes();
+} catch (IOException e) {
+  LOG.debug("Failed to get number of stale nodes: {}", e.getMessage());
+}
+return 0;
   }
 
   @Override
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java
index 2707304..178db1b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java
@@ -280,6 +280,7 @@ public class MembershipNamenodeResolver
   report.getNumDecommissioningDatanodes());
   stats.setNumOfActiveDatanodes(report.getNumLiveDatanodes());
   stats.setNumOfDeadDatanodes(report.getNumDeadDatanodes());
+  stats.setNumOfStaleDatanodes(report.getNumStaleDatanodes());
   stats.setNumOfDecomActiveDatanodes(report.getNumDecomLiveDatanodes());
   stats.setNumOfDecomDeadDatanodes(report.getNumDecomDeadDatanodes());
   record.setStats(stats);
diff --git 

[hadoop] 02/41: HDFS-14011. RBF: Add more information to HdfsFileStatus for a mount point. Contributed by Akira Ajisaka.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit dca3b2edf2ac77019c9d6c7d76ca35f2f451327c
Author: Yiqun Lin 
AuthorDate: Tue Oct 23 14:34:29 2018 +0800

HDFS-14011. RBF: Add more information to HdfsFileStatus for a mount point. 
Contributed by Akira Ajisaka.
---
 .../resolver/FileSubclusterResolver.java   |  6 ++-
 .../federation/router/RouterClientProtocol.java| 30 +---
 .../router/RouterQuotaUpdateService.java   |  9 ++--
 .../hdfs/server/federation/MockResolver.java   | 17 +++
 .../federation/router/TestRouterMountTable.java| 55 +-
 .../router/TestRouterRpcMultiDestination.java  |  5 +-
 6 files changed, 97 insertions(+), 25 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FileSubclusterResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FileSubclusterResolver.java
index 5aa5ec9..6432bb0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FileSubclusterResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FileSubclusterResolver.java
@@ -61,8 +61,10 @@ public interface FileSubclusterResolver {
* cache.
*
* @param path Path to get the mount points under.
-   * @return List of mount points present at this path or zero-length list if
-   * none are found.
+   * @return List of mount points present at this path. Return zero-length
+   * list if the path is a mount point but there are no mount points
+   * under the path. Return null if the path is not a mount point
+   * and there are no mount points under the path.
* @throws IOException Throws exception if the data is not available.
*/
  List<String> getMountPoints(String path) throws IOException;
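The sharpened contract above distinguishes an empty list (the path is a mount
point with nothing under it) from null (the path is not a mount point and has
none below it). A small sketch of a caller honoring that contract; the handling
strings are illustrative.

import java.util.List;

// Sketch: the three cases of the getMountPoints contract clarified above.
final class MountPointListing {
  static String describe(List<String> children) {
    if (children == null) {
      return "not a mount point: fall through to a subcluster lookup";
    } else if (children.isEmpty()) {
      return "mount point with no children: synthesize an empty dir status";
    }
    return "mount point with " + children.size() + " children: list them";
  }
}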
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 344401f..9e2979b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -720,6 +720,9 @@ public class RouterClientProtocol implements ClientProtocol 
{
   date = dates.get(src);
 }
 ret = getMountPointStatus(src, children.size(), date);
+  } else if (children != null) {
+// The src is a mount point, but there are no files or directories
+ret = getMountPointStatus(src, 0, 0);
   }
 }
 
@@ -1728,13 +1731,26 @@ public class RouterClientProtocol implements 
ClientProtocol {
 FsPermission permission = FsPermission.getDirDefault();
 String owner = this.superUser;
 String group = this.superGroup;
-try {
-  // TODO support users, it should be the user for the pointed folder
-  UserGroupInformation ugi = RouterRpcServer.getRemoteUser();
-  owner = ugi.getUserName();
-  group = ugi.getPrimaryGroupName();
-} catch (IOException e) {
-  LOG.error("Cannot get the remote user: {}", e.getMessage());
+if (subclusterResolver instanceof MountTableResolver) {
+  try {
+MountTableResolver mountTable = (MountTableResolver) 
subclusterResolver;
+MountTable entry = mountTable.getMountPoint(name);
+if (entry != null) {
+  permission = entry.getMode();
+  owner = entry.getOwnerName();
+  group = entry.getGroupName();
+}
+  } catch (IOException e) {
+LOG.error("Cannot get mount point: {}", e.getMessage());
+  }
+} else {
+  try {
+UserGroupInformation ugi = RouterRpcServer.getRemoteUser();
+owner = ugi.getUserName();
+group = ugi.getPrimaryGroupName();
+  } catch (IOException e) {
+LOG.error("Cannot get remote user: {}", e.getMessage());
+  }
 }
 long inodeId = 0;
 return new HdfsFileStatus.Builder()
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
index 4813b53..9bfd705 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterQuotaUpdateService.java
@@ 

[hadoop] 28/41: HDFS-14193. RBF: Inconsistency with the Default Namespace. Contributed by Ayush Saxena.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 6c9c040688cc4f69a036bbfe91a5c54fe72dc98d
Author: Vinayakumar B 
AuthorDate: Wed Jan 16 18:06:17 2019 +0530

HDFS-14193. RBF: Inconsistency with the Default Namespace. Contributed by 
Ayush Saxena.
---
 .../federation/resolver/MountTableResolver.java| 27 --
 .../resolver/TestInitializeMountTableResolver.java | 32 +++---
 2 files changed, 16 insertions(+), 43 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
index 9e69840..da58551 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
@@ -17,8 +17,6 @@
  */
 package org.apache.hadoop.hdfs.server.federation.resolver;
 
-import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_NAMESERVICES;
-import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DeprecatedKeys.DFS_NAMESERVICE_ID;
 import static 
org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE;
 import static 
org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE;
 import static 
org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE_DEFAULT;
@@ -50,8 +48,6 @@ import java.util.concurrent.locks.ReentrantReadWriteLock;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hdfs.DFSUtil;
-import org.apache.hadoop.hdfs.DFSUtilClient;
 import 
org.apache.hadoop.hdfs.server.federation.resolver.order.DestinationOrder;
 import org.apache.hadoop.hdfs.server.federation.router.Router;
 import org.apache.hadoop.hdfs.server.federation.store.MountTableStore;
@@ -163,33 +159,22 @@ public class MountTableResolver
* @param conf Configuration for this resolver.
*/
   private void initDefaultNameService(Configuration conf) {
-this.defaultNameService = conf.get(
-DFS_ROUTER_DEFAULT_NAMESERVICE,
-DFSUtil.getNamenodeNameServiceId(conf));
-
 this.defaultNSEnable = conf.getBoolean(
 DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE,
 DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE_DEFAULT);
 
-if (defaultNameService == null) {
-  LOG.warn(
-  "{} and {} is not set. Fallback to {} as the default name service.",
-  DFS_ROUTER_DEFAULT_NAMESERVICE, DFS_NAMESERVICE_ID, 
DFS_NAMESERVICES);
-  Collection nsIds = DFSUtilClient.getNameServiceIds(conf);
-  if (nsIds.isEmpty()) {
-this.defaultNameService = "";
-  } else {
-this.defaultNameService = nsIds.iterator().next();
-  }
+if (!this.defaultNSEnable) {
+  LOG.warn("Default name service is disabled.");
+  return;
 }
+this.defaultNameService = conf.get(DFS_ROUTER_DEFAULT_NAMESERVICE, "");
 
 if (this.defaultNameService.equals("")) {
   this.defaultNSEnable = false;
   LOG.warn("Default name service is not set.");
 } else {
-  String enable = this.defaultNSEnable ? "enabled" : "disabled";
-  LOG.info("Default name service: {}, {} to read or write",
-  this.defaultNameService, enable);
+  LOG.info("Default name service: {}, enabled to read or write",
+  this.defaultNameService);
 }
   }
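After this change the resolver no longer falls back to dfs.nameservice.id or
dfs.nameservices; the default namespace comes only from the router-specific
key, and an empty value disables it. A sketch of the resulting decision flow;
the key strings follow my reading of RBFConfigKeys and should be treated as
assumptions.

import org.apache.hadoop.conf.Configuration;

// Sketch of the simplified default-nameservice resolution above.
final class DefaultNsResolution {
  static String resolve(Configuration conf) {
    boolean enabled = conf.getBoolean(
        "dfs.federation.router.default.nameserviceId.enable", true); // assumed key
    if (!enabled) {
      return null; // default namespace explicitly disabled
    }
    // No fallback to dfs.nameservice.id / dfs.nameservices anymore.
    String ns = conf.get("dfs.federation.router.default.nameserviceId", ""); // assumed key
    return ns.isEmpty() ? null : ns;
  }
}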
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestInitializeMountTableResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestInitializeMountTableResolver.java
index 5db7531..8a22ade 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestInitializeMountTableResolver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestInitializeMountTableResolver.java
@@ -23,7 +23,9 @@ import org.junit.Test;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMESERVICE_ID;
 import static 
org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_NAMESERVICES;
 import static 
org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE;
+import static 
org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE_ENABLE;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
 
 /**
  * Test {@link MountTableResolver} initialization.
@@ -43,40 +45,26 @@ public 

[hadoop] 33/41: HDFS-14215. RBF: Remove dependency on availability of default namespace. Contributed by Ayush Saxena.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 71f20661fc51ecae1103a8e9b35b254a672fd419
Author: Inigo Goiri 
AuthorDate: Mon Jan 28 10:04:24 2019 -0800

HDFS-14215. RBF: Remove dependency on availability of default namespace. 
Contributed by Ayush Saxena.
---
 .../federation/router/RouterClientProtocol.java|   3 +-
 .../federation/router/RouterNamenodeProtocol.java  |  20 +---
 .../server/federation/router/RouterRpcServer.java  |  23 +
 .../federation/router/RouterStoragePolicy.java |   7 +-
 .../hdfs/server/federation/MockResolver.java   |  12 +++
 .../server/federation/router/TestRouterRpc.java| 109 ++---
 6 files changed, 139 insertions(+), 35 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 485c103..f20b4b6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -195,8 +195,7 @@ public class RouterClientProtocol implements ClientProtocol 
{
 rpcServer.checkOperation(NameNode.OperationCategory.READ);
 
 RemoteMethod method = new RemoteMethod("getServerDefaults");
-String ns = subclusterResolver.getDefaultNamespace();
-return (FsServerDefaults) rpcClient.invokeSingle(ns, method);
+return rpcServer.invokeAtAvailableNs(method, FsServerDefaults.class);
   }
 
   @Override
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterNamenodeProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterNamenodeProtocol.java
index bf0db6e..c6b0209 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterNamenodeProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterNamenodeProtocol.java
@@ -24,7 +24,6 @@ import java.util.Map.Entry;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
 import org.apache.hadoop.hdfs.security.token.block.ExportedBlockKeys;
-import 
org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
 import org.apache.hadoop.hdfs.server.namenode.CheckpointSignature;
 import org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory;
 import org.apache.hadoop.hdfs.server.protocol.BlocksWithLocations;
@@ -45,14 +44,11 @@ public class RouterNamenodeProtocol implements 
NamenodeProtocol {
   private final RouterRpcServer rpcServer;
   /** RPC clients to connect to the Namenodes. */
   private final RouterRpcClient rpcClient;
-  /** Interface to map global name space to HDFS subcluster name spaces. */
-  private final FileSubclusterResolver subclusterResolver;
 
 
   public RouterNamenodeProtocol(RouterRpcServer server) {
 this.rpcServer = server;
 this.rpcClient =  this.rpcServer.getRPCClient();
-this.subclusterResolver = this.rpcServer.getSubclusterResolver();
   }
 
   @Override
@@ -94,33 +90,27 @@ public class RouterNamenodeProtocol implements 
NamenodeProtocol {
   public ExportedBlockKeys getBlockKeys() throws IOException {
 rpcServer.checkOperation(OperationCategory.READ);
 
-// We return the information from the default name space
-String defaultNsId = subclusterResolver.getDefaultNamespace();
 RemoteMethod method =
 new RemoteMethod(NamenodeProtocol.class, "getBlockKeys");
-return rpcClient.invokeSingle(defaultNsId, method, 
ExportedBlockKeys.class);
+return rpcServer.invokeAtAvailableNs(method, ExportedBlockKeys.class);
   }
 
   @Override
   public long getTransactionID() throws IOException {
 rpcServer.checkOperation(OperationCategory.READ);
 
-// We return the information from the default name space
-String defaultNsId = subclusterResolver.getDefaultNamespace();
 RemoteMethod method =
 new RemoteMethod(NamenodeProtocol.class, "getTransactionID");
-return rpcClient.invokeSingle(defaultNsId, method, long.class);
+return rpcServer.invokeAtAvailableNs(method, long.class);
   }
 
   @Override
   public long getMostRecentCheckpointTxId() throws IOException {
 rpcServer.checkOperation(OperationCategory.READ);
 
-// We return the information from the default name space
-String defaultNsId = subclusterResolver.getDefaultNamespace();
 RemoteMethod method =
 new RemoteMethod(NamenodeProtocol.class, 
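Instead of pinning these NamenodeProtocol calls to the default namespace, the
router now asks any available one. A minimal sketch of the try-in-order idea
behind invokeAtAvailableNs; the functional shape is illustrative, not the
RouterRpcServer signature.

import java.io.IOException;
import java.util.List;

// Sketch: try each nameservice in order, return the first success.
final class AvailableNsInvoker {
  interface NsCall<T> { T call(String nsId) throws IOException; }

  static <T> T invokeAtAvailableNs(List<String> nsIds, NsCall<T> call)
      throws IOException {
    IOException last = null;
    for (String nsId : nsIds) {
      try {
        return call.call(nsId);
      } catch (IOException ioe) {
        last = ioe; // remember the failure, move to the next namespace
      }
    }
    throw last != null ? last : new IOException("No namespace available");
  }
}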

[hadoop] 27/41: HDFS-14129. addendum to HDFS-14129. Contributed by Ranith Sardar.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit d747fb10b11e787d72bac313919b43e7b4e3d241
Author: Surendra Singh Lilhore 
AuthorDate: Wed Jan 16 11:42:17 2019 +0530

HDFS-14129. addendum to HDFS-14129. Contributed by Ranith Sardar.
---
 .../hdfs/protocolPB/RouterAdminProtocol.java   |  34 +++
 .../hdfs/protocolPB/RouterPolicyProvider.java  |  52 ++
 .../router/TestRouterPolicyProvider.java   | 108 +
 3 files changed, 194 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocol.java
new file mode 100644
index 000..d885989
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocol.java
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.protocolPB;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
+import org.apache.hadoop.hdfs.server.federation.router.NameserviceManager;
+import org.apache.hadoop.hdfs.server.federation.router.RouterStateManager;
+import org.apache.hadoop.ipc.GenericRefreshProtocol;
+
+/**
+ * Protocol used by routeradmin to communicate with statestore.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Stable
+public interface RouterAdminProtocol extends MountTableManager,
+RouterStateManager, NameserviceManager, GenericRefreshProtocol {
+}
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterPolicyProvider.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterPolicyProvider.java
new file mode 100644
index 000..af391ff
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterPolicyProvider.java
@@ -0,0 +1,52 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.protocolPB;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.hdfs.HDFSPolicyProvider;
+import org.apache.hadoop.security.authorize.Service;
+
+/**
+ * {@link HDFSPolicyProvider} for RBF protocols.
+ */
+@InterfaceAudience.Private
+public class RouterPolicyProvider extends HDFSPolicyProvider {
+
+  private static final Service[] RBF_SERVICES = new Service[] {
+  new Service(CommonConfigurationKeys.SECURITY_ROUTER_ADMIN_PROTOCOL_ACL,
+  RouterAdminProtocol.class) };
+
+  private final Service[] services;
+
+  public RouterPolicyProvider() {
+List<Service> list = new ArrayList<>();
+list.addAll(Arrays.asList(super.getServices()));
+list.addAll(Arrays.asList(RBF_SERVICES));
+services = list.toArray(new Service[list.size()]);
+  }
+
+  @Override
+  public Service[] getServices() {
+return Arrays.copyOf(services, services.length);
+  }
+}
\ No newline at end of file
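The provider merges the base HDFS service ACLs with the router-admin entry so
service-level authorization can cover the admin protocol. One plausible wiring
is sketched below; Server#refreshServiceAcl is the generic ipc hook, but the
exact call site in RouterAdminServer is an assumption here.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.hdfs.protocolPB.RouterPolicyProvider;
import org.apache.hadoop.ipc.RPC;

// Sketch: enable service-level authorization with the merged provider.
// The exact call site in RouterAdminServer may differ.
final class AclWiring {
  static void enable(Configuration conf, RPC.Server adminServer) {
    if (conf.getBoolean(
        CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, false)) {
      adminServer.refreshServiceAcl(conf, new RouterPolicyProvider());
    }
  }
}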
diff --git 

[hadoop] 16/41: HDFS-14152. RBF: Fix a typo in RouterAdmin usage. Contributed by Ayush Saxena.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 5640958138e1d5badf754dcb1c35ffd6ac43ae20
Author: Takanobu Asanuma 
AuthorDate: Sun Dec 16 00:40:51 2018 +0900

HDFS-14152. RBF: Fix a typo in RouterAdmin usage. Contributed by Ayush 
Saxena.
---
 .../main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java  | 2 +-
 .../apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
index 4a9cc7a..bdaabe8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
@@ -109,7 +109,7 @@ public class RouterAdmin extends Configured implements Tool 
{
   {"-add", "-update", "-rm", "-ls", "-setQuota", "-clrQuota",
   "-safemode", "-nameservice", "-getDisabledNameservices"};
   StringBuilder usage = new StringBuilder();
-  usage.append("Usage: hdfs routeradmin :\n");
+  usage.append("Usage: hdfs dfsrouteradmin :\n");
   for (int i = 0; i < commands.length; i++) {
 usage.append(getUsage(commands[i]));
 if (i + 1 < commands.length) {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
index 6642942..d0e3e50 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
@@ -549,7 +549,7 @@ public class TestRouterAdminCLI {
 
 argv = new String[] {"-Random"};
 assertEquals(-1, ToolRunner.run(admin, argv));
-String expected = "Usage: hdfs routeradmin :\n"
+String expected = "Usage: hdfs dfsrouteradmin :\n"
 + "\t[-add"
 + "[-readonly] [-order HASH|LOCAL|RANDOM|HASH_ALL] "
 + "-owner  -group  -mode ]\n"





[hadoop] 24/41: HDFS-13856. RBF: RouterAdmin should support dfsrouteradmin -refreshRouterArgs command. Contributed by yanghuafeng.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit a73cfffa8eaf2e1a8418a1f2efed9b7d6ce5f59c
Author: Inigo Goiri 
AuthorDate: Fri Jan 11 10:11:18 2019 -0800

HDFS-13856. RBF: RouterAdmin should support dfsrouteradmin 
-refreshRouterArgs command. Contributed by yanghuafeng.
---
 .../federation/router/RouterAdminServer.java   |  26 ++-
 .../hadoop/hdfs/tools/federation/RouterAdmin.java  |  72 ++
 .../src/site/markdown/HDFSRouterFederation.md  |   6 +
 .../router/TestRouterAdminGenericRefresh.java  | 252 +
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md  |   2 +
 5 files changed, 357 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
index 18c19e0..027dd11 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
@@ -23,12 +23,14 @@ import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY;
 
 import java.io.IOException;
 import java.net.InetSocketAddress;
+import java.util.Collection;
 import java.util.Set;
 
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.HDFSPolicyProvider;
+import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import 
org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos.RouterAdminProtocolService;
@@ -64,9 +66,15 @@ import 
org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableE
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryResponse;
 import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
+import org.apache.hadoop.ipc.GenericRefreshProtocol;
 import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.ipc.RPC.Server;
+import org.apache.hadoop.ipc.RefreshRegistry;
+import org.apache.hadoop.ipc.RefreshResponse;
+import org.apache.hadoop.ipc.proto.GenericRefreshProtocolProtos;
+import org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolPB;
+import 
org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolServerSideTranslatorPB;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.service.AbstractService;
@@ -81,7 +89,8 @@ import com.google.protobuf.BlockingService;
  * router. It is created, started, and stopped by {@link Router}.
  */
 public class RouterAdminServer extends AbstractService
-implements MountTableManager, RouterStateManager, NameserviceManager {
+implements MountTableManager, RouterStateManager, NameserviceManager,
+GenericRefreshProtocol {
 
   private static final Logger LOG =
   LoggerFactory.getLogger(RouterAdminServer.class);
@@ -160,6 +169,15 @@ public class RouterAdminServer extends AbstractService
 router.setAdminServerAddress(this.adminAddress);
 iStateStoreCache =
 router.getSubclusterResolver() instanceof StateStoreCache;
+
+GenericRefreshProtocolServerSideTranslatorPB genericRefreshXlator =
+new GenericRefreshProtocolServerSideTranslatorPB(this);
+BlockingService genericRefreshService =
+GenericRefreshProtocolProtos.GenericRefreshProtocolService.
+newReflectiveBlockingService(genericRefreshXlator);
+
+DFSUtil.addPBProtocol(conf, GenericRefreshProtocolPB.class,
+genericRefreshService, adminServer);
   }
 
   /**
@@ -487,4 +505,10 @@ public class RouterAdminServer extends AbstractService
   public static String getSuperGroup(){
 return superGroup;
   }
+
+  @Override // GenericRefreshProtocol
+  public Collection<RefreshResponse> refresh(String identifier, String[] args) {
+// Let the registry handle as needed
+return RefreshRegistry.defaultRegistry().dispatch(identifier, args);
+  }
 }
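Dispatching to RefreshRegistry means any router component can register a
handler under an identifier and be reached by dfsrouteradmin
-refreshRouterArgs. A sketch of such a registration; RefreshHandler and
RefreshRegistry are the real ipc types, while the identifier and behavior are
illustrative.

import org.apache.hadoop.ipc.RefreshHandler;
import org.apache.hadoop.ipc.RefreshRegistry;
import org.apache.hadoop.ipc.RefreshResponse;

// Sketch: a component registering for generic refresh.
final class MyRefreshable implements RefreshHandler {
  void register() {
    // "RefreshFakeArgs" is an illustrative identifier.
    RefreshRegistry.defaultRegistry().register("RefreshFakeArgs", this);
  }

  @Override
  public RefreshResponse handleRefresh(String identifier, String[] args) {
    // Re-read whatever state this identifier covers, then report success.
    return RefreshResponse.successResponse();
  }
}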
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
index 27c42cd..37aad88 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
@@ -19,6 

[hadoop] 21/41: HDFS-14161. RBF: Throw StandbyException instead of IOException so that client can retry when can not get connection. Contributed by Fei Hui.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 6e770ff428666f5bcd7dd25f2672558bf6b65426
Author: Inigo Goiri 
AuthorDate: Wed Jan 2 10:49:00 2019 -0800

HDFS-14161. RBF: Throw StandbyException instead of IOException so that 
client can retry when can not get connection. Contributed by Fei Hui.
---
 .../federation/router/ConnectionNullException.java | 33 ++
 .../server/federation/router/RouterRpcClient.java  | 20 ---
 .../server/federation/FederationTestUtils.java | 31 +
 .../router/TestRouterClientRejectOverload.java | 40 ++
 4 files changed, 120 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionNullException.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionNullException.java
new file mode 100644
index 000..53de602
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionNullException.java
@@ -0,0 +1,33 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import java.io.IOException;
+
+
+/**
+ * Exception thrown when a non-null connection cannot be obtained.
+ */
+public class ConnectionNullException extends IOException {
+
+  private static final long serialVersionUID = 1L;
+
+  public ConnectionNullException(String msg) {
+super(msg);
+  }
+}
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
index a21e980..c4d3a20 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
@@ -270,7 +270,8 @@ public class RouterRpcClient {
 }
 
 if (connection == null) {
-  throw new IOException("Cannot get a connection to " + rpcAddress);
+  throw new ConnectionNullException("Cannot get a connection to "
+  + rpcAddress);
 }
 return connection;
   }
@@ -363,9 +364,9 @@ public class RouterRpcClient {
Map<FederationNamenodeContext, IOException> ioes = new LinkedHashMap<>();
 for (FederationNamenodeContext namenode : namenodes) {
   ConnectionContext connection = null;
+  String nsId = namenode.getNameserviceId();
+  String rpcAddress = namenode.getRpcAddress();
   try {
-String nsId = namenode.getNameserviceId();
-String rpcAddress = namenode.getRpcAddress();
 connection = this.getConnection(ugi, nsId, rpcAddress, protocol);
ProxyAndInfo<?> client = connection.getClient();
 final Object proxy = client.getProxy();
@@ -394,6 +395,16 @@ public class RouterRpcClient {
   }
   // RemoteException returned by NN
   throw (RemoteException) ioe;
+} else if (ioe instanceof ConnectionNullException) {
+  if (this.rpcMonitor != null) {
+this.rpcMonitor.proxyOpFailureCommunicate();
+  }
+  LOG.error("Get connection for {} {} error: {}", nsId, rpcAddress,
+  ioe.getMessage());
+  // Throw StandbyException so that client can retry
+  StandbyException se = new StandbyException(ioe.getMessage());
+  se.initCause(ioe);
+  throw se;
 } else {
   // Other communication error, this is a failure
   // Communication retries are handled by the retry policy
@@ -425,7 +436,8 @@ public class RouterRpcClient {
   String addr = namenode.getRpcAddress();
   IOException ioe = entry.getValue();
   if (ioe instanceof StandbyException) {
-LOG.error("{} {} at {} is in Standby", nsId, nnId, addr);
+LOG.error("{} {} at {} is in Standby: {}", nsId, nnId, addr,
+ioe.getMessage());
 

[hadoop] 08/41: HDFS-13834. RBF: Connection creator thread should catch Throwable. Contributed by CR Hota.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 8fe8161805843f6e6de13343d41b404d34217657
Author: Inigo Goiri 
AuthorDate: Wed Nov 14 18:35:12 2018 +0530

HDFS-13834. RBF: Connection creator thread should catch Throwable. 
Contributed by CR Hota.
---
 .../federation/router/ConnectionManager.java   |  4 +-
 .../federation/router/TestConnectionManager.java   | 43 ++
 2 files changed, 46 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
index 9fb83e4..fa2bf94 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ConnectionManager.java
@@ -393,7 +393,7 @@ public class ConnectionManager {
   /**
* Thread that creates connections asynchronously.
*/
-  private static class ConnectionCreator extends Thread {
+  static class ConnectionCreator extends Thread {
 /** If the creator is running. */
 private boolean running = true;
 /** Queue to push work to. */
@@ -426,6 +426,8 @@ public class ConnectionManager {
 } catch (InterruptedException e) {
   LOG.error("The connection creator was interrupted");
   this.running = false;
+} catch (Throwable e) {
+  LOG.error("Fatal error caught by connection creator ", e);
 }
   }
 }
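Catching Throwable keeps the creator thread alive when a single pool's
connection setup fails unexpectedly, for example when an unresolvable address
throws an unchecked exception. A minimal standalone sketch of the pattern;
Runnable is a stand-in for the queued pool work.

import java.util.concurrent.BlockingQueue;

// Sketch: one bad task must not kill the long-lived worker thread.
final class ResilientWorker extends Thread {
  private final BlockingQueue<Runnable> queue;
  private volatile boolean running = true;

  ResilientWorker(BlockingQueue<Runnable> queue) { this.queue = queue; }

  @Override
  public void run() {
    while (running) {
      try {
        queue.take().run();
      } catch (InterruptedException e) {
        running = false; // asked to stop
      } catch (Throwable t) {
        // Log and keep serving the queue, as the fix above does.
        System.err.println("Fatal error caught by worker: " + t);
      }
    }
  }
}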
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java
index 0e1eb40..765f6c8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java
@@ -22,12 +22,17 @@ import org.apache.hadoop.hdfs.protocol.ClientProtocol;
 import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
+import org.junit.Rule;
+import org.junit.rules.ExpectedException;
 
 import java.io.IOException;
 import java.util.Map;
+import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.BlockingQueue;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -49,6 +54,7 @@ public class TestConnectionManager {
   private static final UserGroupInformation TEST_USER3 =
   UserGroupInformation.createUserForTesting("user3", TEST_GROUP);
   private static final String TEST_NN_ADDRESS = "nn1:8080";
+  private static final String UNRESOLVED_TEST_NN_ADDRESS = "unknownhost:8080";
 
   @Before
   public void setup() throws Exception {
@@ -59,6 +65,9 @@ public class TestConnectionManager {
 connManager.start();
   }
 
+  @Rule
+  public ExpectedException exceptionRule = ExpectedException.none();
+
   @After
   public void shutdown() {
 if (connManager != null) {
@@ -122,6 +131,40 @@ public class TestConnectionManager {
   }
 
   @Test
+  public void testConnectionCreatorWithException() throws Exception {
+// Create a bad connection pool pointing to unresolvable namenode address.
+ConnectionPool badPool = new ConnectionPool(
+conf, UNRESOLVED_TEST_NN_ADDRESS, TEST_USER1, 0, 10,
+ClientProtocol.class);
+BlockingQueue<ConnectionPool> queue = new ArrayBlockingQueue<>(1);
+queue.add(badPool);
+ConnectionManager.ConnectionCreator connectionCreator =
+new ConnectionManager.ConnectionCreator(queue);
+connectionCreator.setDaemon(true);
+connectionCreator.start();
+// Wait to make sure the async thread is scheduled and picks up the pool.
+GenericTestUtils.waitFor(()->queue.isEmpty(), 50, 5000);
+// At this point connection creation task should be definitely picked up.
+assertTrue(queue.isEmpty());
+// At this point connection thread should still be alive.
+assertTrue(connectionCreator.isAlive());
+// Stop the thread as test is successful at this point
+connectionCreator.interrupt();
+  }
+
+  @Test
+  public void testGetConnectionWithException() throws Exception {
+String exceptionCause = "java.net.UnknownHostException: unknownhost";
+exceptionRule.expect(IllegalArgumentException.class);
+exceptionRule.expectMessage(exceptionCause);

[hadoop] 34/41: HDFS-13404. RBF: TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit ec52346bbe675adab122f7a4d5ace14747f5d32c
Author: Takanobu Asanuma 
AuthorDate: Tue Feb 5 06:06:05 2019 +0900

HDFS-13404. RBF: 
TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails.
---
 .../org/apache/hadoop/fs/contract/AbstractContractAppendTest.java   | 6 ++
 1 file changed, 6 insertions(+)

diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractAppendTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractAppendTest.java
index d61b635..02a8996 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractAppendTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractAppendTest.java
@@ -133,6 +133,12 @@ public abstract class AbstractContractAppendTest extends 
AbstractFSContractTestB
 assertPathExists("original file does not exist", target);
 byte[] dataset = dataset(256, 'a', 'z');
 FSDataOutputStream outputStream = getFileSystem().append(target);
+if (isSupported(CREATE_VISIBILITY_DELAYED)) {
+  // Some filesystems, like WebHDFS, don't assure sequential consistency.
+  // In such a case, a delay is needed. Given that we cannot check the
+  // lease, because it is closed on the client side, simply add a sleep.
+  Thread.sleep(10);
+}
 outputStream.write(dataset);
 Path renamed = new Path(testPath, "renamed");
 rename(target, renamed);





[hadoop] 18/41: HDFS-14151. RBF: Make the read-only column of Mount Table clearly understandable.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 640fe0789f08afb6030c4c8940ae01a2599f22f3
Author: Takanobu Asanuma 
AuthorDate: Tue Dec 18 19:47:36 2018 +0900

HDFS-14151. RBF: Make the read-only column of Mount Table clearly 
understandable.
---
 .../hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html | 2 +-
 .../hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js   | 1 +
 .../hadoop-hdfs-rbf/src/main/webapps/static/rbf.css   | 8 +---
 3 files changed, 3 insertions(+), 8 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
index 068988c..0f089fe 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.html
@@ -408,7 +408,7 @@
   {nameserviceId}
   {path}
   {order}
-  
+  
   {ownerName}
   {groupName}
   {mode}
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js
index 6311a80..bb8e057 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js
@@ -317,6 +317,7 @@
 for (var i = 0, e = mountTable.length; i < e; ++i) {
   if (mountTable[i].readonly == true) {
 mountTable[i].readonly = "true"
+mountTable[i].status = "Read only"
   } else {
 mountTable[i].readonly = "false"
   }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/static/rbf.css 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/static/rbf.css
index 43112af..5cdd826 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/static/rbf.css
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/static/rbf.css
@@ -132,12 +132,6 @@
 }
 
 .mount-table-read-only-true:before {
-color: #c7254e;
-content: "\e033";
-}
-
-.mount-table-read-only-false:before {
 color: #5fa341;
-content: "\e013";
+content: "\e033";
 }
-





[hadoop] 12/41: HDFS-14085. RBF: LS command for root shows wrong owner and permission information. Contributed by Ayush Saxena.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 16b8f759a1eafba5654767b3344e5b1a4740d851
Author: Surendra Singh Lilhore 
AuthorDate: Tue Dec 4 12:23:56 2018 +0530

HDFS-14085. RBF: LS command for root shows wrong owner and permission 
information. Contributed by Ayush Saxena.
---
 .../server/federation/router/FederationUtil.java   |  23 +-
 .../federation/router/RouterClientProtocol.java|  29 +-
 .../federation/router/TestRouterMountTable.java| 307 -
 3 files changed, 278 insertions(+), 81 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/FederationUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/FederationUtil.java
index f8c7a9b..f0d9168 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/FederationUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/FederationUtil.java
@@ -27,6 +27,7 @@ import java.net.URLConnection;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import 
org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import 
org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
 import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
@@ -205,4 +206,24 @@ public final class FederationUtil {
 return path.charAt(parent.length()) == Path.SEPARATOR_CHAR
 || parent.equals(Path.SEPARATOR);
   }
-}
+
+  /**
+   * Add the number of children for an existing HdfsFileStatus object.
+   * @param dirStatus HdfsFileStatus object.
+   * @param children number of children to be added.
+   * @return HdfsFileStatus with the number of children specified.
+   */
+  public static HdfsFileStatus updateMountPointStatus(HdfsFileStatus dirStatus,
+  int children) {
+return new HdfsFileStatus.Builder().atime(dirStatus.getAccessTime())
+.blocksize(dirStatus.getBlockSize()).children(children)
+.ecPolicy(dirStatus.getErasureCodingPolicy())
+.feInfo(dirStatus.getFileEncryptionInfo()).fileId(dirStatus.getFileId())
+.group(dirStatus.getGroup()).isdir(dirStatus.isDir())
+.length(dirStatus.getLen()).mtime(dirStatus.getModificationTime())
+.owner(dirStatus.getOwner()).path(dirStatus.getLocalNameInBytes())
+
.perm(dirStatus.getPermission()).replication(dirStatus.getReplication())
+.storagePolicy(dirStatus.getStoragePolicy())
+.symlink(dirStatus.getSymlinkInBytes()).build();
+  }
+}
\ No newline at end of file
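
As a sketch of how this helper is meant to be used (variable names here are
illustrative, not part of the patch): the router gathers the per-subcluster
statuses, sums their child counts, and rebuilds the mount-point status with
the aggregate:

    // Hypothetical caller; assumes the invokeConcurrent() result map and
    // the updateMountPointStatus() helper introduced in this commit.
    int children = 0;
    for (HdfsFileStatus status : results.values()) {
      children += status.getChildrenNum();  // children seen in each subcluster
    }
    HdfsFileStatus merged = updateMountPointStatus(dirStatus, children);

This mirrors the RouterClientProtocol change below, which accumulates
getChildrenNum() across locations before returning the directory status.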
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 81717ca..2089c57 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hdfs.server.federation.router;
 
+import static 
org.apache.hadoop.hdfs.server.federation.router.FederationUtil.updateMountPointStatus;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.crypto.CryptoProtocolVersion;
 import org.apache.hadoop.fs.BatchedRemoteIterator;
@@ -669,7 +670,6 @@ public class RouterClientProtocol implements ClientProtocol 
{
 if (dates != null && dates.containsKey(child)) {
   date = dates.get(child);
 }
-// TODO add number of children
 HdfsFileStatus dirStatus = getMountPointStatus(child, 0, date);
 
 // This may overwrite existing listing entries with the mount point
@@ -1663,12 +1663,13 @@ public class RouterClientProtocol implements 
ClientProtocol {
 // Get the file info from everybody
 Map<RemoteLocation, HdfsFileStatus> results =
 rpcClient.invokeConcurrent(locations, method, HdfsFileStatus.class);
-
+int children = 0;
 // We return the first file
 HdfsFileStatus dirStatus = null;
 for (RemoteLocation loc : locations) {
   HdfsFileStatus fileStatus = results.get(loc);
   if (fileStatus != null) {
+children += fileStatus.getChildrenNum();
 if (!fileStatus.isDirectory()) {
   return fileStatus;
 } else if (dirStatus == null) {
@@ -1676,7 +1677,10 @@ public class RouterClientProtocol implements 
ClientProtocol {
 }
   }
 }
-return dirStatus;
+if (dirStatus != null) {
+  return 

[hadoop] 36/41: HDFS-14252. RBF : Exceptions are exposing the actual sub cluster path. Contributed by Ayush Saxena.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit d815bc7ce05834c398f12527f1cb5f4d4113f8da
Author: Giovanni Matteo Fumarola 
AuthorDate: Tue Feb 5 10:40:28 2019 -0800

HDFS-14252. RBF : Exceptions are exposing the actual sub cluster path. 
Contributed by Ayush Saxena.
---
 .../server/federation/router/RouterRpcClient.java  | 13 ---
 .../federation/router/TestRouterMountTable.java| 41 ++
 2 files changed, 36 insertions(+), 18 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
index 0b15333..f5985ee 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
@@ -1042,10 +1042,15 @@ public class RouterRpcClient {
   String ns = location.getNameserviceId();
   final List<? extends FederationNamenodeContext> namenodes =
   getNamenodesForNameservice(ns);
-  Class<?> proto = method.getProtocol();
-  Object[] paramList = method.getParams(location);
-  Object result = invokeMethod(ugi, namenodes, proto, m, paramList);
-  return Collections.singletonMap(location, (R) result);
+  try {
+Class<?> proto = method.getProtocol();
+Object[] paramList = method.getParams(location);
+Object result = invokeMethod(ugi, namenodes, proto, m, paramList);
+return Collections.singletonMap(location, (R) result);
+  } catch (IOException ioe) {
+// Localize the exception
+throw processException(ioe, location);
+  }
 }
 
 List<RemoteLocation> orderedLocations = new LinkedList<>();
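
The try/catch above exists so the path of the physical subcluster never leaks
into client-visible exceptions. processException() itself is not shown in
this hunk; under the assumption that it rewrites the resolved destination
back to the mount path, its core would look roughly like this sketch:

    // Hedged sketch only: map the subcluster destination path in the
    // exception message back to the path the client actually used.
    private static IOException localize(IOException ioe, RemoteLocation loc) {
      String msg = ioe.getMessage();
      if (msg != null && msg.contains(loc.getDest())) {
        // e.g. /ns0/real/dir -> /mount/dir
        return new IOException(msg.replace(loc.getDest(), loc.getSrc()), ioe);
      }
      return ioe;
    }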
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
index 9538d71..4f6f702 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java
@@ -21,6 +21,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
+import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.util.Collections;
 import java.util.HashMap;
@@ -43,12 +44,14 @@ import 
org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
 import org.apache.hadoop.hdfs.server.federation.StateStoreDFSCluster;
 import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
 import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.order.DestinationOrder;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
 import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
+import org.apache.hadoop.test.LambdaTestUtils;
 import org.apache.hadoop.util.Time;
 import org.junit.After;
 import org.junit.AfterClass;
@@ -69,6 +72,7 @@ public class TestRouterMountTable {
   private static long startTime;
   private static FileSystem nnFs0;
   private static FileSystem nnFs1;
+  private static FileSystem routerFs;
 
   @BeforeClass
   public static void globalSetUp() throws Exception {
@@ -92,6 +96,7 @@ public class TestRouterMountTable {
 nnFs0 = nnContext0.getFileSystem();
 nnFs1 = nnContext1.getFileSystem();
 routerContext = cluster.getRandomRouter();
+routerFs = routerContext.getFileSystem();
 Router router = routerContext.getRouter();
 routerProtocol = routerContext.getClient().getNamenode();
 mountTable = (MountTableResolver) router.getSubclusterResolver();
@@ -136,7 +141,6 @@ public class TestRouterMountTable {
 assertTrue(addMountTable(regularEntry));
 
 // Create a folder which should show in all locations
-final FileSystem routerFs = routerContext.getFileSystem();
 assertTrue(routerFs.mkdirs(new Path("/regular/newdir")));
 
 FileStatus dirStatusNn =
@@ -261,7 +265,7 @@ public class TestRouterMountTable {
 

[hadoop] 39/41: HDFS-14226. RBF: Setting attributes should set on all subclusters' directories. Contributed by Ayush Saxena.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 1645df92999a2940c19cde4cdddfb12c93cf9e84
Author: Inigo Goiri 
AuthorDate: Fri Feb 15 09:25:09 2019 -0800

HDFS-14226. RBF: Setting attributes should set on all subclusters' 
directories. Contributed by Ayush Saxena.
---
 .../server/federation/router/ErasureCoding.java|  12 +-
 .../federation/router/RouterClientProtocol.java|  55 ++-
 .../server/federation/router/RouterRpcServer.java  |  46 ++-
 .../federation/router/RouterStoragePolicy.java |  12 +-
 ...erRPCMultipleDestinationMountTableResolver.java | 394 +
 5 files changed, 482 insertions(+), 37 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ErasureCoding.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ErasureCoding.java
index 480b232..f4584b1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ErasureCoding.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ErasureCoding.java
@@ -157,7 +157,11 @@ public class ErasureCoding {
 RemoteMethod remoteMethod = new RemoteMethod("setErasureCodingPolicy",
 new Class<?>[] {String.class, String.class},
 new RemoteParam(), ecPolicyName);
-rpcClient.invokeSequential(locations, remoteMethod, null, null);
+if (rpcServer.isInvokeConcurrent(src)) {
+  rpcClient.invokeConcurrent(locations, remoteMethod);
+} else {
+  rpcClient.invokeSequential(locations, remoteMethod);
+}
   }
 
   public void unsetErasureCodingPolicy(String src) throws IOException {
@@ -167,7 +171,11 @@ public class ErasureCoding {
 rpcServer.getLocationsForPath(src, true);
 RemoteMethod remoteMethod = new RemoteMethod("unsetErasureCodingPolicy",
 new Class<?>[] {String.class}, new RemoteParam());
-rpcClient.invokeSequential(locations, remoteMethod, null, null);
+if (rpcServer.isInvokeConcurrent(src)) {
+  rpcClient.invokeConcurrent(locations, remoteMethod);
+} else {
+  rpcClient.invokeSequential(locations, remoteMethod);
+}
   }
 
   public ECBlockGroupStats getECBlockGroupStats() throws IOException {
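
Both methods now follow the same dispatch pattern, which is the heart of this
change: if the path resolves to multiple destinations (a multi-destination
mount point), the mutation is fanned out to every subcluster; otherwise the
locations are tried in order. Reduced to a skeleton (a sketch built only from
the APIs visible in this patch):

    // Dispatch pattern used throughout this commit.
    List<RemoteLocation> locations = rpcServer.getLocationsForPath(src, true);
    RemoteMethod method = new RemoteMethod("unsetErasureCodingPolicy",
        new Class<?>[] {String.class}, new RemoteParam());
    if (rpcServer.isInvokeConcurrent(src)) {
      rpcClient.invokeConcurrent(locations, method);   // all subclusters
    } else {
      rpcClient.invokeSequential(locations, method);   // one location at a time
    }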
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 5383a7d..6cc12ca 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -213,7 +213,7 @@ public class RouterClientProtocol implements ClientProtocol 
{
   throws IOException {
 rpcServer.checkOperation(NameNode.OperationCategory.WRITE);
 
-if (createParent && isPathAll(src)) {
+if (createParent && rpcServer.isPathAll(src)) {
   int index = src.lastIndexOf(Path.SEPARATOR);
   String parent = src.substring(0, index);
   LOG.debug("Creating {} requires creating parent {}", src, parent);
@@ -273,9 +273,13 @@ public class RouterClientProtocol implements 
ClientProtocol {
 RemoteMethod method = new RemoteMethod("setReplication",
 new Class<?>[] {String.class, short.class}, new RemoteParam(),
 replication);
-Object result = rpcClient.invokeSequential(
-locations, method, Boolean.class, Boolean.TRUE);
-return (boolean) result;
+if (rpcServer.isInvokeConcurrent(src)) {
+  return !rpcClient.invokeConcurrent(locations, method, Boolean.class)
+  .containsValue(false);
+} else {
+  return rpcClient.invokeSequential(locations, method, Boolean.class,
+  Boolean.TRUE);
+}
   }
 
   @Override
@@ -299,7 +303,7 @@ public class RouterClientProtocol implements ClientProtocol 
{
 RemoteMethod method = new RemoteMethod("setPermission",
 new Class<?>[] {String.class, FsPermission.class},
 new RemoteParam(), permissions);
-if (isPathAll(src)) {
+if (rpcServer.isInvokeConcurrent(src)) {
   rpcClient.invokeConcurrent(locations, method);
 } else {
   rpcClient.invokeSequential(locations, method);
@@ -316,7 +320,7 @@ public class RouterClientProtocol implements ClientProtocol 
{
 RemoteMethod method = new RemoteMethod("setOwner",
 new Class<?>[] {String.class, String.class, String.class},
 new RemoteParam(), username, groupname);
-if (isPathAll(src)) {
+if (rpcServer.isInvokeConcurrent(src)) {
   rpcClient.invokeConcurrent(locations, method);
 } else {
   

[hadoop] 23/41: HDFS-14191. RBF: Remove hard coded router status from FederationMetrics. Contributed by Ranith Sardar.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit c30d4d9f915b6aebf609bf42121586e73f34c4e5
Author: Surendra Singh Lilhore 
AuthorDate: Thu Jan 10 16:18:23 2019 +0530

HDFS-14191. RBF: Remove hard coded router status from FederationMetrics. 
Contributed by Ranith Sardar.
---
 .../federation/metrics/FederationMetrics.java  |  2 +-
 .../federation/metrics/NamenodeBeanMetrics.java| 25 +++-
 .../hdfs/server/federation/router/Router.java  |  7 +
 .../src/main/webapps/router/federationhealth.js|  2 +-
 .../federation/router/TestRouterAdminCLI.java  | 33 +-
 5 files changed, 65 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
index b3fe6cc..c66910c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
@@ -582,7 +582,7 @@ public class FederationMetrics implements FederationMBean {
 
   @Override
   public String getRouterStatus() {
-return "RUNNING";
+return this.router.getRouterState().toString();
   }
 
   /**
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
index 5e95606..963c6c2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
@@ -45,6 +45,7 @@ import 
org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo
 import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
 import org.apache.hadoop.hdfs.server.federation.router.Router;
 import org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer;
+import org.apache.hadoop.hdfs.server.federation.router.RouterServiceState;
 import 
org.apache.hadoop.hdfs.server.federation.router.SubClusterTimeoutException;
 import org.apache.hadoop.hdfs.server.federation.store.MembershipStore;
 import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
@@ -232,7 +233,29 @@ public class NamenodeBeanMetrics
 
   @Override
   public String getSafemode() {
-// We assume that the global federated view is never in safe mode
+try {
+  if (!getRouter().isRouterState(RouterServiceState.RUNNING)) {
+return "Safe mode is ON. " + this.getSafeModeTip();
+  }
+} catch (IOException e) {
+  return "Failed to get safemode status. Please check router"
+  + " log for more detail.";
+}
+return "";
+  }
+
+  private String getSafeModeTip() throws IOException {
+Router rt = getRouter();
+String cmd = "Use \"hdfs dfsrouteradmin -safemode leave\" "
++ "to turn safe mode off.";
+if (rt.isRouterState(RouterServiceState.INITIALIZING)
+|| rt.isRouterState(RouterServiceState.UNINITIALIZED)) {
+  return "Router is in " + rt.getRouterState()
+  + " mode, the router will immediately return to "
+  + "normal mode after some time. " + cmd;
+} else if (rt.isRouterState(RouterServiceState.SAFEMODE)) {
+  return "It was turned on manually. " + cmd;
+}
 return "";
   }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
index 6a7437f..0257162 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
@@ -585,6 +585,13 @@ public class Router extends CompositeService {
 return this.state;
   }
 
+  /**
+   * Compare router state.
+   */
+  public boolean isRouterState(RouterServiceState routerState) {
+return routerState.equals(this.state);
+  }
+
   /////////////////////////////////////////////////////////
   // Submodule getters
   /////////////////////////////////////////////////////////
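
isRouterState() gives callers a simple equality check against the current
service state. An illustrative guard (not from this patch) built on top of it:

    // Illustrative only; Router#isRouterState is added by this commit.
    if (router.isRouterState(RouterServiceState.SAFEMODE)) {
      // Reject mutations until the router leaves safe mode.
      throw new StandbyException("Router is in safe mode");
    }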
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/federationhealth.js
index bb8e057..5da7b07 100644
--- 

[hadoop] 26/41: HDFS-14129. RBF: Create new policy provider for router. Contributed by Ranith Sardar.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit b240f39d78ce563c975038c8274a69f3187c6d83
Author: Surendra Singh Lilhore 
AuthorDate: Tue Jan 15 16:40:39 2019 +0530

HDFS-14129. RBF: Create new policy provider for router. Contributed by 
Ranith Sardar.
---
 .../hadoop-common/src/main/conf/hadoop-policy.xml  | 10 ++
 .../java/org/apache/hadoop/fs/CommonConfigurationKeys.java |  2 ++
 .../java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java|  5 +
 .../apache/hadoop/hdfs/protocolPB/RouterAdminProtocolPB.java   |  6 +++---
 .../hdfs/server/federation/router/RouterAdminServer.java   | 10 --
 .../hadoop/hdfs/server/federation/router/RouterRpcServer.java  |  4 ++--
 .../apache/hadoop/fs/contract/router/RouterHDFSContract.java   |  4 
 7 files changed, 30 insertions(+), 11 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml 
b/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml
index bd7c111..e1640f9 100644
--- a/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml
+++ b/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml
@@ -110,6 +110,16 @@
   </property>
 
   <property>
+    <name>security.router.admin.protocol.acl</name>
+    <value>*</value>
+    <description>ACL for RouterAdmin Protocol. The ACL is a comma-separated
+    list of user and group names. The user and
+    group list is separated by a blank. For e.g. "alice,bob users,wheel".
+    A special value of "*" means all users are allowed.
+    </description>
+  </property>
+
+  <property>
     <name>security.zkfc.protocol.acl</name>
     <value>*</value>
     <description>ACL for access to the ZK Failover Controller
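
The ACL string documented above is the standard hadoop-common format, parsed
by AccessControlList. A small sketch of how the documented values behave (the
property name is from this patch; the API is existing hadoop-common):

    import org.apache.hadoop.security.authorize.AccessControlList;

    // "alice,bob users,wheel" -> users {alice, bob} plus groups {users, wheel}
    AccessControlList acl = new AccessControlList("alice,bob users,wheel");
    // The default "*" allows every caller:
    AccessControlList open = new AccessControlList("*");
    assert open.isAllAllowed();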
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
index 72e5309..8204c0d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
@@ -216,6 +216,8 @@ public class CommonConfigurationKeys extends 
CommonConfigurationKeysPublic {
   SECURITY_CLIENT_PROTOCOL_ACL = "security.client.protocol.acl";
   public static final String SECURITY_CLIENT_DATANODE_PROTOCOL_ACL =
   "security.client.datanode.protocol.acl";
+  public static final String SECURITY_ROUTER_ADMIN_PROTOCOL_ACL =
+  "security.router.admin.protocol.acl";
   public static final String
   SECURITY_DATANODE_PROTOCOL_ACL = "security.datanode.protocol.acl";
   public static final String
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index 6de186a..c449a2e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -92,6 +92,11 @@ public final class HdfsConstants {
*/
   public static final String CLIENT_NAMENODE_PROTOCOL_NAME =
   "org.apache.hadoop.hdfs.protocol.ClientProtocol";
+  /**
+   * Router admin Protocol Names.
+   */
+  public static final String ROUTER_ADMIN_PROTOCOL_NAME =
+  "org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol";
 
   // Timeouts for communicating with DataNode for streaming writes/reads
   public static final int READ_TIMEOUT = 60 * 1000;
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolPB.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolPB.java
index 96fa794..d308616 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolPB.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolPB.java
@@ -19,10 +19,10 @@ package org.apache.hadoop.hdfs.protocolPB;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import 
org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos.RouterAdminProtocolService;
 import 
org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector;
+import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
 import org.apache.hadoop.ipc.ProtocolInfo;
 import org.apache.hadoop.security.KerberosInfo;
 import org.apache.hadoop.security.token.TokenInfo;
@@ -35,9 +35,9 @@ import org.apache.hadoop.security.token.TokenInfo;
 @InterfaceAudience.Private
 @InterfaceStability.Stable
 

[hadoop] 04/41: HDFS-14024. RBF: ProvidedCapacityTotal json exception in NamenodeHeartbeatService. Contributed by CR Hota.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit dde38f7d93badf9fceeba3f07b4a792c07a6ca52
Author: Inigo Goiri 
AuthorDate: Thu Nov 1 11:49:33 2018 -0700

HDFS-14024. RBF: ProvidedCapacityTotal json exception in 
NamenodeHeartbeatService. Contributed by CR Hota.
---
 .../hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
index a1adf77..1349aa3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
@@ -351,7 +351,7 @@ public class NamenodeHeartbeatService extends 
PeriodicService {
 jsonObject.getLong("PendingReplicationBlocks"),
 jsonObject.getLong("UnderReplicatedBlocks"),
 jsonObject.getLong("PendingDeletionBlocks"),
-jsonObject.getLong("ProvidedCapacityTotal"));
+jsonObject.optLong("ProvidedCapacityTotal"));
   }
 }
   }
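
The one-character change matters because getLong() throws a JSONException
when the key is missing, which happens when the Router monitors an older
NameNode whose JMX bean does not yet report ProvidedCapacityTotal; optLong()
returns 0 instead. A minimal sketch, assuming the Jettison JSONObject used by
this service:

    import org.codehaus.jettison.json.JSONObject;

    public class OptLongDemo {
      public static void main(String[] args) throws Exception {
        // Older NameNodes omit ProvidedCapacityTotal from their JMX output.
        JSONObject jmx = new JSONObject("{\"CapacityTotal\": 100}");
        System.out.println(jmx.optLong("ProvidedCapacityTotal"));  // prints 0
        jmx.getLong("ProvidedCapacityTotal");  // throws JSONException
      }
    }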


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 19/41: HDFS-13443. RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries. Contributed by Mohammad Arshad.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit c49a422d89dcd3815d9800e1efeb7fdae3269a19
Author: Yiqun Lin 
AuthorDate: Wed Dec 19 11:40:00 2018 +0800

HDFS-13443. RBF: Update mount table cache immediately after changing 
(add/update/remove) mount table entries. Contributed by Mohammad Arshad.
---
 .../RouterAdminProtocolServerSideTranslatorPB.java |  23 ++
 .../RouterAdminProtocolTranslatorPB.java   |  21 ++
 .../federation/resolver/MountTableManager.java |  16 +
 .../router/MountTableRefresherService.java | 289 +++
 .../router/MountTableRefresherThread.java  |  96 +
 .../server/federation/router/RBFConfigKeys.java|  25 ++
 .../hdfs/server/federation/router/Router.java  |  53 ++-
 .../federation/router/RouterAdminServer.java   |  28 +-
 .../federation/router/RouterHeartbeatService.java  |   5 +
 .../server/federation/store/MountTableStore.java   |  24 ++
 .../server/federation/store/StateStoreUtils.java   |  26 ++
 .../federation/store/impl/MountTableStoreImpl.java |  18 +
 .../protocol/RefreshMountTableEntriesRequest.java  |  34 ++
 .../protocol/RefreshMountTableEntriesResponse.java |  44 +++
 .../pb/RefreshMountTableEntriesRequestPBImpl.java  |  67 
 .../pb/RefreshMountTableEntriesResponsePBImpl.java |  74 
 .../federation/store/records/RouterState.java  |   4 +
 .../store/records/impl/pb/RouterStatePBImpl.java   |  10 +
 .../hadoop/hdfs/tools/federation/RouterAdmin.java  |  33 +-
 .../src/main/proto/FederationProtocol.proto|   8 +
 .../src/main/proto/RouterProtocol.proto|   5 +
 .../src/main/resources/hdfs-rbf-default.xml|  34 ++
 .../src/site/markdown/HDFSRouterFederation.md  |   9 +
 .../server/federation/FederationTestUtils.java |  27 ++
 .../server/federation/RouterConfigBuilder.java |  12 +
 .../federation/router/TestRouterAdminCLI.java  |  25 +-
 .../router/TestRouterMountTableCacheRefresh.java   | 396 +
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md  |   2 +
 28 files changed, 1402 insertions(+), 6 deletions(-)
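
The service added here pushes cache invalidations to all routers as soon as a
mount table entry changes, rather than waiting for the periodic State Store
refresh. Based on the request/response types listed above, a manual refresh
through the new admin RPC would look roughly like this (a sketch; the factory
and getter names are assumed from the PB record conventions used in this
module):

    // Hedged sketch: ask a router to reload its mount table now.
    RefreshMountTableEntriesRequest request =
        RefreshMountTableEntriesRequest.newInstance();
    RefreshMountTableEntriesResponse response =
        mountTableManager.refreshMountTableEntries(request);
    System.out.println("Mount table refreshed: " + response.getResult());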

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
index 6341ebd..a31c46d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
@@ -37,6 +37,8 @@ import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProt
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.LeaveSafeModeResponseProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesRequestProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RefreshMountTableEntriesResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.UpdateMountTableEntryRequestProto;
@@ -58,6 +60,8 @@ import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeReques
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeResponse;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeRequest;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.LeaveSafeModeResponse;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesRequest;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.RefreshMountTableEntriesResponse;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryResponse;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
@@ -78,6 +82,8 @@ import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetSafeMo
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetSafeModeResponsePBImpl;
 import 

[hadoop] 09/41: HDFS-14082. RBF: Add option to fail operations when a subcluster is unavailable. Contributed by Inigo Goiri.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 0b67a7ddc84d5757c20046dd4431446a7b671a40
Author: Yiqun Lin 
AuthorDate: Wed Nov 21 10:40:26 2018 +0800

HDFS-14082. RBF: Add option to fail operations when a subcluster is 
unavailable. Contributed by Inigo Goiri.
---
 .../server/federation/router/RBFConfigKeys.java|  4 ++
 .../federation/router/RouterClientProtocol.java| 15 --
 .../server/federation/router/RouterRpcServer.java  |  9 
 .../src/main/resources/hdfs-rbf-default.xml| 10 
 .../router/TestRouterRpcMultiDestination.java  | 59 ++
 5 files changed, 93 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
index dd72e36..10018fe 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
@@ -125,6 +125,10 @@ public class RBFConfigKeys extends 
CommonConfigurationKeysPublic {
   public static final String DFS_ROUTER_CLIENT_REJECT_OVERLOAD =
   FEDERATION_ROUTER_PREFIX + "client.reject.overload";
   public static final boolean DFS_ROUTER_CLIENT_REJECT_OVERLOAD_DEFAULT = 
false;
+  public static final String DFS_ROUTER_ALLOW_PARTIAL_LIST =
+  FEDERATION_ROUTER_PREFIX + "client.allow-partial-listing";
+  public static final boolean DFS_ROUTER_ALLOW_PARTIAL_LIST_DEFAULT = true;
+
 
   // HDFS Router State Store connection
   public static final String FEDERATION_FILE_RESOLVER_CLIENT_CLASS =
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 9e2979b..6c44362 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -112,6 +112,9 @@ public class RouterClientProtocol implements ClientProtocol 
{
   private final FileSubclusterResolver subclusterResolver;
   private final ActiveNamenodeResolver namenodeResolver;
 
+  /** Whether a response from all subclusters is required. */
+  private final boolean allowPartialList;
+
   /** Identifier for the super user. */
   private final String superUser;
   /** Identifier for the super group. */
@@ -125,6 +128,10 @@ public class RouterClientProtocol implements 
ClientProtocol {
 this.subclusterResolver = rpcServer.getSubclusterResolver();
 this.namenodeResolver = rpcServer.getNamenodeResolver();
 
+this.allowPartialList = conf.getBoolean(
+RBFConfigKeys.DFS_ROUTER_ALLOW_PARTIAL_LIST,
+RBFConfigKeys.DFS_ROUTER_ALLOW_PARTIAL_LIST_DEFAULT);
+
 // User and group for reporting
 this.superUser = System.getProperty("user.name");
 this.superGroup = conf.get(
@@ -608,8 +615,8 @@ public class RouterClientProtocol implements ClientProtocol 
{
 new Class<?>[] {String.class, startAfter.getClass(), boolean.class},
 new RemoteParam(), startAfter, needLocation);
 Map<RemoteLocation, DirectoryListing> listings =
-rpcClient.invokeConcurrent(
-locations, method, false, false, DirectoryListing.class);
+rpcClient.invokeConcurrent(locations, method,
+!this.allowPartialList, false, DirectoryListing.class);
 
 Map<String, HdfsFileStatus> nnListing = new TreeMap<>();
 int totalRemainingEntries = 0;
@@ -998,8 +1005,8 @@ public class RouterClientProtocol implements 
ClientProtocol {
   RemoteMethod method = new RemoteMethod("getContentSummary",
   new Class<?>[] {String.class}, new RemoteParam());
   Map<RemoteLocation, ContentSummary> results =
-  rpcClient.invokeConcurrent(
-  locations, method, false, false, ContentSummary.class);
+  rpcClient.invokeConcurrent(locations, method,
+  !this.allowPartialList, false, ContentSummary.class);
   summaries.addAll(results.values());
 } catch (FileNotFoundException e) {
   notFoundException = e;
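
In both call sites the third argument to invokeConcurrent() is whether every
location must respond. With the default allow-partial-listing=true the router
returns whatever the reachable subclusters produced; setting the new key to
false turns an unavailable subcluster into a client-visible failure. The
wiring reduces to this sketch (names taken from the patch itself):

    // requireResponse == !allowPartialList
    boolean allowPartialList = conf.getBoolean(
        RBFConfigKeys.DFS_ROUTER_ALLOW_PARTIAL_LIST,
        RBFConfigKeys.DFS_ROUTER_ALLOW_PARTIAL_LIST_DEFAULT);  // default true
    Map<RemoteLocation, DirectoryListing> listings =
        rpcClient.invokeConcurrent(locations, method,
            !allowPartialList, false, DirectoryListing.class);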
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
index fcb35f4..ad5980b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
+++ 

[hadoop] 37/41: HDFS-14230. RBF: Throw RetriableException instead of IOException when no namenodes available. Contributed by Fei Hui.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit b28580a6251732f8eabb76251e61e8e8902f3d2b
Author: Inigo Goiri 
AuthorDate: Tue Feb 12 10:44:02 2019 -0800

HDFS-14230. RBF: Throw RetriableException instead of IOException when no 
namenodes available. Contributed by Fei Hui.
---
 .../federation/metrics/FederationRPCMBean.java |  2 +
 .../federation/metrics/FederationRPCMetrics.java   | 11 +++
 .../metrics/FederationRPCPerformanceMonitor.java   |  5 ++
 .../router/NoNamenodesAvailableException.java  | 33 +
 .../server/federation/router/RouterRpcClient.java  | 16 +++-
 .../server/federation/router/RouterRpcMonitor.java |  5 ++
 .../server/federation/FederationTestUtils.java | 38 ++
 .../router/TestRouterClientRejectOverload.java | 86 --
 .../router/TestRouterRPCClientRetries.java |  2 +-
 9 files changed, 188 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMBean.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMBean.java
index 973c398..76b3ca6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMBean.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMBean.java
@@ -46,6 +46,8 @@ public interface FederationRPCMBean {
 
   long getProxyOpRetries();
 
+  long getProxyOpNoNamenodes();
+
   long getRouterFailureStateStoreOps();
 
   long getRouterFailureReadOnlyOps();
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMetrics.java
index cce4b86..8e57c6b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCMetrics.java
@@ -60,6 +60,8 @@ public class FederationRPCMetrics implements 
FederationRPCMBean {
   private MutableCounterLong proxyOpNotImplemented;
   @Metric("Number of operation retries")
   private MutableCounterLong proxyOpRetries;
+  @Metric("Number of operations to hit no namenodes available")
+  private MutableCounterLong proxyOpNoNamenodes;
 
   @Metric("Failed requests due to State Store unavailable")
   private MutableCounterLong routerFailureStateStore;
@@ -138,6 +140,15 @@ public class FederationRPCMetrics implements 
FederationRPCMBean {
 return proxyOpRetries.value();
   }
 
+  public void incrProxyOpNoNamenodes() {
+proxyOpNoNamenodes.incr();
+  }
+
+  @Override
+  public long getProxyOpNoNamenodes() {
+return proxyOpNoNamenodes.value();
+  }
+
   public void incrRouterFailureStateStore() {
 routerFailureStateStore.incr();
   }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java
index 15725d1..cbd63de 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationRPCPerformanceMonitor.java
@@ -171,6 +171,11 @@ public class FederationRPCPerformanceMonitor implements 
RouterRpcMonitor {
   }
 
   @Override
+  public void proxyOpNoNamenodes() {
+metrics.incrProxyOpNoNamenodes();
+  }
+
+  @Override
   public void routerFailureStateStore() {
 metrics.incrRouterFailureStateStore();
   }
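
The point of the new exception type is that the RPC layer can translate "no
namenode is currently active" into a retriable condition instead of a hard
failure, so DFSClient backs off and tries again. Conceptually (a sketch; the
actual RouterRpcClient change is truncated below):

    // Hedged sketch of the intended mapping.
    try {
      return invokeMethod(ugi, namenodes, protocol, method, params);
    } catch (NoNamenodesAvailableException e) {
      rpcMonitor.proxyOpNoNamenodes();   // new metric added in this commit
      throw new RetriableException(e);   // org.apache.hadoop.ipc: client retries
    }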
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NoNamenodesAvailableException.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NoNamenodesAvailableException.java
new file mode 100644
index 000..7eabf00
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NoNamenodesAvailableException.java
@@ -0,0 +1,33 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except 

[hadoop] 06/41: HDFS-12284. addendum to HDFS-12284. Contributed by Inigo Goiri.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 30573af0579ac3db9b7332785403c8b980d6d396
Author: Brahma Reddy Battula 
AuthorDate: Wed Nov 7 07:37:02 2018 +0530

HDFS-12284. addendum to HDFS-12284. Contributed by Inigo Goiri.
---
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
index f38205a..014e0d5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml
@@ -36,7 +36,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
   <dependencies>
     <dependency>
       <groupId>org.bouncycastle</groupId>
-      <artifactId>bcprov-jdk16</artifactId>
+      <artifactId>bcprov-jdk15on</artifactId>
       <scope>test</scope>
     </dependency>
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 40/41: HDFS-14268. RBF: Fix the location of the DNs in getDatanodeReport(). Contributed by Inigo Goiri.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 22d23ded7e934faf7d95e34d010753ec94500242
Author: Giovanni Matteo Fumarola 
AuthorDate: Fri Feb 15 10:47:17 2019 -0800

HDFS-14268. RBF: Fix the location of the DNs in getDatanodeReport(). 
Contributed by Inigo Goiri.
---
 .../hadoop/hdfs/protocol/ECBlockGroupStats.java| 71 ++
 .../server/federation/router/ErasureCoding.java| 29 +
 .../server/federation/router/RouterRpcClient.java  | 19 ++
 .../server/federation/router/TestRouterRpc.java| 48 +++
 4 files changed, 114 insertions(+), 53 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ECBlockGroupStats.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ECBlockGroupStats.java
index 3dde604..1ead5c1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ECBlockGroupStats.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ECBlockGroupStats.java
@@ -17,6 +17,10 @@
  */
 package org.apache.hadoop.hdfs.protocol;
 
+import java.util.Collection;
+
+import org.apache.commons.lang3.builder.EqualsBuilder;
+import org.apache.commons.lang3.builder.HashCodeBuilder;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 
@@ -103,4 +107,71 @@ public final class ECBlockGroupStats {
 statsBuilder.append("]");
 return statsBuilder.toString();
   }
+
+  @Override
+  public int hashCode() {
+return new HashCodeBuilder()
+.append(lowRedundancyBlockGroups)
+.append(corruptBlockGroups)
+.append(missingBlockGroups)
+.append(bytesInFutureBlockGroups)
+.append(pendingDeletionBlocks)
+.append(highestPriorityLowRedundancyBlocks)
+.toHashCode();
+  }
+
+  @Override
+  public boolean equals(Object o) {
+if (this == o) {
+  return true;
+}
+if (o == null || getClass() != o.getClass()) {
+  return false;
+}
+ECBlockGroupStats other = (ECBlockGroupStats)o;
+return new EqualsBuilder()
+.append(lowRedundancyBlockGroups, other.lowRedundancyBlockGroups)
+.append(corruptBlockGroups, other.corruptBlockGroups)
+.append(missingBlockGroups, other.missingBlockGroups)
+.append(bytesInFutureBlockGroups, other.bytesInFutureBlockGroups)
+.append(pendingDeletionBlocks, other.pendingDeletionBlocks)
+.append(highestPriorityLowRedundancyBlocks,
+other.highestPriorityLowRedundancyBlocks)
+.isEquals();
+  }
+
+  /**
+   * Merge the multiple ECBlockGroupStats.
+   * @param stats Collection of stats to merge.
+   * @return A new ECBlockGroupStats merging all the input ones
+   */
+  public static ECBlockGroupStats merge(Collection stats) {
+long lowRedundancyBlockGroups = 0;
+long corruptBlockGroups = 0;
+long missingBlockGroups = 0;
+long bytesInFutureBlockGroups = 0;
+long pendingDeletionBlocks = 0;
+long highestPriorityLowRedundancyBlocks = 0;
+boolean hasHighestPriorityLowRedundancyBlocks = false;
+
+for (ECBlockGroupStats stat : stats) {
+  lowRedundancyBlockGroups += stat.getLowRedundancyBlockGroups();
+  corruptBlockGroups += stat.getCorruptBlockGroups();
+  missingBlockGroups += stat.getMissingBlockGroups();
+  bytesInFutureBlockGroups += stat.getBytesInFutureBlockGroups();
+  pendingDeletionBlocks += stat.getPendingDeletionBlocks();
+  if (stat.hasHighestPriorityLowRedundancyBlocks()) {
+hasHighestPriorityLowRedundancyBlocks = true;
+highestPriorityLowRedundancyBlocks +=
+stat.getHighestPriorityLowRedundancyBlocks();
+  }
+}
+if (hasHighestPriorityLowRedundancyBlocks) {
+  return new ECBlockGroupStats(lowRedundancyBlockGroups, 
corruptBlockGroups,
+  missingBlockGroups, bytesInFutureBlockGroups, pendingDeletionBlocks,
+  highestPriorityLowRedundancyBlocks);
+}
+return new ECBlockGroupStats(lowRedundancyBlockGroups, corruptBlockGroups,
+missingBlockGroups, bytesInFutureBlockGroups, pendingDeletionBlocks);
+  }
 }
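
With merge() available, a federated view of the EC block group stats becomes
a one-liner over the per-nameservice results. A sketch following the
invokeConcurrent pattern used elsewhere in this branch:

    // Hedged sketch: collapse per-nameservice stats into a single answer.
    Map<FederationNamespaceInfo, ECBlockGroupStats> allStats =
        rpcClient.invokeConcurrent(nss, method, true, false,
            ECBlockGroupStats.class);
    ECBlockGroupStats merged = ECBlockGroupStats.merge(allStats.values());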
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ErasureCoding.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ErasureCoding.java
index f4584b1..97c5f6a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ErasureCoding.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/ErasureCoding.java
@@ -187,33 +187,6 @@ public class ErasureCoding {
 

[hadoop] branch trunk updated: HDFS-14302. Refactor NameNodeWebHdfsMethods#generateDelegationToken() to allow better extensibility. Contributed by CR Hota.

2019-02-20 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new f5b4e0f  HDFS-14302. Refactor 
NameNodeWebHdfsMethods#generateDelegationToken() to allow better extensibility. 
Contributed by CR Hota.
f5b4e0f is described below

commit f5b4e0f971b138666a1f7015f387ae960f85d589
Author: Inigo Goiri 
AuthorDate: Wed Feb 20 13:55:13 2019 -0800

HDFS-14302. Refactor NameNodeWebHdfsMethods#generateDelegationToken() to 
allow better extensibility. Contributed by CR Hota.
---
 .../server/namenode/web/resources/NamenodeWebHdfsMethods.java  | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
index fe64ad6..0dea48a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
@@ -350,12 +350,18 @@ public class NamenodeWebHdfsMethods {
 cp.cancelDelegationToken(token);
   }
 
-  public Token<? extends TokenIdentifier> generateDelegationToken(
-  final UserGroupInformation ugi,
+  public Credentials createCredentials(final UserGroupInformation ugi,
   final String renewer) throws IOException {
 final NameNode namenode = (NameNode)context.getAttribute("name.node");
 final Credentials c = DelegationTokenSecretManager.createCredentials(
 namenode, ugi, renewer != null? renewer: ugi.getShortUserName());
+return c;
+  }
+
+  public Token<? extends TokenIdentifier> generateDelegationToken(
+  final UserGroupInformation ugi,
+  final String renewer) throws IOException {
+Credentials c = createCredentials(ugi, renewer);
 if (c == null) {
   return null;
 }


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: HDDS-1053. Generate RaftGroupId from OMServiceID. Contributed by Aravindan Vijayan.

2019-02-20 Thread arp
This is an automated email from the ASF dual-hosted git repository.

arp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 676a9cb  HDDS-1053. Generate RaftGroupId from OMServiceID. Contributed 
by Aravindan Vijayan.
676a9cb is described below

commit 676a9cbbfa80e8eeeda7a272971e1b3354f8
Author: Arpit Agarwal 
AuthorDate: Wed Feb 20 12:57:49 2019 -0800

HDDS-1053. Generate RaftGroupId from OMServiceID. Contributed by Aravindan 
Vijayan.
---
 .../java/org/apache/hadoop/ozone/OzoneConsts.java  |  2 +-
 .../ozone/om/ratis/OzoneManagerRatisServer.java|  8 +++-
 .../om/ratis/TestOzoneManagerRatisServer.java  | 49 ++
 3 files changed, 56 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
index 2931a54..37cfb7f 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
@@ -273,5 +273,5 @@ public final class OzoneConsts {
   Metadata.Key.of(OZONE_USER, ASCII_STRING_MARSHALLER);
 
   // Default OMServiceID for OM Ratis servers to use as RaftGroupId
-  public static final String OM_SERVICE_ID_DEFAULT = "om-service-value";
+  public static final String OM_SERVICE_ID_DEFAULT = "omServiceIdDefault";
 }
diff --git 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
index 2cac258..8baa03b 100644
--- 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
+++ 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
@@ -26,6 +26,7 @@ import java.net.InetSocketAddress;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
+import java.util.UUID;
 import java.util.concurrent.TimeUnit;
 
 import org.apache.hadoop.conf.Configuration;
@@ -48,7 +49,6 @@ import org.apache.ratis.rpc.SupportedRpcType;
 import org.apache.ratis.server.RaftServer;
 import org.apache.ratis.server.RaftServerConfigKeys;
 import org.apache.ratis.statemachine.impl.BaseStateMachine;
-import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
 import org.apache.ratis.util.LifeCycle;
 import org.apache.ratis.util.SizeInBytes;
 import org.apache.ratis.util.TimeDuration;
@@ -91,7 +91,7 @@ public final class OzoneManagerRatisServer {
 
 this.raftPeerId = localRaftPeerId;
 this.raftGroupId = RaftGroupId.valueOf(
-ByteString.copyFromUtf8(raftGroupIdStr));
+getRaftGroupIdFromOmServiceId(raftGroupIdStr));
 this.raftGroup = RaftGroup.valueOf(raftGroupId, raftPeers);
 
 StringBuilder raftPeersStr = new StringBuilder();
@@ -355,4 +355,8 @@ public final class OzoneManagerRatisServer {
 }
 return storageDir;
   }
+
+  private UUID getRaftGroupIdFromOmServiceId(String omServiceId) {
+return UUID.nameUUIDFromBytes(omServiceId.getBytes());
+  }
 }
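
UUID.nameUUIDFromBytes() produces a name-based (RFC 4122 version 3) UUID, so
every OM configured with the same service ID derives the identical
RaftGroupId with no coordination, and the ID is stable across restarts. For
example (note the patch itself calls getBytes() with the platform default
charset):

    import java.nio.charset.StandardCharsets;
    import java.util.UUID;

    public class RaftGroupIdDemo {
      public static void main(String[] args) {
        // Same service id -> same UUID, on any host, every run.
        UUID a = UUID.nameUUIDFromBytes(
            "omServiceIdDefault".getBytes(StandardCharsets.UTF_8));
        UUID b = UUID.nameUUIDFromBytes(
            "omServiceIdDefault".getBytes(StandardCharsets.UTF_8));
        System.out.println(a.equals(b));  // true
      }
    }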
diff --git 
a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerRatisServer.java
 
b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerRatisServer.java
index ffa6680..83d2245 100644
--- 
a/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerRatisServer.java
+++ 
b/hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerRatisServer.java
@@ -38,6 +38,7 @@ import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
 .OMResponse;
 import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
 import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.ratis.protocol.RaftGroupId;
 import org.apache.ratis.util.LifeCycle;
 import org.junit.After;
 import org.junit.Assert;
@@ -152,4 +153,52 @@ public class TestOzoneManagerRatisServer {
   logCapturer.clearOutput();
 }
   }
+
+  @Test
+  public void verifyRaftGroupIdGenerationWithDefaultOmServiceId() throws
+  Exception {
+UUID uuid = UUID.nameUUIDFromBytes(OzoneConsts.OM_SERVICE_ID_DEFAULT
+.getBytes());
+RaftGroupId raftGroupId = omRatisServer.getRaftGroup().getGroupId();
+Assert.assertEquals(uuid, raftGroupId.getUuid());
+Assert.assertEquals(raftGroupId.toByteString().size(), 16);
+  }
+
+  @Test
+  public void verifyRaftGroupIdGenerationWithCustomOmServiceId() throws
+  Exception {
+String customOmServiceId = "omSIdCustom123";
+OzoneConfiguration newConf = new OzoneConfiguration();
+String newOmId = UUID.randomUUID().toString();
+String path = GenericTestUtils.getTempPath(newOmId);
+Path 

[hadoop] branch trunk updated: HDFS-14267. Add test_libhdfs_ops to libhdfs tests, mark libhdfs_read/write.c as examples. Contributed by Sahil Takiar.

2019-02-20 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a30059b  HDFS-14267. Add test_libhdfs_ops to libhdfs tests, mark 
libhdfs_read/write.c as examples. Contributed by Sahil Takiar.
a30059b is described below

commit a30059bb61ba6f94b0a237c9e1ce1b3f871f7e6f
Author: Sahil Takiar 
AuthorDate: Wed Feb 20 11:36:37 2019 -0800

HDFS-14267. Add test_libhdfs_ops to libhdfs tests, mark 
libhdfs_read/write.c as examples. Contributed by Sahil Takiar.

Signed-off-by: Wei-Chiu Chuang 
---
 .../hadoop-hdfs-native-client/src/CMakeLists.txt   |   1 +
 .../main/native/libhdfs-examples/CMakeLists.txt|  34 ++
 .../src/main/native/libhdfs-examples/README.md |  24 +
 .../libhdfs_read.c}|  15 ++-
 .../libhdfs_write.c}   |  13 ++-
 .../main/native/libhdfs-examples}/test-libhdfs.sh  |   6 +-
 .../main/native/libhdfs-tests/test_libhdfs_ops.c   | 119 ++---
 .../src/main/native/libhdfs/CMakeLists.txt |   8 +-
 8 files changed, 167 insertions(+), 53 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt
index a962f94..626c49b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt
@@ -146,6 +146,7 @@ endif()
 
 add_subdirectory(main/native/libhdfs)
 add_subdirectory(main/native/libhdfs-tests)
+add_subdirectory(main/native/libhdfs-examples)
 
 # Temporary fix to disable Libhdfs++ build on older systems that do not 
support thread_local
 include(CheckCXXSourceCompiles)
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-examples/CMakeLists.txt
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-examples/CMakeLists.txt
new file mode 100644
index 000..1d33639
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-examples/CMakeLists.txt
@@ -0,0 +1,34 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+cmake_minimum_required(VERSION 3.1 FATAL_ERROR)
+
+include_directories(
+${CMAKE_CURRENT_SOURCE_DIR}/../libhdfs/include
+${GENERATED_JAVAH}
+${CMAKE_BINARY_DIR}
+${CMAKE_CURRENT_SOURCE_DIR}/../libhdfs
+${JNI_INCLUDE_DIRS}
+${OS_DIR}
+)
+
+add_executable(hdfs_read libhdfs_read.c)
+target_link_libraries(hdfs_read hdfs)
+
+add_executable(hdfs_write libhdfs_write.c)
+target_link_libraries(hdfs_write hdfs)
\ No newline at end of file
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-examples/README.md
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-examples/README.md
new file mode 100644
index 000..c962feb
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-examples/README.md
@@ -0,0 +1,24 @@
+
+
+The files in this directory are purely meant to provide additional examples 
for how to use libhdfs. They are compiled as
+part of the build and are thus guaranteed to compile against the associated 
version of libhdfs. However, no tests exist
+for these examples so their functionality is not guaranteed.
+
+The examples are written to run against a mini-dfs cluster. The script 
`test-libhdfs.sh` can set up a mini DFS cluster
+that the examples can run against. Again, none of this is tested and is thus 
not guaranteed to work.
\ No newline at end of file
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_read.c
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-examples/libhdfs_read.c
similarity index 91%
rename from 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_read.c
rename to 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-examples/libhdfs_read.c
index 4b90f2a..419be12 100644
--- 

[hadoop] branch trunk updated: HDDS-1060. Add API to get OM certificate from SCM CA. Contributed by Ajay Kumar.

2019-02-20 Thread xyao
This is an automated email from the ASF dual-hosted git repository.

xyao pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 1374f8f  HDDS-1060. Add API to get OM certificate from SCM CA. 
Contributed by Ajay Kumar.
1374f8f is described below

commit 1374f8f548a64d8b3b4a6352969ce24cc1d34f46
Author: Xiaoyu Yao 
AuthorDate: Wed Feb 20 11:11:36 2019 -0800

HDDS-1060. Add API to get OM certificate from SCM CA. Contributed by Ajay 
Kumar.
---
 .../hadoop/hdds/scm/client/HddsClientUtils.java| 38 +++
 .../java/org/apache/hadoop/hdds/HddsUtils.java | 50 
 .../hadoop/hdds/protocol/SCMSecurityProtocol.java  | 23 -
 .../SCMSecurityProtocolClientSideTranslatorPB.java | 40 
 .../SCMSecurityProtocolServerSideTranslatorPB.java | 35 ++
 .../certificate/authority/CertificateServer.java   | 26 ++
 .../certificate/authority/DefaultCAServer.java | 18 +++
 .../src/main/proto/SCMSecurityProtocol.proto   | 24 ++
 .../hdds/scm/server/SCMSecurityProtocolServer.java | 43 +
 .../hadoop/ozone/TestSecureOzoneCluster.java   | 55 +-
 10 files changed, 340 insertions(+), 12 deletions(-)

diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
index 9c59038..be9bc93 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/HddsClientUtils.java
@@ -22,15 +22,29 @@ import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.HddsUtils;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.SCMSecurityProtocol;
+import 
org.apache.hadoop.hdds.protocolPB.SCMSecurityProtocolClientSideTranslatorPB;
+import org.apache.hadoop.hdds.protocolPB.SCMSecurityProtocolPB;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol;
+import org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolPB;
+import org.apache.hadoop.ipc.Client;
+import org.apache.hadoop.ipc.ProtobufRpcEngine;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.ozone.OzoneConfigKeys;
 import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.http.client.config.RequestConfig;
 import org.apache.http.impl.client.CloseableHttpClient;
 import org.apache.http.impl.client.HttpClients;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import java.io.IOException;
+import java.net.InetSocketAddress;
 import java.text.ParseException;
 import java.time.Instant;
 import java.time.ZoneId;
@@ -38,6 +52,7 @@ import java.time.ZonedDateTime;
 import java.time.format.DateTimeFormatter;
 import java.util.concurrent.TimeUnit;
 
+
 /**
  * Utility methods for Ozone and Container Clients.
  *
@@ -252,4 +267,27 @@ public final class HddsClientUtils {
 ScmConfigKeys
 .SCM_CONTAINER_CLIENT_MAX_OUTSTANDING_REQUESTS_DEFAULT);
   }
+
+  /**
+   * Create an SCM security client, used to obtain certificates from the
+   * SCM CA.
+   *
+   * @return {@link SCMSecurityProtocol}
+   * @throws IOException if the RPC proxy cannot be created
+   */
+  public static SCMSecurityProtocol getScmSecurityClient(
+  OzoneConfiguration conf, UserGroupInformation ugi) throws IOException {
+RPC.setProtocolEngine(conf, SCMSecurityProtocolPB.class,
+ProtobufRpcEngine.class);
+long scmVersion =
+RPC.getProtocolVersion(SCMSecurityProtocolPB.class);
+InetSocketAddress scmSecurityProtoAdd =
+HddsUtils.getScmAddressForSecurityProtocol(conf);
+SCMSecurityProtocolClientSideTranslatorPB scmSecurityClient =
+new SCMSecurityProtocolClientSideTranslatorPB(
+RPC.getProxy(SCMSecurityProtocolPB.class, scmVersion,
+scmSecurityProtoAdd, ugi, conf,
+NetUtils.getDefaultSocketFactory(conf),
+Client.getRpcTimeout(conf)));
+return scmSecurityClient;
+  }
 }
\ No newline at end of file
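
As a usage illustration, here is a minimal, self-contained sketch of a caller
for the new helper. Only HddsClientUtils.getScmSecurityClient and the imported
types are taken from the change above; the class name and the flow around it
are illustrative.

    import java.io.IOException;
    import org.apache.hadoop.hdds.conf.OzoneConfiguration;
    import org.apache.hadoop.hdds.protocol.SCMSecurityProtocol;
    import org.apache.hadoop.hdds.scm.client.HddsClientUtils;
    import org.apache.hadoop.security.UserGroupInformation;

    public final class ScmSecurityClientSketch {
      public static void main(String[] args) throws IOException {
        OzoneConfiguration conf = new OzoneConfiguration();
        UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
        // Builds a protobuf RPC proxy against the SCM security endpoint
        // resolved from the configuration.
        SCMSecurityProtocol scmSecurity =
            HddsClientUtils.getScmSecurityClient(conf, ugi);
        // ... certificate calls (e.g. the new OM certificate API) go here ...
      }
    }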
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
index 9bae6d8..1556a57 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
@@ -410,4 +410,54 @@ public final class HddsUtils {
   public static long getUtcTime() {
 return 

[hadoop] branch HDFS-13891 updated: HDFS-14249. RBF: Tooling to identify the subcluster location of a file. Contributed by Inigo Goiri.

2019-02-20 Thread gifuma
This is an automated email from the ASF dual-hosted git repository.

gifuma pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HDFS-13891 by this push:
 new 215e525  HDFS-14249. RBF: Tooling to identify the subcluster location 
of a file. Contributed by Inigo Goiri.
215e525 is described below

commit 215e525c8c0bc410e94c96eff1e81e867fbe5fa8
Author: Giovanni Matteo Fumarola 
AuthorDate: Wed Feb 20 11:08:55 2019 -0800

HDFS-14249. RBF: Tooling to identify the subcluster location of a file. 
Contributed by Inigo Goiri.
---
 .../RouterAdminProtocolServerSideTranslatorPB.java |  22 
 .../RouterAdminProtocolTranslatorPB.java   |  21 +++
 .../metrics/FederationRPCPerformanceMonitor.java   |   8 +-
 .../federation/resolver/MountTableManager.java |  12 ++
 .../federation/router/RouterAdminServer.java   |  36 ++
 .../federation/store/impl/MountTableStoreImpl.java |   7 +
 .../store/protocol/GetDestinationRequest.java  |  57 
 .../store/protocol/GetDestinationResponse.java |  59 +
 .../impl/pb/GetDestinationRequestPBImpl.java   |  73 +++
 .../impl/pb/GetDestinationResponsePBImpl.java  |  83 
 .../hadoop/hdfs/tools/federation/RouterAdmin.java  |  28 +++-
 .../src/main/proto/FederationProtocol.proto|   8 ++
 .../src/main/proto/RouterProtocol.proto|   5 +
 .../src/site/markdown/HDFSRouterFederation.md  |   4 +
 .../federation/router/TestRouterAdminCLI.java  |  64 -
 ...erRPCMultipleDestinationMountTableResolver.java | 144 +
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md  |   2 +
 17 files changed, 628 insertions(+), 5 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
index a31c46d..6f6724e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/protocolPB/RouterAdminProtocolServerSideTranslatorPB.java
@@ -31,6 +31,8 @@ import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProt
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.EnterSafeModeResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDisabledNameservicesRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDisabledNameservicesResponseProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationRequestProto;
+import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetDestinationResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesRequestProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetMountTableEntriesResponseProto;
 import 
org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.GetSafeModeRequestProto;
@@ -54,6 +56,8 @@ import 
org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeRequ
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.EnterSafeModeResponse;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesRequest;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetDisabledNameservicesResponse;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.GetSafeModeRequest;
@@ -76,6 +80,8 @@ import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.EnterSafe
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.EnterSafeModeResponsePBImpl;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDisabledNameservicesRequestPBImpl;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDisabledNameservicesResponsePBImpl;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDestinationRequestPBImpl;
+import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetDestinationResponsePBImpl;
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetMountTableEntriesRequestPBImpl;
 import 

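To make the shape of the new API concrete, a hedged sketch: MountTableManager
and the GetDestination request/response types come from the file list above,
while the newInstance(...) factory follows the usual RBF store-protocol
convention and is an assumption, not something shown in this diff.

    import java.io.IOException;
    import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
    import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationRequest;
    import org.apache.hadoop.hdfs.server.federation.store.protocol.GetDestinationResponse;

    public final class GetDestinationSketch {
      // Asks the mount table which subcluster(s) actually hold the given path.
      static GetDestinationResponse locate(MountTableManager mountTable,
          String path) throws IOException {
        // newInstance(...) is assumed from RBF conventions, not this diff.
        GetDestinationRequest request = GetDestinationRequest.newInstance(path);
        return mountTable.getDestination(request);
      }
    }

On the command line, the HDFSCommands.md entry in the file list suggests the
same lookup is exposed through dfsrouteradmin (something like
`hdfs dfsrouteradmin -getDestination <path>`); the exact flag spelling is an
assumption here.
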
[hadoop] branch branch-3.2 updated: HADOOP-16104. Wasb tests to downgrade to skip when test a/c is namespace enabled. Contributed by Masatake Iwasaki.

2019-02-20 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new dc9c3ce  HADOOP-16104. Wasb tests to downgrade to skip when test a/c 
is namespace enabled. Contributed by Masatake Iwasaki.
dc9c3ce is described below

commit dc9c3ce30b9e043ef8e6c0d4d2faa185ffdefb4f
Author: Masatake Iwasaki 
AuthorDate: Wed Feb 20 22:00:57 2019 +0900

HADOOP-16104. Wasb tests to downgrade to skip when test a/c is namespace 
enabled. Contributed by Masatake Iwasaki.

(cherry picked from commit aa3ad3660506382884324c4b8997973f5a68e29a)
---
 .../org/apache/hadoop/fs/azure/AzureBlobStorageTestAccount.java  | 3 +++
 .../hadoop/fs/azure/contract/NativeAzureFileSystemContract.java  | 1 +
 .../org/apache/hadoop/fs/azure/integration/AzureTestUtils.java   | 9 +
 hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml  | 5 +
 hadoop-tools/hadoop-azure/src/test/resources/wasb.xml| 7 ++-
 5 files changed, 24 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/AzureBlobStorageTestAccount.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/AzureBlobStorageTestAccount.java
index b65ce78..816a3af 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/AzureBlobStorageTestAccount.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/AzureBlobStorageTestAccount.java
@@ -32,6 +32,7 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.azure.integration.AzureTestConstants;
 import org.apache.hadoop.fs.azure.metrics.AzureFileSystemInstrumentation;
 import org.apache.hadoop.fs.azure.metrics.AzureFileSystemMetricsSystem;
+import org.apache.hadoop.fs.azure.integration.AzureTestUtils;
 import org.apache.hadoop.metrics2.AbstractMetric;
 import org.apache.hadoop.metrics2.MetricsRecord;
 import org.apache.hadoop.metrics2.MetricsSink;
@@ -529,6 +530,8 @@ public final class AzureBlobStorageTestAccount implements 
AutoCloseable,
 
   static CloudStorageAccount createTestAccount(Configuration conf)
   throws URISyntaxException, KeyProviderException {
+AzureTestUtils.assumeNamespaceDisabled(conf);
+
 String testAccountName = verifyWasbAccountNameInConfig(conf);
 if (testAccountName == null) {
   LOG.warn("Skipping live Azure test because of missing test account");
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/contract/NativeAzureFileSystemContract.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/contract/NativeAzureFileSystemContract.java
index a264aca..ea90a86 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/contract/NativeAzureFileSystemContract.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/contract/NativeAzureFileSystemContract.java
@@ -34,6 +34,7 @@ public class NativeAzureFileSystemContract extends 
AbstractBondedFSContract {
   public NativeAzureFileSystemContract(Configuration conf) {
 super(conf); //insert the base features
 addConfResource(CONTRACT_XML);
+AzureTestUtils.assumeNamespaceDisabled(conf);
   }
 
   @Override
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/integration/AzureTestUtils.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/integration/AzureTestUtils.java
index c46320a..bc19700 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/integration/AzureTestUtils.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/integration/AzureTestUtils.java
@@ -47,6 +47,7 @@ import static org.junit.Assume.assumeTrue;
 import static 
org.apache.hadoop.fs.azure.AzureBlobStorageTestAccount.WASB_ACCOUNT_NAME_DOMAIN_SUFFIX_REGEX;
 import static 
org.apache.hadoop.fs.azure.AzureBlobStorageTestAccount.WASB_TEST_ACCOUNT_NAME_WITH_DOMAIN;
 import static org.apache.hadoop.fs.azure.integration.AzureTestConstants.*;
+import static 
org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_TEST_NAMESPACE_ENABLED_ACCOUNT;
 import static org.apache.hadoop.test.MetricsAsserts.getLongCounter;
 import static org.apache.hadoop.test.MetricsAsserts.getLongGauge;
 import static org.apache.hadoop.test.MetricsAsserts.getMetrics;
@@ -545,4 +546,12 @@ public final class AzureTestUtils extends Assert {
 inputStream.close();
 return new String(buffer, 0, count);
   }
+
+  /**
+   * Assume hierarchical namespace is disabled for test account.
+   */
+  public static void assumeNamespaceDisabled(Configuration conf) {
+Assume.assumeFalse("Hierarchical namespace is enabled for test account.",
+conf.getBoolean(FS_AZURE_TEST_NAMESPACE_ENABLED_ACCOUNT, false));
+  }
 }
diff --git 

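The "downgrade to skip" behaviour comes from JUnit's Assume: a false
assumption raises AssumptionViolatedException, which the runner reports as a
skipped test rather than a failure. A hypothetical live WASB test guarded by
the new helper would look roughly like this (the test class and body are
illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.azure.integration.AzureTestUtils;
    import org.junit.Test;

    public class ExampleWasbLiveTest {
      @Test
      public void testAgainstLiveWasbAccount() throws Exception {
        Configuration conf = new Configuration();
        // On an HNS-enabled test account this raises
        // AssumptionViolatedException, so the test is skipped, not failed.
        AzureTestUtils.assumeNamespaceDisabled(conf);
        // ... exercise WASB against the flat-namespace test account ...
      }
    }
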
[hadoop] branch trunk updated: HADOOP-16104. Wasb tests to downgrade to skip when test a/c is namespace enabled. Contributed by Masatake Iwasaki.

2019-02-20 Thread iwasakims
This is an automated email from the ASF dual-hosted git repository.

iwasakims pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new aa3ad36  HADOOP-16104. Wasb tests to downgrade to skip when test a/c 
is namespace enabled. Contributed by Masatake Iwasaki.
aa3ad36 is described below

commit aa3ad3660506382884324c4b8997973f5a68e29a
Author: Masatake Iwasaki 
AuthorDate: Wed Feb 20 22:00:57 2019 +0900

HADOOP-16104. Wasb tests to downgrade to skip when test a/c is namespace 
enabled. Contributed by Masatake Iwasaki.
---
 .../org/apache/hadoop/fs/azure/AzureBlobStorageTestAccount.java  | 3 +++
 .../hadoop/fs/azure/contract/NativeAzureFileSystemContract.java  | 1 +
 .../org/apache/hadoop/fs/azure/integration/AzureTestUtils.java   | 9 +
 hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml  | 5 +
 hadoop-tools/hadoop-azure/src/test/resources/wasb.xml| 7 ++-
 5 files changed, 24 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/AzureBlobStorageTestAccount.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/AzureBlobStorageTestAccount.java
index b65ce78..816a3af 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/AzureBlobStorageTestAccount.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/AzureBlobStorageTestAccount.java
@@ -32,6 +32,7 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.azure.integration.AzureTestConstants;
 import org.apache.hadoop.fs.azure.metrics.AzureFileSystemInstrumentation;
 import org.apache.hadoop.fs.azure.metrics.AzureFileSystemMetricsSystem;
+import org.apache.hadoop.fs.azure.integration.AzureTestUtils;
 import org.apache.hadoop.metrics2.AbstractMetric;
 import org.apache.hadoop.metrics2.MetricsRecord;
 import org.apache.hadoop.metrics2.MetricsSink;
@@ -529,6 +530,8 @@ public final class AzureBlobStorageTestAccount implements 
AutoCloseable,
 
   static CloudStorageAccount createTestAccount(Configuration conf)
   throws URISyntaxException, KeyProviderException {
+AzureTestUtils.assumeNamespaceDisabled(conf);
+
 String testAccountName = verifyWasbAccountNameInConfig(conf);
 if (testAccountName == null) {
   LOG.warn("Skipping live Azure test because of missing test account");
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/contract/NativeAzureFileSystemContract.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/contract/NativeAzureFileSystemContract.java
index a264aca..ea90a86 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/contract/NativeAzureFileSystemContract.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/contract/NativeAzureFileSystemContract.java
@@ -34,6 +34,7 @@ public class NativeAzureFileSystemContract extends 
AbstractBondedFSContract {
   public NativeAzureFileSystemContract(Configuration conf) {
 super(conf); //insert the base features
 addConfResource(CONTRACT_XML);
+AzureTestUtils.assumeNamespaceDisabled(conf);
   }
 
   @Override
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/integration/AzureTestUtils.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/integration/AzureTestUtils.java
index c46320a..bc19700 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/integration/AzureTestUtils.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/integration/AzureTestUtils.java
@@ -47,6 +47,7 @@ import static org.junit.Assume.assumeTrue;
 import static 
org.apache.hadoop.fs.azure.AzureBlobStorageTestAccount.WASB_ACCOUNT_NAME_DOMAIN_SUFFIX_REGEX;
 import static 
org.apache.hadoop.fs.azure.AzureBlobStorageTestAccount.WASB_TEST_ACCOUNT_NAME_WITH_DOMAIN;
 import static org.apache.hadoop.fs.azure.integration.AzureTestConstants.*;
+import static 
org.apache.hadoop.fs.azurebfs.constants.TestConfigurationKeys.FS_AZURE_TEST_NAMESPACE_ENABLED_ACCOUNT;
 import static org.apache.hadoop.test.MetricsAsserts.getLongCounter;
 import static org.apache.hadoop.test.MetricsAsserts.getLongGauge;
 import static org.apache.hadoop.test.MetricsAsserts.getMetrics;
@@ -545,4 +546,12 @@ public final class AzureTestUtils extends Assert {
 inputStream.close();
 return new String(buffer, 0, count);
   }
+
+  /**
+   * Assume hierarchical namespace is disabled for test account.
+   */
+  public static void assumeNamespaceDisabled(Configuration conf) {
+Assume.assumeFalse("Hierarchical namespace is enabled for test account.",
+conf.getBoolean(FS_AZURE_TEST_NAMESPACE_ENABLED_ACCOUNT, false));
+  }
 }
diff --git a/hadoop-tools/hadoop-azure/src/test/resources/azure-test.xml 

[hadoop] branch branch-3.2 updated: HDFS-14235. Handle ArrayIndexOutOfBoundsException in DataNodeDiskMetrics#slowDiskDetectionDaemon. Contributed by Ranith Sardar.

2019-02-20 Thread surendralilhore
This is an automated email from the ASF dual-hosted git repository.

surendralilhore pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new b93b127  HDFS-14235. Handle ArrayIndexOutOfBoundsException in 
DataNodeDiskMetrics#slowDiskDetectionDaemon. Contributed by Ranith Sardar.
b93b127 is described below

commit b93b127956508072904b44098fdc1c0dfc899606
Author: Surendra Singh Lilhore 
AuthorDate: Wed Feb 20 16:56:10 2019 +0530

HDFS-14235. Handle ArrayIndexOutOfBoundsException in 
DataNodeDiskMetrics#slowDiskDetectionDaemon. Contributed by Ranith Sardar.

(cherry picked from commit 41e18feda3f5ff924c87c4bed5b5cbbaecb19ae1)
---
 .../datanode/metrics/DataNodeDiskMetrics.java  | 78 --
 1 file changed, 43 insertions(+), 35 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeDiskMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeDiskMetrics.java
index f2954e8..a8a6c85 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeDiskMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeDiskMetrics.java
@@ -57,6 +57,10 @@ public class DataNodeDiskMetrics {
   private volatile Map<String, Map<DiskOp, Double>>
   diskOutliersStats = Maps.newHashMap();
 
+  // Added for test purposes. When addSlowDiskForTesting() is called from
+  // test code, the status should not be overridden by the daemon thread.
+  private boolean overrideStatus = true;
+
   public DataNodeDiskMetrics(DataNode dn, long diskOutlierDetectionIntervalMs) 
{
 this.dn = dn;
 this.detectionInterval = diskOutlierDetectionIntervalMs;
@@ -71,41 +75,43 @@ public class DataNodeDiskMetrics {
   @Override
   public void run() {
 while (shouldRun) {
-  Map<String, Double> metadataOpStats = Maps.newHashMap();
-  Map<String, Double> readIoStats = Maps.newHashMap();
-  Map<String, Double> writeIoStats = Maps.newHashMap();
-  FsDatasetSpi.FsVolumeReferences fsVolumeReferences = null;
-  try {
-fsVolumeReferences = dn.getFSDataset().getFsVolumeReferences();
-Iterator<FsVolumeSpi> volumeIterator = fsVolumeReferences
-.iterator();
-while (volumeIterator.hasNext()) {
-  FsVolumeSpi volume = volumeIterator.next();
-  DataNodeVolumeMetrics metrics = 
volumeIterator.next().getMetrics();
-  String volumeName = volume.getBaseURI().getPath();
-
-  metadataOpStats.put(volumeName,
-  metrics.getMetadataOperationMean());
-  readIoStats.put(volumeName, metrics.getReadIoMean());
-  writeIoStats.put(volumeName, metrics.getWriteIoMean());
-}
-  } finally {
-if (fsVolumeReferences != null) {
-  try {
-fsVolumeReferences.close();
-  } catch (IOException e) {
-LOG.error("Error in releasing FS Volume references", e);
+  if (dn.getFSDataset() != null) {
+Map<String, Double> metadataOpStats = Maps.newHashMap();
+Map<String, Double> readIoStats = Maps.newHashMap();
+Map<String, Double> writeIoStats = Maps.newHashMap();
+FsDatasetSpi.FsVolumeReferences fsVolumeReferences = null;
+try {
+  fsVolumeReferences = dn.getFSDataset().getFsVolumeReferences();
+  Iterator<FsVolumeSpi> volumeIterator = fsVolumeReferences
+  .iterator();
+  while (volumeIterator.hasNext()) {
+FsVolumeSpi volume = volumeIterator.next();
+DataNodeVolumeMetrics metrics = volume.getMetrics();
+String volumeName = volume.getBaseURI().getPath();
+
+metadataOpStats.put(volumeName,
+metrics.getMetadataOperationMean());
+readIoStats.put(volumeName, metrics.getReadIoMean());
+writeIoStats.put(volumeName, metrics.getWriteIoMean());
+  }
+} finally {
+  if (fsVolumeReferences != null) {
+try {
+  fsVolumeReferences.close();
+} catch (IOException e) {
+  LOG.error("Error in releasing FS Volume references", e);
+}
   }
 }
-  }
-  if (metadataOpStats.isEmpty() && readIoStats.isEmpty() &&
-  writeIoStats.isEmpty()) {
-LOG.debug("No disk stats available for detecting outliers.");
-return;
-  }
+if (metadataOpStats.isEmpty() && readIoStats.isEmpty()
+&& writeIoStats.isEmpty()) {
+  LOG.debug("No disk stats available for detecting outliers.");
+  continue;
+}
 
-  

[hadoop] branch trunk updated: HDFS-14235. Handle ArrayIndexOutOfBoundsException in DataNodeDiskMetrics#slowDiskDetectionDaemon. Contributed by Ranith Sardar.

2019-02-20 Thread surendralilhore
This is an automated email from the ASF dual-hosted git repository.

surendralilhore pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 41e18fe  HDFS-14235. Handle ArrayIndexOutOfBoundsException in 
DataNodeDiskMetrics#slowDiskDetectionDaemon. Contributed by Ranith Sardar.
41e18fe is described below

commit 41e18feda3f5ff924c87c4bed5b5cbbaecb19ae1
Author: Surendra Singh Lilhore 
AuthorDate: Wed Feb 20 16:56:10 2019 +0530

HDFS-14235. Handle ArrayIndexOutOfBoundsException in 
DataNodeDiskMetrics#slowDiskDetectionDaemon. Contributed by Ranith Sardar.
---
 .../datanode/metrics/DataNodeDiskMetrics.java  | 78 --
 1 file changed, 43 insertions(+), 35 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeDiskMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeDiskMetrics.java
index f2954e8..a8a6c85 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeDiskMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeDiskMetrics.java
@@ -57,6 +57,10 @@ public class DataNodeDiskMetrics {
   private volatile Map<String, Map<DiskOp, Double>>
   diskOutliersStats = Maps.newHashMap();
 
+  // Added for test purposes. When addSlowDiskForTesting() is called from
+  // test code, the status should not be overridden by the daemon thread.
+  private boolean overrideStatus = true;
+
   public DataNodeDiskMetrics(DataNode dn, long diskOutlierDetectionIntervalMs) 
{
 this.dn = dn;
 this.detectionInterval = diskOutlierDetectionIntervalMs;
@@ -71,41 +75,43 @@ public class DataNodeDiskMetrics {
   @Override
   public void run() {
 while (shouldRun) {
-  Map<String, Double> metadataOpStats = Maps.newHashMap();
-  Map<String, Double> readIoStats = Maps.newHashMap();
-  Map<String, Double> writeIoStats = Maps.newHashMap();
-  FsDatasetSpi.FsVolumeReferences fsVolumeReferences = null;
-  try {
-fsVolumeReferences = dn.getFSDataset().getFsVolumeReferences();
-Iterator<FsVolumeSpi> volumeIterator = fsVolumeReferences
-.iterator();
-while (volumeIterator.hasNext()) {
-  FsVolumeSpi volume = volumeIterator.next();
-  DataNodeVolumeMetrics metrics = 
volumeIterator.next().getMetrics();
-  String volumeName = volume.getBaseURI().getPath();
-
-  metadataOpStats.put(volumeName,
-  metrics.getMetadataOperationMean());
-  readIoStats.put(volumeName, metrics.getReadIoMean());
-  writeIoStats.put(volumeName, metrics.getWriteIoMean());
-}
-  } finally {
-if (fsVolumeReferences != null) {
-  try {
-fsVolumeReferences.close();
-  } catch (IOException e) {
-LOG.error("Error in releasing FS Volume references", e);
+  if (dn.getFSDataset() != null) {
+Map<String, Double> metadataOpStats = Maps.newHashMap();
+Map<String, Double> readIoStats = Maps.newHashMap();
+Map<String, Double> writeIoStats = Maps.newHashMap();
+FsDatasetSpi.FsVolumeReferences fsVolumeReferences = null;
+try {
+  fsVolumeReferences = dn.getFSDataset().getFsVolumeReferences();
+  Iterator<FsVolumeSpi> volumeIterator = fsVolumeReferences
+  .iterator();
+  while (volumeIterator.hasNext()) {
+FsVolumeSpi volume = volumeIterator.next();
+DataNodeVolumeMetrics metrics = volume.getMetrics();
+String volumeName = volume.getBaseURI().getPath();
+
+metadataOpStats.put(volumeName,
+metrics.getMetadataOperationMean());
+readIoStats.put(volumeName, metrics.getReadIoMean());
+writeIoStats.put(volumeName, metrics.getWriteIoMean());
+  }
+} finally {
+  if (fsVolumeReferences != null) {
+try {
+  fsVolumeReferences.close();
+} catch (IOException e) {
+  LOG.error("Error in releasing FS Volume references", e);
+}
   }
 }
-  }
-  if (metadataOpStats.isEmpty() && readIoStats.isEmpty() &&
-  writeIoStats.isEmpty()) {
-LOG.debug("No disk stats available for detecting outliers.");
-return;
-  }
+if (metadataOpStats.isEmpty() && readIoStats.isEmpty()
+&& writeIoStats.isEmpty()) {
+  LOG.debug("No disk stats available for detecting outliers.");
+  continue;
+}
 
-  detectAndUpdateDiskOutliers(metadataOpStats, readIoStats,
-  writeIoStats);
+

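The fix generalizes to a common daemon-thread pattern: guard each polling
round against a dependency that is not initialized yet, and bail out of the
round with continue instead of return, since return silently kills the whole
daemon. A self-contained sketch of that pattern, with illustrative names
throughout (this is not the actual DataNodeDiskMetrics code):

    import java.util.Collections;
    import java.util.Map;

    public class PollingDaemonSketch implements Runnable {
      private volatile boolean shouldRun = true;
      private final long detectionIntervalMs = 10_000L;

      @Override
      public void run() {
        while (shouldRun) {
          try {
            Thread.sleep(detectionIntervalMs);  // pace each polling round
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
          }
          Map<String, Double> stats = collect();
          if (stats == null) {
            continue;  // dependency not ready yet: skip the round, stay alive
          }
          if (stats.isEmpty()) {
            continue;  // 'return' here (the old bug) would kill the daemon
          }
          process(stats);
        }
      }

      // Stand-in for gathering per-volume metrics; a null result mimics an
      // uninitialized dataset.
      private Map<String, Double> collect() {
        return Collections.emptyMap();
      }

      private void process(Map<String, Double> stats) {
        // detect and publish outliers
      }
    }
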
[hadoop] branch trunk updated: HDDS-1135. Ozone jars are missing in the Ozone Snapshot tar. Contributed by Dinesh Chitlangia.

2019-02-20 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 642fe6a  HDDS-1135. Ozone jars are missing in the Ozone Snapshot tar. 
Contributed by Dinesh Chitlangia.
642fe6a is described below

commit 642fe6a2604c107070476b45aeab6cce09dfef1f
Author: Márton Elek 
AuthorDate: Wed Feb 20 12:08:24 2019 +0100

HDDS-1135. Ozone jars are missing in the Ozone Snapshot tar. Contributed by 
Dinesh Chitlangia.
---
 hadoop-ozone/dist/pom.xml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hadoop-ozone/dist/pom.xml b/hadoop-ozone/dist/pom.xml
index 0182da4..e66bbbe 100644
--- a/hadoop-ozone/dist/pom.xml
+++ b/hadoop-ozone/dist/pom.xml
@@ -38,7 +38,7 @@
 
           <execution>
             <id>copy-classpath-files</id>
-            <phase>package</phase>
+            <phase>prepare-package</phase>
             <goals>
               <goal>copy</goal>
             </goals>
@@ -108,7 +108,7 @@
           </execution>
           <execution>
             <id>copy-jars</id>
-            <phase>package</phase>
+            <phase>prepare-package</phase>
             <goals>
               <goal>copy-dependencies</goal>
             </goals>
@@ -126,7 +126,7 @@
 
           <execution>
             <id>dist</id>
-            <phase>prepare-package</phase>
+            <phase>compile</phase>
             <goals>
               <goal>exec</goal>
             </goals>
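
For context, Maven's default lifecycle runs compile well before
prepare-package, which in turn runs immediately before package. After this
change the dist layout is created first (compile), the classpath files and
dependency jars are copied into it next (prepare-package), and only then does
the package phase assemble the tarball, which is presumably how the previously
missing Ozone jars end up inside the snapshot tar.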


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org