[hadoop] branch trunk updated: HADOOP-18538. Upgrade kafka to 2.8.2 (#5164)

2022-12-06 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2e880962664 HADOOP-18538. Upgrade kafka to 2.8.2 (#5164)
2e880962664 is described below

commit 2e88096266419662b76d4ddd7073a1dce234d79c
Author: Murali Krishna 
AuthorDate: Tue Dec 6 22:27:46 2022 +0530

HADOOP-18538. Upgrade kafka to 2.8.2 (#5164)

Signed-off-by: Brahma Reddy Battula 
---
 LICENSE-binary | 2 +-
 hadoop-project/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 988e38fa390..c4aa63df880 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -324,7 +324,7 @@ org.apache.htrace:htrace-core:3.1.0-incubating
 org.apache.htrace:htrace-core4:4.1.0-incubating
 org.apache.httpcomponents:httpclient:4.5.6
 org.apache.httpcomponents:httpcore:4.4.10
-org.apache.kafka:kafka-clients:2.8.1
+org.apache.kafka:kafka-clients:2.8.2
 org.apache.kerby:kerb-admin:2.0.2
 org.apache.kerby:kerb-client:2.0.2
 org.apache.kerby:kerb-common:2.0.2
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 5c2fad15779..17df3f14497 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -50,7 +50,7 @@
 
     <xerces.jdiff.version>2.12.2</xerces.jdiff.version>
 
-    <kafka.version>2.8.1</kafka.version>
+    <kafka.version>2.8.2</kafka.version>
 
     <commons-daemon.version>1.0.13</commons-daemon.version>
 





[hadoop] branch trunk updated (42c8f61fecd -> 832d0e0d76c)

2022-09-08 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 42c8f61fecd HADOOP-18441. Remove hadoop custom 
ServicesResourceTransformer (#4850). Contributed by PJ Fanning.
 add 832d0e0d76c HADOOP-18443. Upgrade snakeyaml to 1.31 to mitigate 
CVE-2022-25857 (#4856)

No new revisions were added by this update.

Summary of changes:
 LICENSE-binary | 2 +-
 hadoop-project/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)





[hadoop] branch trunk updated: HDFS-16572. Fix typo in readme of hadoop-project-dist

2022-05-08 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a394c2b031b HDFS-16572. Fix typo in readme of hadoop-project-dist
a394c2b031b is described below

commit a394c2b031be61c2bfb1e2e92cbe43db1acb05ca
Author: Gautham B A 
AuthorDate: Sun May 8 23:47:13 2022 +0530

HDFS-16572. Fix typo in readme of hadoop-project-dist
---
 hadoop-project-dist/README.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project-dist/README.txt b/hadoop-project-dist/README.txt
index a4c759720fb..922c94aadb3 100644
--- a/hadoop-project-dist/README.txt
+++ b/hadoop-project-dist/README.txt
@@ -1,4 +1,4 @@
 DUMMY.
 
 Required for the assembly:single goal not to fail because there
-are not files in the hadoop-project-dist module.
+are no files in the hadoop-project-dist module.





[hadoop] branch branch-3.2.3 updated: HADOOP-16905. Update jackson-databind to 2.10.3 to relieve us from the endless CVE patches (#3748)

2021-12-16 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.2.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2.3 by this push:
 new a7a8529  HADOOP-16905. Update jackson-databind to 2.10.3 to relieve us 
from the endless CVE patches (#3748)
a7a8529 is described below

commit a7a85292adffd25eae92635e1c1001f79845c6d4
Author: Akira Ajisaka 
AuthorDate: Fri Dec 10 16:24:06 2021 +0900

HADOOP-16905. Update jackson-databind to 2.10.3 to relieve us from the 
endless CVE patches (#3748)

(cherry picked from commit 69faaa1d58ad7de18a8dfa477531653a2c061568)

 Conflicts:
hadoop-project/pom.xml

(cherry picked from commit bc6874139f534af81a83523cf10508d3d16a032f)
---
 hadoop-client-modules/hadoop-client-runtime/pom.xml | 7 +++
 hadoop-project/pom.xml  | 4 ++--
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/hadoop-client-modules/hadoop-client-runtime/pom.xml 
b/hadoop-client-modules/hadoop-client-runtime/pom.xml
index 65b9e78..a44d850 100644
--- a/hadoop-client-modules/hadoop-client-runtime/pom.xml
+++ b/hadoop-client-modules/hadoop-client-runtime/pom.xml
@@ -335,6 +335,13 @@
                       </excludes>
                     </relocation>
                     <relocation>
+                      <pattern>javax/xml/bind/</pattern>
+                      <shadedPattern>${shaded.dependency.prefix}.javax.xml.bind.</shadedPattern>
+                      <excludes>
+                        <exclude>**/pom.xml</exclude>
+                      </excludes>
+                    </relocation>
+                    <relocation>
                       <pattern>net/</pattern>
                       <shadedPattern>${shaded.dependency.prefix}.net.</shadedPattern>
                       <excludes>
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 4e28c22..8a4dcc7 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -69,8 +69,8 @@
 
 
     <jackson.version>1.9.13</jackson.version>
-    <jackson2.version>2.9.10</jackson2.version>
-    <jackson.databind.version>2.9.10.4</jackson.databind.version>
+    <jackson2.version>2.10.3</jackson2.version>
+    <jackson.databind.version>2.10.3</jackson.databind.version>
 
 
     <httpclient.version>4.5.13</httpclient.version>




[hadoop] branch trunk updated: HDFS-16364. Remove unnecessary brackets in NameNodeRpcServer#L453 (#3742)

2021-12-03 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 0cb6c28  HDFS-16364. Remove unnecessary brackets in 
NameNodeRpcServer#L453 (#3742)
0cb6c28 is described below

commit 0cb6c28d192e8e0132c4719d658fb4f1a42d5a5f
Author: wangzhaohui <32935220+wzhallri...@users.noreply.github.com>
AuthorDate: Fri Dec 3 19:21:04 2021 +0800

HDFS-16364. Remove unnecessary brackets in NameNodeRpcServer#L453 (#3742)
---
 .../java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
index 474b5e2..c0cb578 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
@@ -450,7 +450,7 @@ public class NameNodeRpcServer implements NamenodeProtocols 
{
 
 GlobalStateIdContext stateIdContext = null;
 if (enableStateContext) {
-  stateIdContext = new GlobalStateIdContext((namesystem));
+  stateIdContext = new GlobalStateIdContext(namesystem);
 }
 
 clientRpcServer = new RPC.Builder(conf)




[hadoop] branch branch-3.2.3 updated: HADOOP-17236. Bump up snakeyaml to 1.26 to mitigate CVE-2017-18640. Contributed by Brahma Reddy Battula.

2021-10-08 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.2.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2.3 by this push:
 new b55002f  HADOOP-17236. Bump up snakeyaml to 1.26 to mitigate 
CVE-2017-18640. Contributed by Brahma Reddy Battula.
b55002f is described below

commit b55002f6e6cd643884f65e2e3ce5216e83524f4b
Author: Brahma Reddy Battula 
AuthorDate: Wed Oct 28 09:26:52 2020 -0700

HADOOP-17236. Bump up snakeyaml to 1.26 to mitigate CVE-2017-18640. 
Contributed by Brahma Reddy Battula.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit eb84793af1e48db05ab827d0cf09963a430615ed)
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index a31954c..c1c8d38 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -180,7 +180,7 @@
 ${hadoop.version}
 
 1.5.4
-    <snakeyaml.version>1.16</snakeyaml.version>
+    <snakeyaml.version>1.26</snakeyaml.version>
 1.4.8
 2.0.2
 4.13.2




[hadoop] branch branch-3.2 updated: HADOOP-17236. Bump up snakeyaml to 1.26 to mitigate CVE-2017-18640. Contributed by Brahma Reddy Battula.

2021-10-08 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 675712a  HADOOP-17236. Bump up snakeyaml to 1.26 to mitigate 
CVE-2017-18640. Contributed by Brahma Reddy Battula.
675712a is described below

commit 675712afa34ebed7eb2ad2881d69a5540beb18bc
Author: Brahma Reddy Battula 
AuthorDate: Wed Oct 28 09:26:52 2020 -0700

HADOOP-17236. Bump up snakeyaml to 1.26 to mitigate CVE-2017-18640. 
Contributed by Brahma Reddy Battula.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit eb84793af1e48db05ab827d0cf09963a430615ed)
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 6314807..388e2a6 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -180,7 +180,7 @@
 ${hadoop.version}
 
 1.5.4
-    <snakeyaml.version>1.16</snakeyaml.version>
+    <snakeyaml.version>1.26</snakeyaml.version>
 1.4.8
 2.0.2
 4.13.2




[hadoop] 01/01: Preparing for 3.2.3 release.

2021-08-08 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.2.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 182167a9fb3ba76c3a48d3d2205787301a00aa13
Author: Brahma Reddy Battula 
AuthorDate: Sun Aug 8 19:17:14 2021 +0530

Preparing for 3.2.3 release.
---
 hadoop-assemblies/pom.xml  |  4 ++--
 hadoop-build-tools/pom.xml |  2 +-
 hadoop-client-modules/hadoop-client-api/pom.xml|  4 ++--
 hadoop-client-modules/hadoop-client-check-invariants/pom.xml   |  4 ++--
 .../hadoop-client-check-test-invariants/pom.xml|  4 ++--
 hadoop-client-modules/hadoop-client-integration-tests/pom.xml  |  4 ++--
 hadoop-client-modules/hadoop-client-minicluster/pom.xml|  4 ++--
 hadoop-client-modules/hadoop-client-runtime/pom.xml|  4 ++--
 hadoop-client-modules/hadoop-client/pom.xml|  4 ++--
 hadoop-client-modules/pom.xml  |  2 +-
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml  |  4 ++--
 hadoop-cloud-storage-project/pom.xml   |  4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml   |  4 ++--
 hadoop-common-project/hadoop-auth-examples/pom.xml |  4 ++--
 hadoop-common-project/hadoop-auth/pom.xml  |  4 ++--
 hadoop-common-project/hadoop-common/pom.xml|  4 ++--
 hadoop-common-project/hadoop-kms/pom.xml   |  4 ++--
 hadoop-common-project/hadoop-minikdc/pom.xml   |  4 ++--
 hadoop-common-project/hadoop-nfs/pom.xml   |  4 ++--
 hadoop-common-project/pom.xml  |  4 ++--
 hadoop-dist/pom.xml|  4 ++--
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml |  4 ++--
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml |  4 ++--
 hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml  |  4 ++--
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml|  4 ++--
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml|  4 ++--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml|  4 ++--
 hadoop-hdfs-project/pom.xml|  4 ++--
 .../hadoop-mapreduce-client-app/pom.xml|  4 ++--
 .../hadoop-mapreduce-client-common/pom.xml |  4 ++--
 .../hadoop-mapreduce-client-core/pom.xml   |  4 ++--
 .../hadoop-mapreduce-client-hs-plugins/pom.xml |  4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml |  4 ++--
 .../hadoop-mapreduce-client-jobclient/pom.xml  |  4 ++--
 .../hadoop-mapreduce-client-nativetask/pom.xml |  4 ++--
 .../hadoop-mapreduce-client-shuffle/pom.xml|  4 ++--
 .../hadoop-mapreduce-client-uploader/pom.xml   |  4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml   |  4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml |  4 ++--
 hadoop-mapreduce-project/pom.xml   |  4 ++--
 hadoop-maven-plugins/pom.xml   |  2 +-
 hadoop-minicluster/pom.xml |  4 ++--
 hadoop-project-dist/pom.xml|  4 ++--
 hadoop-project/pom.xml |  6 +++---
 hadoop-tools/hadoop-aliyun/pom.xml |  2 +-
 hadoop-tools/hadoop-archive-logs/pom.xml   |  4 ++--
 hadoop-tools/hadoop-archives/pom.xml   |  4 ++--
 hadoop-tools/hadoop-aws/pom.xml|  4 ++--
 hadoop-tools/hadoop-azure-datalake/pom.xml |  2 +-
 hadoop-tools/hadoop-azure/pom.xml  |  2 +-
 hadoop-tools/hadoop-datajoin/pom.xml   |  4 ++--
 hadoop-tools/hadoop-distcp/pom.xml |  4 ++--
 hadoop-tools/hadoop-extras/pom.xml |  4 ++--
 hadoop-tools/hadoop-fs2img/pom.xml |  4 ++--
 hadoop-tools/hadoop-gridmix/pom.xml|  4 ++--
 hadoop-tools/hadoop-kafka/pom.xml  |  4 ++--
 hadoop-tools/hadoop-openstack/pom.xml  |  4 ++--
 hadoop-tools/hadoop-pipes/pom.xml  |  4 ++--
 hadoop-tools/hadoop-resourceestimator/pom.xml  |  2 +-
 hadoop-tools/hadoop-rumen/pom.xml  |  4 ++--
 hadoop-tools/hadoop-sls/pom.xml|  4 ++--
 hadoop-tools/hadoop-streaming/pom.xml  |  4 ++--
 hadoop-tools/hadoop-tools-dist/pom.xml |  4 ++--
 hadoop-tools

[hadoop] branch branch-3.2.3 created (now 182167a)

2021-08-08 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch branch-3.2.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at 182167a  Preparing for 3.2.3 release.

This branch includes the following new commits:

 new 182167a  Preparing for 3.2.3 release.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.





[hadoop] branch branch-3.2 updated: Preparing for 3.2.4 development.

2021-08-08 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 3cf2479  Preparing for 3.2.4 development.
3cf2479 is described below

commit 3cf2479c04316bf6281e384575738b79427f958a
Author: Brahma Reddy Battula 
AuthorDate: Sun Aug 8 18:14:12 2021 +0530

Preparing for 3.2.4 development.
---
 hadoop-assemblies/pom.xml  |  4 ++--
 hadoop-build-tools/pom.xml |  2 +-
 hadoop-client-modules/hadoop-client-api/pom.xml|  4 ++--
 hadoop-client-modules/hadoop-client-check-invariants/pom.xml   |  4 ++--
 .../hadoop-client-check-test-invariants/pom.xml|  4 ++--
 hadoop-client-modules/hadoop-client-integration-tests/pom.xml  |  4 ++--
 hadoop-client-modules/hadoop-client-minicluster/pom.xml|  4 ++--
 hadoop-client-modules/hadoop-client-runtime/pom.xml|  4 ++--
 hadoop-client-modules/hadoop-client/pom.xml|  4 ++--
 hadoop-client-modules/pom.xml  |  2 +-
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml  |  4 ++--
 hadoop-cloud-storage-project/pom.xml   |  4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml   |  4 ++--
 hadoop-common-project/hadoop-auth-examples/pom.xml |  4 ++--
 hadoop-common-project/hadoop-auth/pom.xml  |  4 ++--
 hadoop-common-project/hadoop-common/pom.xml|  4 ++--
 hadoop-common-project/hadoop-kms/pom.xml   |  4 ++--
 hadoop-common-project/hadoop-minikdc/pom.xml   |  4 ++--
 hadoop-common-project/hadoop-nfs/pom.xml   |  4 ++--
 hadoop-common-project/pom.xml  |  4 ++--
 hadoop-dist/pom.xml|  4 ++--
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml |  4 ++--
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml |  4 ++--
 hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml  |  4 ++--
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml|  4 ++--
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml|  4 ++--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml|  4 ++--
 hadoop-hdfs-project/pom.xml|  4 ++--
 .../hadoop-mapreduce-client-app/pom.xml|  4 ++--
 .../hadoop-mapreduce-client-common/pom.xml |  4 ++--
 .../hadoop-mapreduce-client-core/pom.xml   |  4 ++--
 .../hadoop-mapreduce-client-hs-plugins/pom.xml |  4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml |  4 ++--
 .../hadoop-mapreduce-client-jobclient/pom.xml  |  4 ++--
 .../hadoop-mapreduce-client-nativetask/pom.xml |  4 ++--
 .../hadoop-mapreduce-client-shuffle/pom.xml|  4 ++--
 .../hadoop-mapreduce-client-uploader/pom.xml   |  4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml   |  4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml |  4 ++--
 hadoop-mapreduce-project/pom.xml   |  4 ++--
 hadoop-maven-plugins/pom.xml   |  2 +-
 hadoop-minicluster/pom.xml |  4 ++--
 hadoop-project-dist/pom.xml|  4 ++--
 hadoop-project/pom.xml |  6 +++---
 hadoop-tools/hadoop-aliyun/pom.xml |  2 +-
 hadoop-tools/hadoop-archive-logs/pom.xml   |  4 ++--
 hadoop-tools/hadoop-archives/pom.xml   |  4 ++--
 hadoop-tools/hadoop-aws/pom.xml|  4 ++--
 hadoop-tools/hadoop-azure-datalake/pom.xml |  2 +-
 hadoop-tools/hadoop-azure/pom.xml  |  2 +-
 hadoop-tools/hadoop-datajoin/pom.xml   |  4 ++--
 hadoop-tools/hadoop-distcp/pom.xml |  4 ++--
 hadoop-tools/hadoop-extras/pom.xml |  4 ++--
 hadoop-tools/hadoop-fs2img/pom.xml |  4 ++--
 hadoop-tools/hadoop-gridmix/pom.xml|  4 ++--
 hadoop-tools/hadoop-kafka/pom.xml  |  4 ++--
 hadoop-tools/hadoop-openstack/pom.xml  |  4 ++--
 hadoop-tools/hadoop-pipes/pom.xml  |  4 ++--
 hadoop-tools/hadoop-resourceestimator/pom.xml  |  2 +-
 hadoop-tools/hadoop-rumen/pom.xml  |  4 ++--
 hadoop-tools/hadoop-sls/pom.xml|  4

[hadoop] branch branch-3.2 updated: HADOOP-17840: Backport HADOOP-17837 to branch-3.2 (#3275)

2021-08-06 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new dab8290  HADOOP-17840: Backport HADOOP-17837 to branch-3.2 (#3275)
dab8290 is described below

commit dab829063d9fb41d80331c6396309616facbb258
Author: Bryan Beaudreault 
AuthorDate: Sat Aug 7 00:19:46 2021 -0400

HADOOP-17840: Backport HADOOP-17837 to branch-3.2 (#3275)

Reviewed-by: Brahma Reddy Battula 
---
 .../hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java | 2 +-
 .../hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
index d2c6065..9ded0f4 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
@@ -537,7 +537,7 @@ public class NetUtils {
 } catch (SocketTimeoutException ste) {
   throw new ConnectTimeoutException(ste.getMessage());
 }  catch (UnresolvedAddressException uae) {
-  throw new UnknownHostException(uae.getMessage());
+  throw new UnknownHostException(endpoint.toString());
 }
 
 // There is a very rare case allowed by the TCP specification, such that
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
index fb91ff6..b6eb5cb 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
@@ -43,6 +43,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.security.KerberosAuthException;
 import org.apache.hadoop.security.NetUtilsTestResolver;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.BeforeClass;
@@ -111,6 +112,7 @@ public class TestNetUtils {
   fail("Should not have connected");
 } catch (UnknownHostException uhe) {
   LOG.info("Got exception: ", uhe);
+  GenericTestUtils.assertExceptionContains("invalid-test-host:0", uhe);
 }
   }
 

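The point of this backport is that java.nio's UnresolvedAddressException carries no message, so the patched NetUtils.connect() rethrows it as an UnknownHostException built from the endpoint itself. A minimal, self-contained sketch of that pattern (class and helper names here are illustrative, not Hadoop code):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.UnknownHostException;
    import java.nio.channels.SocketChannel;
    import java.nio.channels.UnresolvedAddressException;

    public final class ConnectErrorDemo {
      // Connect, and on an unresolvable address rethrow with the endpoint
      // in the message so callers can see which host actually failed.
      static void connect(SocketChannel ch, InetSocketAddress endpoint)
          throws IOException {
        try {
          ch.connect(endpoint);
        } catch (UnresolvedAddressException uae) {
          // uae.getMessage() is null; report the endpoint instead.
          throw new UnknownHostException(endpoint.toString());
        }
      }

      public static void main(String[] args) throws IOException {
        try (SocketChannel ch = SocketChannel.open()) {
          connect(ch, InetSocketAddress.createUnresolved("invalid-test-host", 0));
        } catch (UnknownHostException uhe) {
          System.out.println(uhe.getMessage()); // invalid-test-host:0
        }
      }
    }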



[hadoop] branch branch-3.3 updated: HADOOP-17837: Add unresolved endpoint value to UnknownHostException (ADDENDUM) (#3276)

2021-08-06 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 2fda130  HADOOP-17837: Add unresolved endpoint value to 
UnknownHostException (ADDENDUM) (#3276)
2fda130 is described below

commit 2fda1302600641571b52ea5e4fd3e8b9cca81785
Author: Bryan Beaudreault 
AuthorDate: Fri Aug 6 12:24:07 2021 -0400

HADOOP-17837: Add unresolved endpoint value to UnknownHostException 
(ADDENDUM) (#3276)

(cherry picked from commit b0b867e977ab853d1dfc434195c486cf0ca32dab)
---
 .../src/test/java/org/apache/hadoop/net/TestNetUtils.java  | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
index c21932c..0bf2c44 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
@@ -43,6 +43,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.security.KerberosAuthException;
 import org.apache.hadoop.security.NetUtilsTestResolver;
+import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.BeforeClass;
@@ -111,7 +112,7 @@ public class TestNetUtils {
   fail("Should not have connected");
 } catch (UnknownHostException uhe) {
   LOG.info("Got exception: ", uhe);
-  assertEquals("invalid-test-host:0", uhe.getMessage());
+  GenericTestUtils.assertExceptionContains("invalid-test-host:0", uhe);
 }
   }
 




[hadoop] branch trunk updated (e85c446 -> b0b867e)

2021-08-06 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from e85c446  HDFS-16154. TestMiniJournalCluster failing intermittently 
because of not reseting UserGroupInformation completely (#3270)
 add b0b867e  HADOOP-17837: Add unresolved endpoint value to 
UnknownHostException (ADDENDUM) (#3276)

No new revisions were added by this update.

Summary of changes:
 .../src/test/java/org/apache/hadoop/net/TestNetUtils.java  | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)




[hadoop] branch HADOOP-17800 updated: HADOOP-12430. Addendum to the HADOOP-12430.

2021-08-04 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HADOOP-17800 by this push:
 new 7118db5  HADOOP-12430. Addendum to the HADOOP-12430.
7118db5 is described below

commit 7118db5ee3836cbc0dae194843e2d95c401ced77
Author: Brahma Reddy Battula 
AuthorDate: Wed Aug 4 17:59:17 2021 +0530

HADOOP-12430. Addendum to the HADOOP-12430.
---
 .../hdfs/util/TestIPv6FormatCompatibility.java | 205 +
 1 file changed, 205 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestIPv6FormatCompatibility.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestIPv6FormatCompatibility.java
new file mode 100644
index 000..ce26a1e
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestIPv6FormatCompatibility.java
@@ -0,0 +1,205 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.util;
+
+import org.apache.hadoop.thirdparty.com.google.common.net.InetAddresses;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hdfs.net.Peer;
+import org.apache.hadoop.hdfs.protocol.DatanodeID;
+import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil;
+import org.apache.hadoop.net.unix.DomainSocket;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.Inet4Address;
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.SocketAddress;
+import java.nio.channels.ReadableByteChannel;
+
+import static org.junit.Assert.*;
+
+/**
+ * This is a very basic, very fast test to test IPv6 parsing issues
+ * as we find them.
+ * It does NOT depend on having a working IPv6 stack and should
+ * succeed even if run
+ * with "-Djava.net.preferIPv4Stack=true"
+ */
+public class TestIPv6FormatCompatibility {
+  private static final String IPV6_LOOPBACK_LONG_STRING = "0:0:0:0:0:0:0:1";
+  private static final String IPV6_SAMPLE_ADDRESS =
+  "2a03:2880:2130:cf05:face:b00c:0:1";
+  private static final String IPV6_LOOPBACK_SHORT_STRING = "::1";
+  private static final String IPV4_LOOPBACK_WITH_PORT = "127.0.0.1:10";
+  private static final String IPV6_LOOPBACK_WITH_PORT =
+  "[" + IPV6_LOOPBACK_LONG_STRING + "]:10";
+  private static final String IPV6_SAMPLE_WITH_PORT =
+  "[" + IPV6_SAMPLE_ADDRESS + "]:10";
+  private static final InetAddress IPV6LOOPBACK =
+  InetAddresses.forString(IPV6_LOOPBACK_LONG_STRING);
+  private static final InetAddress IPV4LOOPBACK =
+  Inet4Address.getLoopbackAddress();
+  private static final InetAddress IPV6SAMPLE =
+  InetAddresses.forString(IPV6_SAMPLE_ADDRESS);
+  private static final String IPV4_LOOPBACK_STRING =
+  IPV4LOOPBACK.getHostAddress();
+
+  private static final Log LOG =
+  LogFactory.getLog(TestIPv6FormatCompatibility.class);
+
+  // HDFS-8078 : note that we're expecting URI-style
+  // (see Javadoc for java.net.URI or rfc2732)
+  @Test public void testDatanodeIDXferAddressAddsBrackets() {
+DatanodeID ipv4localhost =
+new DatanodeID(IPV4_LOOPBACK_STRING, "localhost", "no-uuid", 10, 20, 
30,
+40);
+DatanodeID ipv6localhost =
+new DatanodeID(IPV6_LOOPBACK_LONG_STRING, "localhost", "no-uuid", 10,
+20, 30, 40);
+DatanodeID ipv6sample =
+new DatanodeID(IPV6_SAMPLE_ADDRESS, "ipv6.example.com", "no-uuid", 10,
+20, 30, 40);
+assertEquals("IPv6 should have brackets added", IPV6_LOOPBACK_WITH_PORT,
+ipv6localhost.getXferAddr(false));
+assertEquals("IPv6 should have brackets added", IPV6_SAMPLE_WITH_PORT,
+ipv6sample.getXferAddr(false));
+assertEquals("IPv4 should not have brackets added", 
IPV4_

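The bracketed form these assertions expect is the RFC 2732 URI convention the test's Javadoc points to, and java.net.URI applies it automatically, for example:

    import java.net.URI;
    import java.net.URISyntaxException;

    public final class Rfc2732Demo {
      public static void main(String[] args) throws URISyntaxException {
        // The multi-argument URI constructor brackets a literal IPv6 host,
        // the same convention DatanodeID.getXferAddr() is tested against.
        URI uri = new URI("hdfs", null, "2001:db8::1", 9866, "/", null, null);
        System.out.println(uri); // hdfs://[2001:db8::1]:9866/
      }
    }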
[hadoop] branch HADOOP-17800 updated: YARN-4283. Avoid unsafe split and append on fields that might be IPv6 literals. Contributed by Nemanja Matkovic and Hemanth Boyina

2021-08-04 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HADOOP-17800 by this push:
 new c0c70e0  YARN-4283. Avoid unsafe split and append on fields that might 
be IPv6 literals. Contributed by  Nemanja Matkovic and Hemanth Boyina
c0c70e0 is described below

commit c0c70e08338b47caf2fd84fad88d1381fe92998c
Author: Brahma Reddy Battula 
AuthorDate: Wed Aug 4 17:49:52 2021 +0530

YARN-4283. Avoid unsafe split and append on fields that might be IPv6 
literals. Contributed by  Nemanja Matkovic and Hemanth Boyina
---
 .../org/apache/hadoop/yarn/api/records/NodeId.java | 16 +++--
 .../apache/hadoop/yarn/util/ConverterUtils.java| 12 ++--
 .../hadoop/yarn/webapp/util/WebAppUtils.java   | 19 +++--
 .../hadoop/yarn/conf/TestYarnConfiguration.java| 84 --
 .../hadoop/yarn/util/TestConverterUtils.java   | 16 -
 .../org/apache/hadoop/yarn/lib/TestZKClient.java   | 23 +++---
 .../containermanager/ContainerManagerImpl.java |  3 +-
 .../server/resourcemanager/ResourceManager.java|  5 +-
 .../hadoop/yarn/server/resourcemanager/MockNM.java |  7 +-
 .../hadoop/yarn/server/webproxy/WebAppProxy.java   |  7 +-
 .../webproxy/amfilter/AmFilterInitializer.java |  6 +-
 .../server/webproxy/TestWebAppProxyServlet.java|  4 +-
 12 files changed, 119 insertions(+), 83 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeId.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeId.java
index a0b87a7..9080905 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeId.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeId.java
@@ -23,6 +23,7 @@ import 
org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.classification.InterfaceStability.Stable;
 import org.apache.hadoop.yarn.util.Records;
+import com.google.common.net.HostAndPort;
 
 /**
  * NodeId is the unique identifier for a node.
@@ -116,17 +117,18 @@ public abstract class NodeId implements Comparable<NodeId> {
   @Public
   @Stable
   public static NodeId fromString(String nodeIdStr) {
-String[] parts = nodeIdStr.split(":");
-if (parts.length != 2) {
-  throw new IllegalArgumentException("Invalid NodeId [" + nodeIdStr
-  + "]. Expected host:port");
+HostAndPort hp = HostAndPort.fromString(nodeIdStr);
+if (!hp.hasPort()) {
+  throw new IllegalArgumentException(
+  "Invalid NodeId [" + nodeIdStr + "]. Expected host:port");
 }
 try {
-  NodeId nodeId =
-  NodeId.newInstance(parts[0].trim(), Integer.parseInt(parts[1]));
+  String hostPortStr = hp.toString();
+  String host = hostPortStr.substring(0, hostPortStr.lastIndexOf(":"));
+  NodeId nodeId = NodeId.newInstance(host, hp.getPort());
   return nodeId;
 } catch (NumberFormatException e) {
-  throw new IllegalArgumentException("Invalid port: " + parts[1], e);
+  throw new IllegalArgumentException("Invalid port: " + hp.getPort(), e);
 }
   }
 
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java
index 67bc2b7..4ef8510 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ConverterUtils.java
@@ -36,7 +36,7 @@ import org.apache.hadoop.yarn.api.records.ContainerId;
 import org.apache.hadoop.yarn.api.records.NodeId;
 import org.apache.hadoop.yarn.api.records.URL;
 import org.apache.hadoop.yarn.factories.RecordFactory;
-
+import com.google.common.net.HostAndPort;
 
 /**
  * This class contains a set of utilities which help converting data structures
@@ -114,11 +114,11 @@ public class ConverterUtils {
 
   @Private
   @InterfaceStability.Unstable
-  public static NodeId toNodeIdWithDefaultPort(String nodeIdStr) {
-if (nodeIdStr.indexOf(":") < 0) {
-  return NodeId.fromString(nodeIdStr + ":0");
-}
-return NodeId.fromString(nodeIdStr);
+  public static NodeId toNodeIdWithDefaultPort(
+  String nodeIdStr) {
+HostAndPort hp = HostAndPort.fromString(nodeIdStr);
+hp = hp.withDefaultPort(0);
+return toNodeId(hp.toString());
   }
 
   /*
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-commo

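The common thread in this change is replacing naive colon-splitting with Guava's HostAndPort, which understands bracketed IPv6 literals and optional ports. A small standalone demo of why that matters (the node-id strings are made up; getHost() is getHostText() in older Guava releases):

    import com.google.common.net.HostAndPort;

    public final class NodeIdParsingDemo {
      public static void main(String[] args) {
        // A bracketed IPv6 literal contains many colons, so split(":")
        // no longer yields the two parts the old NodeId.fromString expected.
        String nodeId = "[2001:db8::1]:45454";
        System.out.println(nodeId.split(":").length); // 5, not 2

        HostAndPort hp = HostAndPort.fromString(nodeId);
        System.out.println(hp.getHost()); // 2001:db8::1
        System.out.println(hp.getPort()); // 45454

        // The toNodeIdWithDefaultPort analogue: add a port only when
        // the input did not carry one.
        System.out.println(
            HostAndPort.fromString("nm-1.example.com").withDefaultPort(0));
        // nm-1.example.com:0
      }
    }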
[hadoop] branch HADOOP-17800 updated: HADOOP-12670. Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only. Contributed by Elliott Neil Clark And Hemanth Boyina.

2021-07-31 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HADOOP-17800 by this push:
 new 3133386  HADOOP-12670. Fix TestNetUtils and TestSecurityUtil when 
localhost is ipv6 only. Contributed by  Elliott Neil Clark And Hemanth Boyina.
3133386 is described below

commit 3133386ac480ce21300737621c80f1e81e902bc9
Author: Brahma Reddy Battula 
AuthorDate: Sat Jul 31 22:19:19 2021 +0530

HADOOP-12670. Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 
only. Contributed by  Elliott Neil Clark And Hemanth Boyina.
---
 .../main/java/org/apache/hadoop/net/NetUtils.java  |  2 +-
 .../org/apache/hadoop/security/SecurityUtil.java   |  9 +++
 .../authorize/DefaultImpersonationProvider.java|  4 +--
 .../java/org/apache/hadoop/net/TestNetUtils.java   | 31 +++---
 .../hadoop/security/TestDoAsEffectiveUser.java |  3 ---
 .../apache/hadoop/security/TestSecurityUtil.java   | 25 ++---
 6 files changed, 42 insertions(+), 32 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
index 245c182..6d58ee4 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
@@ -768,7 +768,7 @@ public class NetUtils {
 if (InetAddressUtils.isIPv6Address(hostName)) {
   return "[" + hostName + "]:" + addr.getPort();
 }
-return hostName + ":" + addr.getPort();
+return hostName.toLowerCase() + ":" + addr.getPort();
   }
   /**
* Compose a "ip:port" string from the InetSocketAddress.
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
index 59383df..5602201 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
@@ -445,7 +445,7 @@ public final class SecurityUtil {
 if (token != null) {
   token.setService(service);
   if (LOG.isDebugEnabled()) {
-LOG.debug("Acquired token "+token);  // Token#toString() prints service
+LOG.debug("Acquired token " + token);  // Token#toString() prints 
service
   }
 } else {
   LOG.warn("Failed to get token for service "+service);
@@ -459,18 +459,15 @@ public final class SecurityUtil {
*  hadoop.security.token.service.use_ip
*/
   public static Text buildTokenService(InetSocketAddress addr) {
-String host = null;
 if (useIpForTokenService) {
   if (addr.isUnresolved()) { // host has no ip address
 throw new IllegalArgumentException(
 new UnknownHostException(addr.getHostName())
 );
   }
-  host = addr.getAddress().getHostAddress();
-} else {
-  host = StringUtils.toLowerCase(addr.getHostName());
+  return new Text(NetUtils.getIPPortString(addr));
 }
-return new Text(host + ":" + addr.getPort());
+return new Text(NetUtils.getHostPortString(addr));
   }
 
   /**
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/DefaultImpersonationProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/DefaultImpersonationProvider.java
index f258930..5ac613b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/DefaultImpersonationProvider.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/DefaultImpersonationProvider.java
@@ -125,10 +125,10 @@ public class DefaultImpersonationProvider implements 
ImpersonationProvider {
   + " is not allowed to impersonate " + user.getUserName());
 }
 
-MachineList MachineList = proxyHosts.get(
+MachineList machineList = proxyHosts.get(
 getProxySuperuserIpConfKey(realUser.getShortUserName()));
 
-if(MachineList == null || !MachineList.includes(remoteAddress)) {
+if(machineList == null || !machineList.includes(remoteAddress)) {
   throw new AuthorizationException("Unauthorized connection for 
super-user: "
   + realUser.getUserName() + " from IP " + remoteAddress);
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
index 625d551..f8599

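One detail worth calling out from the NetUtils hunk above: host:port strings are now composed by bracketing IPv6 literals and lower-casing hostnames. A standalone restatement of that rule (the helper name is ours, and the real code tests literals with InetAddressUtils.isIPv6Address rather than this crude colon check):

    import java.net.InetSocketAddress;
    import java.util.Locale;

    public final class HostPortFormatDemo {
      static String hostPortString(InetSocketAddress addr) {
        String host = addr.getHostString();
        if (host.contains(":")) { // crude stand-in for isIPv6Address(host)
          return "[" + host + "]:" + addr.getPort();
        }
        return host.toLowerCase(Locale.ROOT) + ":" + addr.getPort();
      }

      public static void main(String[] args) {
        System.out.println(hostPortString(
            InetSocketAddress.createUnresolved("NameNode.Example.COM", 8020)));
        // namenode.example.com:8020
        System.out.println(hostPortString(
            InetSocketAddress.createUnresolved("2001:db8::1", 8020)));
        // [2001:db8::1]:8020
      }
    }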
[hadoop] branch HADOOP-17800 updated: MAPREDUCE-6519. Avoid unsafe split and append on fields that might be IPv6 literals. Contributed by Nemanja Matkovic And Hemanth Boyina.

2021-07-31 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HADOOP-17800 by this push:
 new 2c9b22f  MAPREDUCE-6519. Avoid unsafe split and append on fields that 
might be IPv6 literals. Contributed by Nemanja Matkovic And Hemanth Boyina.
2c9b22f is described below

commit 2c9b22f15cdb8b9498a82846d3b4a48d93ef7d17
Author: Brahma Reddy Battula 
AuthorDate: Sat Jul 31 22:09:26 2021 +0530

MAPREDUCE-6519. Avoid unsafe split and append on fields that might be IPv6 
literals. Contributed by Nemanja Matkovic And Hemanth Boyina.
---
 .../org/apache/hadoop/mapred/FileInputFormat.java   | 21 +++--
 .../org/apache/hadoop/mapreduce/util/HostUtil.java  |  6 ++
 .../mapreduce/v2/hs/HistoryClientService.java   | 11 +++
 .../apache/hadoop/ipc/TestMRCJCSocketFactory.java   |  8 +---
 .../org/apache/hadoop/mapred/ReliabilityTest.java   | 12 ++--
 .../apache/hadoop/mapred/TestClientRedirect.java|  8 +---
 .../org/apache/hadoop/mapred/UtilsForTests.java | 16 
 .../hadoop/mapreduce/MiniHadoopClusterManager.java  |  5 +++--
 8 files changed, 35 insertions(+), 52 deletions(-)

diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
index 91151f0..d18a722 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
@@ -43,6 +43,7 @@ import org.apache.hadoop.fs.RemoteIterator;
 import org.apache.hadoop.mapreduce.security.TokenCache;
 import org.apache.hadoop.net.NetworkTopology;
 import org.apache.hadoop.net.Node;
+import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.net.NodeBase;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.util.StopWatch;
@@ -712,19 +713,19 @@ public abstract class FileInputFormat<K, V> implements InputFormat<K, V> {
   
   private String[] identifyHosts(int replicationFactor, 
  Map<Node, NodeInfo> racksMap) {
-
+
 String [] retVal = new String[replicationFactor];
-   
-List<NodeInfo> rackList = new LinkedList<NodeInfo>(); 
+
+List<NodeInfo> rackList = new LinkedList<NodeInfo>();
 
 rackList.addAll(racksMap.values());
-
+
 // Sort the racks based on their contribution to this split
 sortInDescendingOrder(rackList);
 
 boolean done = false;
 int index = 0;
-
+
 // Get the host list for all our aggregated items, sort
 // them and return the top entries
 for (NodeInfo ni: rackList) {
@@ -733,27 +734,27 @@ public abstract class FileInputFormat<K, V> implements InputFormat<K, V> {
 
   List<NodeInfo> hostList = new LinkedList<NodeInfo>();
   hostList.addAll(hostSet);
-
+
   // Sort the hosts in this rack based on their contribution
   sortInDescendingOrder(hostList);
 
   for (NodeInfo host: hostList) {
 // Strip out the port number from the host name
-retVal[index++] = host.node.getName().split(":")[0];
+retVal[index++] = NetUtils.getHostFromHostPort(host.node.getName());
 if (index == replicationFactor) {
   done = true;
   break;
 }
   }
-  
+
   if (done == true) {
 break;
   }
 }
 return retVal;
   }
-  
-  private String[] fakeRacks(BlockLocation[] blkLocations, int index) 
+
+  private String[] fakeRacks(BlockLocation[] blkLocations, int index)
   throws IOException {
 String[] allHosts = blkLocations[index].getHosts();
 String[] allTopos = new String[allHosts.length];
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/HostUtil.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/HostUtil.java
index ad279ee..1ba4387 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/HostUtil.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/HostUtil.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.mapreduce.util;
 
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.net.NetUtils;
 
 @Private
 @Unstable
@@ -56,10 +57,7 @@ public class HostUtil {
   public static String convertTrackerNameToHostName(String t

[hadoop] 03/05: HADOOP-12491. Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals. Contributed by Nemanja Matkovic And Hemanth Boyina

2021-07-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 809cca765f323e97859e8225769169966eeed4fb
Author: Brahma Reddy Battula 
AuthorDate: Fri Jul 30 08:20:21 2021 +0530

HADOOP-12491. Hadoop-common - Avoid unsafe split and append on fields that 
might be IPv6 literals. Contributed by Nemanja Matkovic And Hemanth Boyina
---
 .../hadoop-common/src/main/conf/hadoop-env.sh  |  3 +-
 .../java/org/apache/hadoop/conf/Configuration.java |  2 +-
 .../hadoop/crypto/key/kms/KMSClientProvider.java   | 13 ++--
 .../main/java/org/apache/hadoop/ipc/Client.java| 16 +++--
 .../src/main/java/org/apache/hadoop/net/DNS.java   | 69 +++-
 .../main/java/org/apache/hadoop/net/NetUtils.java  |  4 +-
 .../org/apache/hadoop/net/SocksSocketFactory.java  | 18 +++--
 .../org/apache/hadoop/ha/ClientBaseWithFixes.java  | 76 +-
 .../test/java/org/apache/hadoop/net/TestDNS.java   | 17 +
 .../java/org/apache/hadoop/net/TestNetUtils.java   | 28 
 10 files changed, 161 insertions(+), 85 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh 
b/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
index f4625f5..1473386 100644
--- a/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
+++ b/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
@@ -85,8 +85,7 @@
 # Kerberos security.
 # export HADOOP_JAAS_DEBUG=true
 
-# Extra Java runtime options for all Hadoop commands. We don't support
-# IPv6 yet/still, so by default the preference is set to IPv4.
+# Extra Java runtime options for all Hadoop commands.
 # export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
 # For Kerberos debugging, an extended option set logs more information
 # export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true 
-Dsun.security.krb5.debug=true -Dsun.security.spnego.debug"
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index e4e36a2..9088648 100755
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -2562,7 +2562,7 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
   return updateConnectAddr(addressProperty, addr);
 }
 
-final String connectHost = connectHostPort.split(":")[0];
+final String connectHost = NetUtils.getHostFromHostPort(connectHostPort);
 // Create connect address using client address hostname and server port.
 return updateConnectAddr(addressProperty, NetUtils.createSocketAddrForHost(
 connectHost, addr.getPort()));
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
index bc56f0e..9244318 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
@@ -82,6 +82,7 @@ import com.fasterxml.jackson.databind.ObjectMapper;
 import 
org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;
 import org.apache.hadoop.thirdparty.com.google.common.base.Strings;
+import org.apache.hadoop.thirdparty.com.google.common.net.HostAndPort;
 
 import static org.apache.hadoop.util.KMSUtil.checkNotEmpty;
 import static org.apache.hadoop.util.KMSUtil.checkNotNull;
@@ -290,16 +291,20 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
 // In the current scheme, all hosts have to run on the same port
 int port = -1;
 String hostsPart = authority;
+
 if (authority.contains(":")) {
-  String[] t = authority.split(":");
   try {
-port = Integer.parseInt(t[1]);
-  } catch (Exception e) {
+HostAndPort hp = HostAndPort.fromString(hostsPart);
+if (hp.hasPort()) {
+  port = hp.getPort();
+  hostsPart = hp.getHost();
+}
+  } catch (IllegalArgumentException e) {
 throw new IOException(
 "Could not parse port in kms uri [" + origUrl + "]");
   }
-  hostsPart = t[0];
 }
+
 KMSClientProvider[] providers =
 createProviders(conf, origUrl, port, hostsPart);
 return new LoadBalancingKMSClientProvider(providerUri, providers, 
conf);
diff --git 
a/hadoop-common-

[hadoop] 04/05: HADOOP-12432. Add support for include/exclude lists on IPv6 setup. Contributed by Nemanja Matkovic And Hemanth Boyina.

2021-07-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit b30674140bc58b68760879fc9534e55eb8743753
Author: Brahma Reddy Battula 
AuthorDate: Fri Jul 30 08:31:31 2021 +0530

HADOOP-12432. Add support for include/exclude lists on IPv6 setup. 
Contributed by Nemanja Matkovic And Hemanth Boyina.
---
 .../server/blockmanagement/HostFileManager.java|  9 ++--
 .../blockmanagement/TestHostFileManager.java   | 49 +++---
 .../hdfs/server/namenode/TestHostsFiles.java   |  9 ++--
 .../apache/hadoop/hdfs/util/HostsFileWriter.java   | 11 +++--
 4 files changed, 49 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostFileManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostFileManager.java
index 57b6902..dcbd131 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostFileManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostFileManager.java
@@ -23,12 +23,11 @@ import org.slf4j.LoggerFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.protocol.DatanodeID;
+import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.util.HostsFileReader;
 
 import java.io.IOException;
 import java.net.InetSocketAddress;
-import java.net.URI;
-import java.net.URISyntaxException;
 import java.util.HashSet;
 
 /**
@@ -89,16 +88,14 @@ public class HostFileManager extends HostConfigManager {
   @VisibleForTesting
   static InetSocketAddress parseEntry(String type, String fn, String line) {
 try {
-  URI uri = new URI("dummy", line, null, null, null);
-  int port = uri.getPort() == -1 ? 0 : uri.getPort();
-  InetSocketAddress addr = new InetSocketAddress(uri.getHost(), port);
+  InetSocketAddress addr = NetUtils.createSocketAddr(line, 0);
   if (addr.isUnresolved()) {
 LOG.warn(String.format("Failed to resolve address `%s` in `%s`. " +
 "Ignoring in the %s list.", line, fn, type));
 return null;
   }
   return addr;
-} catch (URISyntaxException e) {
+} catch (IllegalArgumentException e) {
   LOG.warn(String.format("Failed to parse `%s` in `%s`. " + "Ignoring in " 
+
   "the %s list.", line, fn, type));
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestHostFileManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestHostFileManager.java
index 38d0905..2139ac5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestHostFileManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestHostFileManager.java
@@ -110,13 +110,19 @@ public class TestHostFileManager {
 includedNodes.add(entry("127.0.0.1:12345"));
 includedNodes.add(entry("localhost:12345"));
 includedNodes.add(entry("127.0.0.1:12345"));
+
+includedNodes.add(entry("[::1]:42"));
+includedNodes.add(entry("[0:0:0:0:0:0:0:1]:42"));
+includedNodes.add(entry("[::1]:42"));
+
 includedNodes.add(entry("127.0.0.2"));
 
 excludedNodes.add(entry("127.0.0.1:12346"));
 excludedNodes.add(entry("127.0.30.1:12346"));
+excludedNodes.add(entry("[::1]:24"));
 
-Assert.assertEquals(2, includedNodes.size());
-Assert.assertEquals(2, excludedNodes.size());
+Assert.assertEquals(3, includedNodes.size());
+Assert.assertEquals(3, excludedNodes.size());
 
 hm.refresh(includedNodes, excludedNodes);
 
@@ -125,20 +131,33 @@ public class TestHostFileManager {
 Map<String, DatanodeDescriptor> dnMap = (Map<String, DatanodeDescriptor>) Whitebox.getInternalState(dm, "datanodeMap");
 
-// After the de-duplication, there should be only one DN from the included
+// After the de-duplication, there should be three DN from the included
 // nodes declared as dead.
-Assert.assertEquals(2, dm.getDatanodeListForReport(HdfsConstants
-.DatanodeReportType.ALL).size());
-Assert.assertEquals(2, dm.getDatanodeListForReport(HdfsConstants
-.DatanodeReportType.DEAD).size());
-dnMap.put("uuid-foo", new DatanodeDescriptor(new DatanodeID("127.0.0.1",
-"localhost", "uuid-foo", 12345, 1020, 1021, 1022)));
-Assert.assertEquals(1, dm.getDatanodeListForReport(HdfsConstants
-.DatanodeReportType.DEAD).size());
-dnMap.put("uuid-bar", new 

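The updated counts in TestHostFileManager rely on equal resolved socket addresses collapsing in a set: "[::1]:42" and "[0:0:0:0:0:0:0:1]:42" are the same address, so the duplicate loopback entries de-duplicate to one. A quick demonstration of that property:

    import java.net.InetSocketAddress;
    import java.util.HashSet;
    import java.util.Set;

    public final class LoopbackDedupDemo {
      public static void main(String[] args) {
        // InetSocketAddress equality compares the resolved address bytes
        // and the port, so both IPv6 loopback spellings are one entry.
        Set<InetSocketAddress> nodes = new HashSet<>();
        nodes.add(new InetSocketAddress("::1", 42));
        nodes.add(new InetSocketAddress("0:0:0:0:0:0:0:1", 42));
        System.out.println(nodes.size()); // 1
      }
    }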
[hadoop] 05/05: HDFS-9266.Avoid unsafe split and append on fields that might be IPv6 literals. Contributed by Nemanja Matkovic And Hemanth Boyina.

2021-07-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit eaad6531800b926e8f09e4dc7bf4b637010a3d35
Author: Brahma Reddy Battula 
AuthorDate: Fri Jul 30 08:39:51 2021 +0530

HDFS-9266.Avoid unsafe split and append on fields that might be IPv6 
literals. Contributed by Nemanja Matkovic And Hemanth Boyina.
---
 .../hdfs/client/impl/BlockReaderFactory.java   |   3 +-
 .../org/apache/hadoop/hdfs/web/JsonUtilClient.java |   2 +-
 .../hdfs/qjournal/client/IPCLoggerChannel.java |   5 +-
 .../server/blockmanagement/DatanodeManager.java|   9 +-
 .../server/datanode/BlockPoolSliceStorage.java |  16 ++-
 .../hadoop/hdfs/server/datanode/DataXceiver.java   |   2 +-
 .../hadoop/hdfs/server/namenode/Checkpointer.java  |   6 +-
 .../hadoop/hdfs/server/namenode/NNStorage.java |   5 +
 .../web/resources/NamenodeWebHdfsMethods.java  |  15 +--
 .../java/org/apache/hadoop/hdfs/tools/GetConf.java |   3 +-
 .../tools/offlineImageViewer/WebImageViewer.java   |   5 +-
 .../apache/hadoop/hdfs/TestDFSAddressConfig.java   |  10 +-
 .../java/org/apache/hadoop/hdfs/TestDFSUtil.java   |  49 ++--
 .../org/apache/hadoop/hdfs/TestFileAppend.java |   7 +-
 .../org/apache/hadoop/hdfs/TestFileCreation.java   | 137 +++--
 .../hdfs/client/impl/BlockReaderTestUtil.java  |   3 +-
 .../qjournal/client/TestQuorumJournalManager.java  |   6 +-
 .../server/datanode/TestBlockPoolSliceStorage.java |  28 -
 .../namenode/TestNameNodeRespectsBindHostKeys.java |  79 ++--
 .../server/namenode/TestNameNodeRpcServer.java |  12 +-
 20 files changed, 274 insertions(+), 128 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
index f9fd2b1..70545c3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
@@ -66,6 +66,7 @@ import 
org.apache.hadoop.hdfs.shortcircuit.ShortCircuitShm.Slot;
 import org.apache.hadoop.hdfs.shortcircuit.ShortCircuitShm.SlotId;
 import org.apache.hadoop.hdfs.util.IOUtilsClient;
 import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.net.unix.DomainSocket;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -876,6 +877,6 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
*/
   public static String getFileName(final InetSocketAddress s,
   final String poolId, final long blockId) {
-return s.toString() + ":" + poolId + ":" + blockId;
+return NetUtils.getSocketAddressString(s) + ":" + poolId + ":" + blockId;
   }
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
index 6acd062..e302605 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
@@ -305,7 +305,7 @@ public class JsonUtilClient {
 if (ipAddr == null) {
   String name = getString(m, "name", null);
   if (name != null) {
-int colonIdx = name.indexOf(':');
+int colonIdx = name.lastIndexOf(':');
 if (colonIdx > 0) {
   ipAddr = name.substring(0, colonIdx);
   xferPort = Integer.parseInt(name.substring(colonIdx +1));
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
index 9908160..e695790 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
@@ -54,12 +54,12 @@ import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;
 import org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest;
 import org.apache.hadoop.ipc.ProtobufRpcEngine2;
 import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.util.StopWatch;
 
 import 
org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.thi
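
For readers skimming the truncated diff above: the JsonUtilClient hunk swaps
indexOf(':') for lastIndexOf(':') because an IPv6 literal itself contains
colons, so splitting at the first colon chops the address apart. A minimal,
self-contained sketch of the difference (the class name and sample address
are hypothetical, not taken from the patch):

// Why lastIndexOf(':') is the safe split point once a "host:port"
// string may hold an IPv6 literal.
public final class SplitDemo {
  public static void main(String[] args) {
    String v6 = "2001:db8::1:9866";   // hypothetical IPv6 literal plus port

    // Splitting at the first colon truncates the address after one group:
    System.out.println(v6.substring(0, v6.indexOf(':')));        // 2001

    // Splitting at the last colon keeps the whole address intact:
    int idx = v6.lastIndexOf(':');
    System.out.println(v6.substring(0, idx));                    // 2001:db8::1
    System.out.println(Integer.parseInt(v6.substring(idx + 1))); // 9866
  }
}

The BlockReaderFactory hunk follows the same reasoning: replacing
InetSocketAddress.toString() with NetUtils.getSocketAddressString(s) is meant
to render the address in a form that later colon-based splits can parse
unambiguously.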

[hadoop] 02/05: HADOOP-12430. Fix HDFS client errors trying to connect to IPv6 DataNode. Contributed by Nate Edel.

2021-07-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit ddecfe1524b72e35c5483133d6e60ffc7418927c
Author: Brahma Reddy Battula 
AuthorDate: Mon Jul 26 17:18:55 2021 +0530

HADOOP-12430. Fix HDFS client errors trying to connect to IPv6 
DataNode. Contributed by Nate Edel.
---
 .../main/java/org/apache/hadoop/net/NetUtils.java  | 160 +++--
 .../java/org/apache/hadoop/net/TestNetUtils.java   |   8 +-
 .../apache/hadoop/hdfs/protocol/DatanodeID.java|  14 +-
 .../datatransfer/sasl/DataTransferSaslUtil.java|   9 +-
 4 files changed, 162 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
index 0f4dd9d..49fa540 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
@@ -40,7 +40,6 @@ import java.nio.channels.SocketChannel;
 import java.nio.channels.UnresolvedAddressException;
 import java.util.Map.Entry;
 import java.util.concurrent.TimeUnit;
-import java.util.regex.Pattern;
 import java.util.*;
 import java.util.concurrent.ConcurrentHashMap;
 
@@ -61,6 +60,11 @@ import org.apache.hadoop.ipc.VersionedProtocol;
 import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.util.ReflectionUtils;
 
+import com.google.common.net.HostAndPort;
+import com.google.common.net.InetAddresses;
+import org.apache.http.conn.util.InetAddressUtils;
+import java.net.*;
+
 import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -70,7 +74,7 @@ import org.slf4j.LoggerFactory;
 public class NetUtils {
   private static final Logger LOG = LoggerFactory.getLogger(NetUtils.class);
   
-  private static Map<String, String> hostToResolved = 
+  private static Map<String, String> hostToResolved =
   new HashMap<String, String>();
   /** text to point users elsewhere: {@value} */
   private static final String FOR_MORE_DETAILS_SEE
@@ -669,9 +673,6 @@ public class NetUtils {
 }
   }
 
-  private static final Pattern ipPortPattern = // Pattern for matching 
ip[:port]
-Pattern.compile("\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}(:\\d+)?");
-  
   /**
* Attempt to obtain the host name of the given string which contains
* an IP address and an optional port.
@@ -680,16 +681,26 @@ public class NetUtils {
* @return Host name or null if the name can not be determined
*/
   public static String getHostNameOfIP(String ipPort) {
-if (null == ipPort || !ipPortPattern.matcher(ipPort).matches()) {
+String ip = null;
+if (null == ipPort || ipPort.isEmpty()) {
   return null;
 }
-
 try {
-  int colonIdx = ipPort.indexOf(':');
-  String ip = (-1 == colonIdx) ? ipPort
-  : ipPort.substring(0, ipPort.indexOf(':'));
+  HostAndPort hostAndPort = HostAndPort.fromString(ipPort);
+  ip = hostAndPort.getHost();
+  if (!InetAddresses.isInetAddress(ip)) {
+return null;
+  }
+} catch (IllegalArgumentException e) {
+  LOG.debug("getHostNameOfIP: '" + ipPort
+  + "' is not a valid IP address or IP/Port pair.", e);
+  return null;
+}
+
+try {
   return InetAddress.getByName(ip).getHostName();
 } catch (UnknownHostException e) {
+  LOG.trace("getHostNameOfIP: '"+ipPort+"' name not resolved.", e);
   return null;
 }
   }
@@ -702,8 +713,20 @@ public class NetUtils {
* @return host:port
*/
   public static String normalizeIP2HostName(String ipPort) {
-if (null == ipPort || !ipPortPattern.matcher(ipPort).matches()) {
-  return ipPort;
+String ip = null;
+if (null == ipPort || ipPort.isEmpty()) {
+  return null;
+}
+try {
+  HostAndPort hostAndPort = HostAndPort.fromString(ipPort);
+  ip = hostAndPort.getHost();
+  if (!InetAddresses.isInetAddress(ip)) {
+return null;
+  }
+} catch (IllegalArgumentException e) {
+  LOG.debug("getHostNameOfIP: '" + ipPort
+  + "' is not a valid IP address or IP/Port pair.", e);
+  return null;
 }
 
 InetSocketAddress address = createSocketAddr(ipPort);
@@ -735,11 +758,88 @@ public class NetUtils {
 
   /**
* Compose a "host:port" string from the address.
+   *
+   * Note that this preferentially returns the host name if available; if the
+   * IP address is desired, use getIPPortString(); if both are desired as in
+   * InetSocketAddress.toString, use getSocketAddressString()
*/
   public static String getHostPortString(InetSocketAddress addr) {
-return ad
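
The rewritten getHostNameOfIP() above drops the hand-written IPv4-only regex
in favour of Guava's HostAndPort plus an InetAddresses literal check. A
minimal sketch of that parse-then-validate pattern (a hypothetical demo class
that mirrors, but is not, the NetUtils code):

import com.google.common.net.HostAndPort;
import com.google.common.net.InetAddresses;

public final class HostAndPortDemo {
  static String hostIfIpLiteral(String ipPort) {
    if (ipPort == null || ipPort.isEmpty()) {
      return null;
    }
    try {
      // Accepts "1.2.3.4", "1.2.3.4:80", and bracketed IPv6 like "[::1]:80".
      HostAndPort hp = HostAndPort.fromString(ipPort);
      String host = hp.getHost();
      // Only IP literals qualify; host names are rejected here.
      return InetAddresses.isInetAddress(host) ? host : null;
    } catch (IllegalArgumentException e) {
      return null; // malformed input, e.g. "[::1"
    }
  }

  public static void main(String[] args) {
    System.out.println(hostIfIpLiteral("192.0.2.10:9866")); // 192.0.2.10
    System.out.println(hostIfIpLiteral("[::1]:9866"));      // ::1
    System.out.println(hostIfIpLiteral("example.com:80"));  // null
  }
}

HostAndPort.fromString() understands bare hosts, host:port pairs, and
bracketed IPv6, which is exactly what the removed IPv4-only pattern could
never express.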

[hadoop] 01/05: HADOOP-11630. Allow hadoop.sh to bind to IPv6 conditionally. Contributed by Elliott Clark.

2021-07-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f293a2ff71634eb8d587775980432e4d2b1e4ab6
Author: Brahma Reddy Battula 
AuthorDate: Tue Jul 20 19:39:42 2021 +0530

HADOOP-11630. Allow hadoop.sh to bind to IPv6 conditionally. Contributed by 
Elliott Clark.
---
 .../hadoop-common/src/main/bin/hadoop-functions.sh | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh 
b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
index c4c3157..fd07f59 100755
--- a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
+++ b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
@@ -619,7 +619,12 @@ function hadoop_bootstrap
   export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
 
   # defaults
-  export HADOOP_OPTS=${HADOOP_OPTS:-"-Djava.net.preferIPv4Stack=true"}
+  # shellcheck disable=SC2154
+  if [[ "${HADOOP_ALLOW_IPV6}" -ne "yes" ]]; then
+export HADOOP_OPTS=${HADOOP_OPTS:-"-Djava.net.preferIPv4Stack=true"}
+  else
+export HADOOP_OPTS=${HADOOP_OPTS:-""}
+  fi
   hadoop_debug "Initial HADOOP_OPTS=${HADOOP_OPTS}"
 }
 

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
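
The hadoop-functions.sh hunk above only decides whether
-Djava.net.preferIPv4Stack=true lands in HADOOP_OPTS; the behavioural change
happens inside the JVM. A hedged sketch for observing the effect (the demo
class is hypothetical and not part of the patch):

import java.net.InetAddress;

// Run once with -Djava.net.preferIPv4Stack=true and once without,
// mirroring what hadoop_bootstrap sets unless HADOOP_ALLOW_IPV6 opts out.
public final class PreferIPv4Demo {
  public static void main(String[] args) throws Exception {
    System.out.println("java.net.preferIPv4Stack = "
        + System.getProperty("java.net.preferIPv4Stack"));
    for (InetAddress a : InetAddress.getAllByName("localhost")) {
      // With the flag set to true, IPv6 results such as ::1 are suppressed.
      System.out.println(a);
    }
  }
}

One caveat worth noting: [[ "${HADOOP_ALLOW_IPV6}" -ne "yes" ]] uses bash's
numeric comparison, while != is the string operator, so the guard may
deserve a follow-up.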



[hadoop] branch HADOOP-17800 updated (62e77a5 -> eaad653)

2021-07-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


 discard 62e77a5  HDFS-9266. Avoid unsafe split and append on fields that 
might be IPv6 literals. Contributed by Nemanja Matkovic and Hemanth Boyina.
 discard e28e1cb  HADOOP-12432. Add support for include/exclude lists on IPv6 
setup. Contributed by Nemanja Matkovic and Hemanth Boyina.
 discard da87cba  HADOOP-12491. Hadoop-common - Avoid unsafe split and append 
on fields that might be IPv6 literals. Contributed by Nemanja Matkovic and 
Hemanth Boyina
 discard 36b8ed1  HADOOP-12430. Fix HDFS client errors trying to 
connect to IPv6 DataNode. Contributed by Nate Edel.
 discard 904c6ec  HADOOP-11630. Allow hadoop.sh to bind to IPv6 conditionally. 
Contributed by Elliott Clark.
 add 97c88c9  HADOOP-17807. Use separate src dir for platform builds (#3210)
 add b038042  HDFS-16139. Update BPServiceActor Scheduler's 
nextBlockReportTime atomically (#3228). Contributed by Viraj Jasani.
 add f813554  HADOOP-13887. Support S3 client side encryption (S3-CSE) 
using AWS-SDK (#2706)
 add fa0289b  YARN-6221. Entities missing from ATS when summary log file 
info got returned to the ATS before the domain log. Contributed by Xiaomin Zhang
 add aecfcf1  HDFS-16119. start balancer with parameters 
-hotBlockTimeInterval xxx is invalid. (#3185)
 add 10ba4cc  HADOOP-17765. ABFS: Use Unique File Paths in Tests. (#3153)
 add ae20516  HDFS-16111. Add a configuration to 
RoundRobinVolumeChoosingPolicy to avoid failed volumes at datanodes. (#3175)
 add b4a5247  YARN-9551. TestTimelineClientV2Impl.testSyncCall fails 
intermittent (#3212)
 add dac10fc  HDFS-16145. CopyListing fails with FNF exception with 
snapshot diff. (#3234)
 add fd13970  HDFS-16137. Improve the comments related to 
FairCallQueue#queues. (#3226)
 add 8d0297c  YARN-10727. ParentQueue does not validate the queue on 
removal. Contributed by Andras Gyori
 add 4eae284  HDFS-16144. Revert HDFS-15372 (Files in snapshots no longer 
see attribute provider permissions). Contributed by Stephen O'Donnell
 add b19dae8  HADOOP-17817. S3A to raise IOE if both S3-CSE and S3Guard 
enabled (#3239)
 add 1b9efe5  YARN-10790. CS Flexible AQC: Add separate parent and leaf 
template property. Contributed by Andras Gyori
 add f2b6c03  YARN-6272. 
TestAMRMClient#testAMRMClientWithContainerResourceChange fails intermittently. 
Contributed by Andras Gyory & Prabhu Joseph
 add e001f8e  HADOOP-17814. Provide fallbacks for identity/cost providers 
and backoff enable (#3230)
 add 1d03c69  HADOOP-17811: ABFS ExponentialRetryPolicy doesn't pick up 
configuration values (#3221)
 add 3c8a48e  HADOOP-17819. Add extensions to ProtobufRpcEngine 
RequestHeaderProto. Contributed by Hector Sandoval Chaverri. (#3242)
 add 683feaa  HDFS-15175. Multiple CloseOp shared block instance causes the 
standby namenode to crash when rolling editlog. Contributed by Wan Chang.
 add 6f730fd  HDFS-15936. Solve SocketTimeoutException#sendPacket() does not 
record SocketTimeout exception. (#2836)
 add d78b300  YARN-10841. Fix token reset synchronization for UAM response 
token. (#3194)
 add 54f9fff  YARN-10628. Add node usage metrics in SLS. Contributed by 
Vadaga Ananyo Rao
 add 74770c8  YARN-10663. Add runningApps stats in SLS. Contributed by 
Vadaga Ananyo Rao
 add ac0a4e7  YARN-10869. CS considers only the default 
maximum-allocation-mb/vcore property as a maximum when it creates dynamic 
queues (#3225)
 add 8f750c5  YARN-10856. Prevent ATS v2 health check REST API call if the 
ATS service itself is disabled. (#3236)
 add 13467f4  HADOOP-17815. Run CI for Centos 7 (#3231)
 new f293a2f  HADOOP-11630. Allow hadoop.sh to bind to IPv6 conditionally. 
Contributed by Elliott Clark.
 new ddecfe1  HADOOP-12430. Fix HDFS client errors trying to 
connect to IPv6 DataNode. Contributed by Nate Edel.
 new 809cca7  HADOOP-12491. Hadoop-common - Avoid unsafe split and append 
on fields that might be IPv6 literals. Contributed by Nemanja Matkovic and 
Hemanth Boyina
 new b306741  HADOOP-12432. Add support for include/exclude lists on IPv6 
setup. Contributed by Nemanja Matkovic and Hemanth Boyina.
 new eaad653  HDFS-9266. Avoid unsafe split and append on fields that might 
be IPv6 literals. Contributed by Nemanja Matkovic and Hemanth Boyina.

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (62e77a5)
\
 N -- N -- N   refs/heads/HADOOP-17800 (eaad653)

You should already have received notification emails for all of the O
revis

[hadoop] branch HADOOP-17800 updated: HDFS-9266. Avoid unsafe split and append on fields that might be IPv6 literals. Contributed by Nemanja Matkovic and Hemanth Boyina.

2021-07-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HADOOP-17800 by this push:
 new 62e77a5  HDFS-9266. Avoid unsafe split and append on fields that might 
be IPv6 literals. Contributed by Nemanja Matkovic and Hemanth Boyina.
62e77a5 is described below

commit 62e77a5bc13358b3a6b9092f8f3f4d5c556e11b1
Author: Brahma Reddy Battula 
AuthorDate: Fri Jul 30 08:39:51 2021 +0530

HDFS-9266. Avoid unsafe split and append on fields that might be IPv6 
literals. Contributed by Nemanja Matkovic and Hemanth Boyina.
---
 .../hdfs/client/impl/BlockReaderFactory.java   |   3 +-
 .../org/apache/hadoop/hdfs/web/JsonUtilClient.java |   2 +-
 .../hdfs/qjournal/client/IPCLoggerChannel.java |   5 +-
 .../server/blockmanagement/DatanodeManager.java|   9 +-
 .../server/datanode/BlockPoolSliceStorage.java |  16 ++-
 .../hadoop/hdfs/server/datanode/DataXceiver.java   |   2 +-
 .../hadoop/hdfs/server/namenode/Checkpointer.java  |   6 +-
 .../hadoop/hdfs/server/namenode/NNStorage.java |   5 +
 .../web/resources/NamenodeWebHdfsMethods.java  |  15 +--
 .../java/org/apache/hadoop/hdfs/tools/GetConf.java |   3 +-
 .../tools/offlineImageViewer/WebImageViewer.java   |   5 +-
 .../apache/hadoop/hdfs/TestDFSAddressConfig.java   |  10 +-
 .../java/org/apache/hadoop/hdfs/TestDFSUtil.java   |  49 ++--
 .../org/apache/hadoop/hdfs/TestFileAppend.java |   7 +-
 .../org/apache/hadoop/hdfs/TestFileCreation.java   | 137 +++--
 .../hdfs/client/impl/BlockReaderTestUtil.java  |   3 +-
 .../qjournal/client/TestQuorumJournalManager.java  |   6 +-
 .../server/datanode/TestBlockPoolSliceStorage.java |  28 -
 .../namenode/TestNameNodeRespectsBindHostKeys.java |  79 ++--
 .../server/namenode/TestNameNodeRpcServer.java |  12 +-
 20 files changed, 274 insertions(+), 128 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
index f9fd2b1..70545c3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/BlockReaderFactory.java
@@ -66,6 +66,7 @@ import 
org.apache.hadoop.hdfs.shortcircuit.ShortCircuitShm.Slot;
 import org.apache.hadoop.hdfs.shortcircuit.ShortCircuitShm.SlotId;
 import org.apache.hadoop.hdfs.util.IOUtilsClient;
 import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.net.unix.DomainSocket;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -876,6 +877,6 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
*/
   public static String getFileName(final InetSocketAddress s,
   final String poolId, final long blockId) {
-return s.toString() + ":" + poolId + ":" + blockId;
+return NetUtils.getSocketAddressString(s) + ":" + poolId + ":" + blockId;
   }
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
index 6acd062..e302605 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
@@ -305,7 +305,7 @@ public class JsonUtilClient {
 if (ipAddr == null) {
   String name = getString(m, "name", null);
   if (name != null) {
-int colonIdx = name.indexOf(':');
+int colonIdx = name.lastIndexOf(':');
 if (colonIdx > 0) {
   ipAddr = name.substring(0, colonIdx);
   xferPort = Integer.parseInt(name.substring(colonIdx +1));
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
index 9908160..e695790 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/IPCLoggerChannel.java
@@ -54,12 +54,12 @@ import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;
 import org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest;
 import org.apache.hadoop.ipc.ProtobufRpcEngine2;
 import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.net.NetU

[hadoop] branch HADOOP-17800 updated: HADOOP-12432. Add support for include/exclude lists on IPv6 setup. Contributed by Nemanja Matkovic and Hemanth Boyina.

2021-07-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HADOOP-17800 by this push:
 new e28e1cb  HADOOP-12432. Add support for include/exclude lists on IPv6 
setup. Contributed by Nemanja Matkovic and Hemanth Boyina.
e28e1cb is described below

commit e28e1cbf5c2ad14226c669eb4f4dba83aab858be
Author: Brahma Reddy Battula 
AuthorDate: Fri Jul 30 08:31:31 2021 +0530

HADOOP-12432. Add support for include/exclude lists on IPv6 setup. 
Contributed by Nemanja Matkovic and Hemanth Boyina.
---
 .../server/blockmanagement/HostFileManager.java|  9 ++--
 .../blockmanagement/TestHostFileManager.java   | 49 +++---
 .../hdfs/server/namenode/TestHostsFiles.java   |  9 ++--
 .../apache/hadoop/hdfs/util/HostsFileWriter.java   | 11 +++--
 4 files changed, 49 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostFileManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostFileManager.java
index 57b6902..dcbd131 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostFileManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostFileManager.java
@@ -23,12 +23,11 @@ import org.slf4j.LoggerFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.protocol.DatanodeID;
+import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.util.HostsFileReader;
 
 import java.io.IOException;
 import java.net.InetSocketAddress;
-import java.net.URI;
-import java.net.URISyntaxException;
 import java.util.HashSet;
 
 /**
@@ -89,16 +88,14 @@ public class HostFileManager extends HostConfigManager {
   @VisibleForTesting
   static InetSocketAddress parseEntry(String type, String fn, String line) {
 try {
-  URI uri = new URI("dummy", line, null, null, null);
-  int port = uri.getPort() == -1 ? 0 : uri.getPort();
-  InetSocketAddress addr = new InetSocketAddress(uri.getHost(), port);
+  InetSocketAddress addr = NetUtils.createSocketAddr(line, 0);
   if (addr.isUnresolved()) {
 LOG.warn(String.format("Failed to resolve address `%s` in `%s`. " +
 "Ignoring in the %s list.", line, fn, type));
 return null;
   }
   return addr;
-} catch (URISyntaxException e) {
+} catch (IllegalArgumentException e) {
   LOG.warn(String.format("Failed to parse `%s` in `%s`. " + "Ignoring in " 
+
   "the %s list.", line, fn, type));
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestHostFileManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestHostFileManager.java
index 38d0905..2139ac5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestHostFileManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestHostFileManager.java
@@ -110,13 +110,19 @@ public class TestHostFileManager {
 includedNodes.add(entry("127.0.0.1:12345"));
 includedNodes.add(entry("localhost:12345"));
 includedNodes.add(entry("127.0.0.1:12345"));
+
+includedNodes.add(entry("[::1]:42"));
+includedNodes.add(entry("[0:0:0:0:0:0:0:1]:42"));
+includedNodes.add(entry("[::1]:42"));
+
 includedNodes.add(entry("127.0.0.2"));
 
 excludedNodes.add(entry("127.0.0.1:12346"));
 excludedNodes.add(entry("127.0.30.1:12346"));
+excludedNodes.add(entry("[::1]:24"));
 
-Assert.assertEquals(2, includedNodes.size());
-Assert.assertEquals(2, excludedNodes.size());
+Assert.assertEquals(3, includedNodes.size());
+Assert.assertEquals(3, excludedNodes.size());
 
 hm.refresh(includedNodes, excludedNodes);
 
@@ -125,20 +131,33 @@ public class TestHostFileManager {
 Map<String, DatanodeDescriptor> dnMap = (Map<String, DatanodeDescriptor>) Whitebox.getInternalState(dm, "datanodeMap");
 
-// After the de-duplication, there should be only one DN from the included
+// After the de-duplication, there should be three DN from the included
 // nodes declared as dead.
-Assert.assertEquals(2, dm.getDatanodeListForReport(HdfsConstants
-.DatanodeReportType.ALL).size());
-Assert.assertEquals(2, dm.getDatanodeListForReport(HdfsConstants
-.DatanodeReportType.DEAD).size());
-dnMap.put("uuid-foo", new DatanodeDescriptor(new DatanodeID("127.0.
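
The parseEntry() rewrite above trades URI-based parsing for
NetUtils.createSocketAddr, which, with this patch series applied, also
understands bracketed IPv6 literals like the [::1]:42 entries the test now
adds. A rough sketch of the accepted entry forms (a hypothetical demo that
assumes a hadoop-common build containing these changes):

import java.net.InetSocketAddress;
import org.apache.hadoop.net.NetUtils;

public final class HostEntryDemo {
  public static void main(String[] args) {
    String[] entries = {
        "127.0.0.1:12345",  // IPv4 with port
        "127.0.0.2",        // bare IPv4; the port defaults to 0
        "[::1]:42",         // bracketed IPv6 with port
        "localhost:12345"   // host name with port
    };
    for (String line : entries) {
      InetSocketAddress addr = NetUtils.createSocketAddr(line, 0);
      // parseEntry() logs and skips entries that stay unresolved.
      System.out.println(line + " -> " + addr
          + (addr.isUnresolved() ? "  (would be ignored)" : ""));
    }
  }
}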

[hadoop] branch HADOOP-17800 updated: HADOOP-12491. Hadoop-common - Avoid unsafe split and append on fields that might be IPv6 literals. Contributed by Nemanja Matkovic and Hemanth Boyina

2021-07-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HADOOP-17800 by this push:
 new da87cba  HADOOP-12491. Hadoop-common - Avoid unsafe split and append 
on fields that might be IPv6 literals. Contributed by Nemanja Matkovic and 
Hemanth Boyina
da87cba is described below

commit da87cba7cbbe366d17e24ca65c6d3cdf7a1a7b8d
Author: Brahma Reddy Battula 
AuthorDate: Fri Jul 30 08:20:21 2021 +0530

HADOOP-12491. Hadoop-common - Avoid unsafe split and append on fields that 
might be IPv6 literals. Contributed by Nemanja Matkovic and Hemanth Boyina
---
 .../hadoop-common/src/main/conf/hadoop-env.sh  |  3 +-
 .../java/org/apache/hadoop/conf/Configuration.java |  2 +-
 .../hadoop/crypto/key/kms/KMSClientProvider.java   | 13 ++--
 .../main/java/org/apache/hadoop/ipc/Client.java| 16 +++--
 .../src/main/java/org/apache/hadoop/net/DNS.java   | 69 +++-
 .../main/java/org/apache/hadoop/net/NetUtils.java  |  4 +-
 .../org/apache/hadoop/net/SocksSocketFactory.java  | 18 +++--
 .../org/apache/hadoop/ha/ClientBaseWithFixes.java  | 76 +-
 .../test/java/org/apache/hadoop/net/TestDNS.java   | 17 +
 .../java/org/apache/hadoop/net/TestNetUtils.java   | 28 
 10 files changed, 161 insertions(+), 85 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh 
b/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
index f4625f5..1473386 100644
--- a/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
+++ b/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
@@ -85,8 +85,7 @@
 # Kerberos security.
 # export HADOOP_JAAS_DEBUG=true
 
-# Extra Java runtime options for all Hadoop commands. We don't support
-# IPv6 yet/still, so by default the preference is set to IPv4.
+# Extra Java runtime options for all Hadoop commands.
 # export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
 # For Kerberos debugging, an extended option set logs more information
 # export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true 
-Dsun.security.krb5.debug=true -Dsun.security.spnego.debug"
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index e4e36a2..9088648 100755
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -2562,7 +2562,7 @@ public class Configuration implements 
Iterable<Map.Entry<String, String>>,
   return updateConnectAddr(addressProperty, addr);
 }
 
-final String connectHost = connectHostPort.split(":")[0];
+final String connectHost = NetUtils.getHostFromHostPort(connectHostPort);
 // Create connect address using client address hostname and server port.
 return updateConnectAddr(addressProperty, NetUtils.createSocketAddrForHost(
 connectHost, addr.getPort()));
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
index bc56f0e..9244318 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
@@ -82,6 +82,7 @@ import com.fasterxml.jackson.databind.ObjectMapper;
 import 
org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;
 import org.apache.hadoop.thirdparty.com.google.common.base.Strings;
+import org.apache.hadoop.thirdparty.com.google.common.net.HostAndPort;
 
 import static org.apache.hadoop.util.KMSUtil.checkNotEmpty;
 import static org.apache.hadoop.util.KMSUtil.checkNotNull;
@@ -290,16 +291,20 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
 // In the current scheme, all hosts have to run on the same port
 int port = -1;
 String hostsPart = authority;
+
 if (authority.contains(":")) {
-  String[] t = authority.split(":");
   try {
-port = Integer.parseInt(t[1]);
-  } catch (Exception e) {
+HostAndPort hp = HostAndPort.fromString(hostsPart);
+if (hp.hasPort()) {
+  port = hp.getPort();
+  hostsPart = hp.getHost();
+}
+  } catch (IllegalArgumentException e) {
 throw new IOException(
 "Could not parse port in kms uri [&

[hadoop] 02/02: HADOOP-12430. Fix HDFS client errors trying to connect to IPv6 DataNode. Contributed by Nate Edel.

2021-07-26 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 36b8ed12a8502bdecc6b6ca16538d321969a5432
Author: Brahma Reddy Battula 
AuthorDate: Mon Jul 26 17:18:55 2021 +0530

HADOOP-12430. Fix HDFS client errors trying to connect to IPv6 
DataNode. Contributed by Nate Edel.
---
 .../main/java/org/apache/hadoop/net/NetUtils.java  | 160 +++--
 .../java/org/apache/hadoop/net/TestNetUtils.java   |   8 +-
 .../apache/hadoop/hdfs/protocol/DatanodeID.java|  14 +-
 .../datatransfer/sasl/DataTransferSaslUtil.java|   9 +-
 4 files changed, 162 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
index 0f4dd9d..49fa540 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
@@ -40,7 +40,6 @@ import java.nio.channels.SocketChannel;
 import java.nio.channels.UnresolvedAddressException;
 import java.util.Map.Entry;
 import java.util.concurrent.TimeUnit;
-import java.util.regex.Pattern;
 import java.util.*;
 import java.util.concurrent.ConcurrentHashMap;
 
@@ -61,6 +60,11 @@ import org.apache.hadoop.ipc.VersionedProtocol;
 import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.util.ReflectionUtils;
 
+import com.google.common.net.HostAndPort;
+import com.google.common.net.InetAddresses;
+import org.apache.http.conn.util.InetAddressUtils;
+import java.net.*;
+
 import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -70,7 +74,7 @@ import org.slf4j.LoggerFactory;
 public class NetUtils {
   private static final Logger LOG = LoggerFactory.getLogger(NetUtils.class);
   
-  private static Map<String, String> hostToResolved = 
+  private static Map<String, String> hostToResolved =
   new HashMap<String, String>();
   /** text to point users elsewhere: {@value} */
   private static final String FOR_MORE_DETAILS_SEE
@@ -669,9 +673,6 @@ public class NetUtils {
 }
   }
 
-  private static final Pattern ipPortPattern = // Pattern for matching 
ip[:port]
-Pattern.compile("\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}(:\\d+)?");
-  
   /**
* Attempt to obtain the host name of the given string which contains
* an IP address and an optional port.
@@ -680,16 +681,26 @@ public class NetUtils {
* @return Host name or null if the name can not be determined
*/
   public static String getHostNameOfIP(String ipPort) {
-if (null == ipPort || !ipPortPattern.matcher(ipPort).matches()) {
+String ip = null;
+if (null == ipPort || ipPort.isEmpty()) {
   return null;
 }
-
 try {
-  int colonIdx = ipPort.indexOf(':');
-  String ip = (-1 == colonIdx) ? ipPort
-  : ipPort.substring(0, ipPort.indexOf(':'));
+  HostAndPort hostAndPort = HostAndPort.fromString(ipPort);
+  ip = hostAndPort.getHost();
+  if (!InetAddresses.isInetAddress(ip)) {
+return null;
+  }
+} catch (IllegalArgumentException e) {
+  LOG.debug("getHostNameOfIP: '" + ipPort
+  + "' is not a valid IP address or IP/Port pair.", e);
+  return null;
+}
+
+try {
   return InetAddress.getByName(ip).getHostName();
 } catch (UnknownHostException e) {
+  LOG.trace("getHostNameOfIP: '"+ipPort+"' name not resolved.", e);
   return null;
 }
   }
@@ -702,8 +713,20 @@ public class NetUtils {
* @return host:port
*/
   public static String normalizeIP2HostName(String ipPort) {
-if (null == ipPort || !ipPortPattern.matcher(ipPort).matches()) {
-  return ipPort;
+String ip = null;
+if (null == ipPort || ipPort.isEmpty()) {
+  return null;
+}
+try {
+  HostAndPort hostAndPort = HostAndPort.fromString(ipPort);
+  ip = hostAndPort.getHost();
+  if (!InetAddresses.isInetAddress(ip)) {
+return null;
+  }
+} catch (IllegalArgumentException e) {
+  LOG.debug("getHostNameOfIP: '" + ipPort
+  + "' is not a valid IP address or IP/Port pair.", e);
+  return null;
 }
 
 InetSocketAddress address = createSocketAddr(ipPort);
@@ -735,11 +758,88 @@ public class NetUtils {
 
   /**
* Compose a "host:port" string from the address.
+   *
+   * Note that this preferentially returns the host name if available; if the
+   * IP address is desired, use getIPPortString(); if both are desired as in
+   * InetSocketAddress.toString, use getSocketAddressString()
*/
   public static String getHostPortString(InetSocketAddress addr) {
-return ad

[hadoop] 01/02: HADOOP-11630. Allow hadoop.sh to bind to IPv6 conditionally. Contributed by Elliott Clark.

2021-07-26 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 904c6ec5044f4b8553a9ac4ec0dda056e0d9e795
Author: Brahma Reddy Battula 
AuthorDate: Tue Jul 20 19:39:42 2021 +0530

HADOOP-11630. Allow hadoop.sh to bind to IPv6 conditionally. Contributed by 
Elliott Clark.
---
 .../hadoop-common/src/main/bin/hadoop-functions.sh | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh 
b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
index c4c3157..fd07f59 100755
--- a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
+++ b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
@@ -619,7 +619,12 @@ function hadoop_bootstrap
   export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
 
   # defaults
-  export HADOOP_OPTS=${HADOOP_OPTS:-"-Djava.net.preferIPv4Stack=true"}
+  # shellcheck disable=SC2154
+  if [[ "${HADOOP_ALLOW_IPV6}" -ne "yes" ]]; then
+export HADOOP_OPTS=${HADOOP_OPTS:-"-Djava.net.preferIPv4Stack=true"}
+  else
+export HADOOP_OPTS=${HADOOP_OPTS:-""}
+  fi
   hadoop_debug "Initial HADOOP_OPTS=${HADOOP_OPTS}"
 }
 

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch HADOOP-17800 updated (1f1c38b -> 36b8ed1)

2021-07-26 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


 discard 1f1c38b  HADOOP-11630. Allow hadoop.sh to bind to IPv6 conditionally. 
Contributed by Elliott Clark.
 add de41ce8  HDFS-16087. Fix stuck issue in rbfbalance tool (#3141).  
Contributed by Eric Yin.
 add e634bf3  YARN-10630. [UI2] Ambiguous queue name resolution (#3214)
 add 0441efe  YARN-10860. Make max container per heartbeat configs 
refreshable. Contributed by Eric Badger.
 add dbd255f  HADOOP-17796. Upgrade jetty version to 9.4.43 (#3208)
 add 2da9b95  YARN-10657. We should make max application per queue to 
support node label. Contributed by Andras Gyori.
 add 98412ce  HADOOP-17813. Checkstyle - Allow line length: 100
 add 3a52bfc  HADOOP-17808. ipc.Client to set interrupt flag after catching 
InterruptedException (#3219)
 add aa1a5dd  YARN-10829. Support getApplications API in 
FederationClientInterceptor (#3135)
 add 63dfd84  HADOOP-17458. S3A to treat "SdkClientException: Data read has 
a different length than the expected" as EOFException (#3040)
 add 05b6a1a  YARN-10833. Set the X-FRAME-OPTIONS header for the default 
contexts. (#3203)
 add 4c35466  HADOOP-17317. [JDK 11] Upgrade dnsjava to remove illegal 
access warnings (#2442)
 add dd8e540  Addendum HADOOP-17770 WASB : Support disabling buffered reads 
in positional reads - Added the invalid SpotBugs warning to 
findbugs-exclude.xml (#3223)
 add 2f2f822  HDFS-12920. HDFS default value change (with adding time unit) 
breaks old version MR tarball work with new version (3.0) of hadoop. (#3227)
 add b7431c3  [UI2] Bump http-proxy to 1.18.1 (#2891)
 add 5d76549  HDFS-16131. Show storage type for failed volumes on namenode 
web (#3211). Contributed by  tomscut.
 add d710ec8  HDFS-16140. TestBootstrapAliasmap fails by BindException. 
(#3229)
 new 904c6ec  HADOOP-11630. Allow hadoop.sh to bind to IPv6 conditionally. 
Contributed by Elliott Clark.
 new 36b8ed1  HADOOP-12430. Fix HDFS client errors trying to 
connect to IPv6 DataNode. Contributed by Nate Edel.

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (1f1c38b)
\
 N -- N -- N   refs/heads/HADOOP-17800 (36b8ed1)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../src/main/resources/checkstyle/checkstyle.xml   |   4 +-
 .../main/java/org/apache/hadoop/ipc/Client.java|   8 +-
 .../main/java/org/apache/hadoop/net/NetUtils.java  | 160 +++--
 .../org/apache/hadoop/security/SecurityUtil.java   |   7 +-
 .../java/org/apache/hadoop/net/TestNetUtils.java   |   8 +-
 .../hadoop/registry/server/dns/RegistryDNS.java|  77 --
 .../hadoop/registry/server/dns/SecureableZone.java |   3 +-
 .../registry/server/dns/TestRegistryDNS.java   | 118 +++
 .../apache/hadoop/hdfs/protocol/DatanodeID.java|  14 +-
 .../datatransfer/sasl/DataTransferSaslUtil.java|   9 +-
 .../hdfs/rbfbalance/RouterDistCpProcedure.java |   1 +
 .../hdfs/rbfbalance/TestRouterDistCpProcedure.java | 120 
 .../hadoop/hdfs/server/datanode/DataNode.java  |   2 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java |   4 +-
 .../src/main/resources/hdfs-default.xml|  22 +--
 .../server/datanode/TestDataNodeVolumeFailure.java |   4 +
 .../TestDataNodeVolumeFailureReporting.java|   5 +-
 .../server/namenode/ha/TestBootstrapAliasmap.java  |   3 +
 hadoop-project/pom.xml |   4 +-
 .../java/org/apache/hadoop/fs/s3a/S3AUtils.java|  16 ++-
 .../java/org/apache/hadoop/fs/s3a/TestInvoker.java |  36 +
 .../hadoop-azure/dev-support/findbugs-exclude.xml  |  13 ++
 .../hadoop/tools/fedbalance/DistCpProcedure.java   |   4 +-
 .../tools/fedbalance/TestDistCpProcedure.java  |   6 +-
 .../protocolrecords/GetApplicationsResponse.java   |  12 ++
 .../apache/hadoop/yarn/conf/YarnConfiguration.java |   9 ++
 .../yarn/conf/TestYarnConfigurationFields.java |   2 +
 .../org/apache/hadoop/yarn/webapp/WebApps.java |  31 ++--
 .../sc

[hadoop] branch HADOOP-17800 updated (1dd03cc -> 1f1c38b)

2021-07-20 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 1dd03cc  HADOOP-17028. ViewFS should initialize mounted target 
filesystems lazily. Contributed by Abhishek Das (#2260)
 add 87e  HADOOP-17672. Remove invalid comment content in the 
FileContext class. (#2961)
 add df44178  HADOOP-17795. Provide fallbacks for callqueue.impl and 
scheduler.impl (#3192)
 add 632f64c  YARN-10456. RM PartitionQueueMetrics records are named 
QueueMetrics in Simon metrics registry. Contributed by Eric Payne.
 add 4bb25c8  HDFS-15650. Make the socket timeout for computing checksum of 
striped blocks configurable (#2414)
 add d0ee065  HADOOP-16272. Upgrade HikariCP to 4.0.3 (#3204)
 add f6f105c  HADOOP-17803. Remove WARN logging from LoggingAuditor when 
executing a request outside an audit span (#3207)
 add 997d749  HADOOP-17801. No error message reported when bucket doesn't 
exist in S3AFS (#3202)
 add 4700271  HDFS-16127. Improper pipeline close recovery causes a 
permanent write failure or data loss. Contributed by Kihwal Lee.
 add 6ed7670  HDFS-16067. Support Append API in NNThroughputBenchmark. 
Contributed by Renukaprasad C.
 add 0ac443b  YARN-10855. yarn logs cli fails to retrieve logs if any TFile 
is corrupt or empty. Contributed by Jim Brennan.
 add 17bf2fc  YARN-10858. [UI2] YARN-10826 breaks Queue view. (#3213)
 add e1d00ad  HADOOP-16290. Enable RpcMetrics units to be configurable 
(#3198)
 new 1f1c38b  HADOOP-11630. Allow hadoop.sh to bind to IPv6 conditionally. 
Contributed by Elliott Clark.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 LICENSE-binary |  2 +-
 .../hadoop-common/src/main/bin/hadoop-functions.sh |  7 +-
 .../apache/hadoop/fs/CommonConfigurationKeys.java  |  4 +-
 .../java/org/apache/hadoop/fs/FileContext.java |  5 --
 .../org/apache/hadoop/ipc/DecayRpcScheduler.java   |  8 +-
 .../java/org/apache/hadoop/ipc/RpcScheduler.java   |  8 +-
 .../main/java/org/apache/hadoop/ipc/Server.java| 93 +++---
 .../org/apache/hadoop/ipc/metrics/RpcMetrics.java  | 38 +++--
 .../src/main/resources/core-default.xml| 48 +++
 .../src/site/markdown/Benchmarking.md  |  1 +
 .../hadoop-common/src/site/markdown/Metrics.md |  2 +
 .../hadoop/conf/TestCommonConfigurationFields.java |  2 +
 .../apache/hadoop/ipc/TestCallQueueManager.java|  7 +-
 .../test/java/org/apache/hadoop/ipc/TestRPC.java   | 69 +++-
 .../java/org/apache/hadoop/hdfs/DataStreamer.java  | 14 +++-
 .../org/apache/hadoop/hdfs/FileChecksumHelper.java |  3 +-
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |  2 +
 .../hadoop/hdfs/client/impl/DfsClientConf.java | 12 +++
 .../hdfs/server/datanode/BlockChecksumHelper.java  |  2 +-
 .../apache/hadoop/hdfs/server/datanode/DNConf.java | 15 
 .../src/main/resources/hdfs-default.xml| 10 +++
 .../server/namenode/NNThroughputBenchmark.java | 52 
 .../server/namenode/TestNNThroughputBenchmark.java | 46 +++
 hadoop-project/pom.xml |  4 +-
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java|  6 +-
 .../java/org/apache/hadoop/fs/s3a/S3AUtils.java|  2 +-
 .../hadoop/fs/s3a/UnknownStoreException.java   | 24 +++---
 .../hadoop/fs/s3a/audit/impl/LoggingAuditor.java   |  3 -
 .../hadoop/fs/s3a/TestS3AExceptionTranslation.java |  8 +-
 .../apache/hadoop/yarn/client/cli/TestLogsCLI.java | 30 +++
 .../yarn/logaggregation/AggregatedLogFormat.java   |  2 +-
 .../tfile/LogAggregationTFileController.java   | 33 +++-
 .../hadoop-yarn-server-common/pom.xml  |  2 +-
 .../resourcemanager/scheduler/QueueMetrics.java|  6 +-
 .../scheduler/TestPartitionQueueMetrics.java   |  8 ++
 .../hadoop-yarn-ui/src/main/webapp/bower.json  |  2 +-
 36 files changed, 497 insertions(+), 83 deletions(-)

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 01/01: HADOOP-11630. Allow hadoop.sh to bind to IPv6 conditionally. Contributed by Elliott Clark.

2021-07-20 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 1f1c38bbe08459a78f35299aebf87b9d08b4032f
Author: Brahma Reddy Battula 
AuthorDate: Tue Jul 20 19:39:42 2021 +0530

HADOOP-11630. Allow hadoop.sh to bind to IPv6 conditionally. Contributed by 
Elliott Clark.
---
 .../hadoop-common/src/main/bin/hadoop-functions.sh | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh 
b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
index c4c3157..fd07f59 100755
--- a/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
+++ b/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
@@ -619,7 +619,12 @@ function hadoop_bootstrap
   export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
 
   # defaults
-  export HADOOP_OPTS=${HADOOP_OPTS:-"-Djava.net.preferIPv4Stack=true"}
+  # shellcheck disable=SC2154
+  if [[ "${HADOOP_ALLOW_IPV6}" -ne "yes" ]]; then
+export HADOOP_OPTS=${HADOOP_OPTS:-"-Djava.net.preferIPv4Stack=true"}
+  else
+export HADOOP_OPTS=${HADOOP_OPTS:-""}
+  fi
   hadoop_debug "Initial HADOOP_OPTS=${HADOOP_OPTS}"
 }
 

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch HADOOP-17800 created (now 1dd03cc)

2021-07-13 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at 1dd03cc  HADOOP-17028. ViewFS should initialize mounted target 
filesystems lazily. Contributed by Abhishek Das (#2260)

No new revisions were added by this update.

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: HADOOP-17617. Incorrect representation of RESPONSE for Get Key Version in KMS index.md.vm file. Contributed by Ravuri Sushma sree

2021-04-07 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new ae88174  HADOOP-17617. Incorrect representation of RESPONSE for Get 
Key Version in KMS index.md.vm file. Contributed by Ravuri Sushma sree
ae88174 is described below

commit ae88174c29ae02b6cf48785ecb3432a2698944bb
Author: Brahma Reddy Battula 
AuthorDate: Wed Apr 7 23:49:17 2021 +0530

HADOOP-17617. Incorrect representation of RESPONSE for Get Key Version in 
KMS index.md.vm file. Contributed by Ravuri Sushma sree
---
 hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm 
b/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
index 95e926b..d7599de 100644
--- a/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
+++ b/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
@@ -1055,7 +1055,8 @@ $H4 Get Key Version
 Content-Type: application/json
 
 {
-  "name": "versionName",
+  "name": "",
+  "versionName" : "",
   "material": "",//base64
 }
 
@@ -1072,11 +1073,13 @@ $H4 Get Key Versions
 
 [
   {
-"name": "versionName",
+"name": "",
+"versionName" : "",
 "material": "",//base64
   },
   {
-"name": "versionName",
+"name": "",
+"versionName" : "",
 "material": "",//base64
   },
   ...

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: HADOOP-17617. Incorrect representation of RESPONSE for Get Key Version in KMS index.md.vm file. Contributed by Ravuri Sushma sree

2021-04-07 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 236a9a7  HADOOP-17617. Incorrect representation of RESPONSE for Get 
Key Version in KMS index.md.vm file. Contributed by Ravuri Sushma sree
236a9a7 is described below

commit 236a9a771365bae7ed98260eff95d0dfbc9fa49e
Author: Brahma Reddy Battula 
AuthorDate: Wed Apr 7 23:49:17 2021 +0530

HADOOP-17617. Incorrect representation of RESPONSE for Get Key Version in 
KMS index.md.vm file. Contributed by Ravuri Sushma sree

(cherry picked from commit ae88174c29ae02b6cf48785ecb3432a2698944bb)
---
 hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm 
b/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
index 95e926b..d7599de 100644
--- a/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
+++ b/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
@@ -1055,7 +1055,8 @@ $H4 Get Key Version
 Content-Type: application/json
 
 {
-  "name": "versionName",
+  "name": "",
+  "versionName" : "",
   "material": "",//base64
 }
 
@@ -1072,11 +1073,13 @@ $H4 Get Key Versions
 
 [
   {
-"name": "versionName",
+"name": "",
+"versionName" : "",
 "material": "",//base64
   },
   {
-"name": "versionName",
+"name": "",
+"versionName" : "",
 "material": "",//base64
   },
   ...

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: MAPREDUCE-7199. HsJobsBlock reuse JobACLsManager for checkAccess. Contributed by Bilwa S T

2021-04-02 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new e079aaa  MAPREDUCE-7199. HsJobsBlock reuse JobACLsManager for 
checkAccess. Contributed by Bilwa S T
e079aaa is described below

commit e079aaa8200d840c522e391b650d2b8e833ece89
Author: Surendra Singh Lilhore 
AuthorDate: Sat Apr 18 19:42:20 2020 +0530

MAPREDUCE-7199. HsJobsBlock reuse JobACLsManager for checkAccess. 
Contributed by Bilwa S T

(cherry picked from commit a1b0697d379d33223ec1a46dfef31d6d226169bb)
---
 .../org/apache/hadoop/mapred/JobACLsManager.java   |  2 +-
 .../hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java | 31 +-
 2 files changed, 7 insertions(+), 26 deletions(-)

diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobACLsManager.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobACLsManager.java
index 7373f7a..1761500 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobACLsManager.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobACLsManager.java
@@ -117,7 +117,7 @@ public class JobACLsManager {
 // Allow Job-owner for any operation on the job
 if (isMRAdmin(callerUGI)
 || user.equals(jobOwner)
-|| jobACL.isUserAllowed(callerUGI)) {
+|| (null != jobACL && jobACL.isUserAllowed(callerUGI))) {
   return true;
 }
 
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java
index 3f4daf9..6a83ac2 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java
@@ -23,12 +23,12 @@ import java.util.Date;
 
 import org.apache.commons.text.StringEscapeUtils;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.mapreduce.MRConfig;
+import org.apache.hadoop.mapred.JobACLsManager;
+import org.apache.hadoop.mapreduce.JobACL;
 import org.apache.hadoop.mapreduce.v2.app.AppContext;
 import org.apache.hadoop.mapreduce.v2.app.job.Job;
 import org.apache.hadoop.mapreduce.v2.hs.webapp.dao.JobInfo;
 import org.apache.hadoop.security.UserGroupInformation;
-import org.apache.hadoop.security.authorize.AccessControlList;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.util.Times;
@@ -49,8 +49,7 @@ public class HsJobsBlock extends HtmlBlock {
 new SimpleDateFormat(".MM.dd HH:mm:ss z");
   private UserGroupInformation ugi;
   private boolean isFilterAppListByUserEnabled;
-  private boolean areAclsEnabled;
-  private AccessControlList adminAclList;
+  private JobACLsManager aclsManager;
 
   @Inject
   HsJobsBlock(Configuration conf, AppContext appCtx, ViewContext ctx) {
@@ -58,8 +57,7 @@ public class HsJobsBlock extends HtmlBlock {
 appContext = appCtx;
 isFilterAppListByUserEnabled = conf
 .getBoolean(YarnConfiguration.FILTER_ENTITY_LIST_BY_USER, false);
-areAclsEnabled = conf.getBoolean(MRConfig.MR_ACLS_ENABLED, false);
-adminAclList = new AccessControlList(conf.get(MRConfig.MR_ADMINS, " "));
+aclsManager = new JobACLsManager(conf);
   }
 
   /*
@@ -94,8 +92,8 @@ public class HsJobsBlock extends HtmlBlock {
   JobInfo job = new JobInfo(j);
   ugi = getCallerUGI();
   // Allow to list only per-user apps if incoming ugi has permission.
-  if (isFilterAppListByUserEnabled && ugi != null
-  && !checkAccess(job.getUserName())) {
+  if (isFilterAppListByUserEnabled && ugi != null && !aclsManager
+  .checkAccess(ugi, JobACL.VIEW_JOB, job.getUserName(), null)) {
 continue;
   }
   jobsTableData.append("[\"")
@@ -160,21 +158,4 @@ public class HsJobsBlock extends HtmlBlock {
 __().
 __();
   }
-
-  private boolean checkAccess(String userName) {
-if(!areAclsEnabled) {
-  return true;
-}
-
-// User could see its own job.
-if (ugi.getShortUserName().equals(userName)) {
-  return true;
-}
-
-// Admin could also see all jobs
-if (adminAclList != null &&a
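
Although the diff is truncated, the removed checkAccess() helper above
duplicated logic JobACLsManager already provides: the admin list, the owner
short-circuit, and now the null-ACL guard added in the first hunk. A rough
sketch of the centralized call HsJobsBlock delegates to (hypothetical wiring;
the configuration keys are the real MRConfig ones):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobACLsManager;
import org.apache.hadoop.mapreduce.JobACL;
import org.apache.hadoop.mapreduce.MRConfig;
import org.apache.hadoop.security.UserGroupInformation;

public final class AclCheckDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setBoolean(MRConfig.MR_ACLS_ENABLED, true);
    conf.set(MRConfig.MR_ADMINS, "mradmin");  // illustrative admin list

    JobACLsManager aclsManager = new JobACLsManager(conf);
    UserGroupInformation caller = UserGroupInformation.getCurrentUser();

    // Owner, MR admins, and users on the (possibly null) view ACL all pass.
    boolean canView = aclsManager.checkAccess(
        caller, JobACL.VIEW_JOB, "alice" /* job owner */, null /* job ACL */);
    System.out.println("view allowed: " + canView);
  }
}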

[hadoop] branch branch-3.3 updated: HADOOP-17587. Kinit with keytab should not display the keytab file's full path in any logs. Contributed by Ravuri Sushma sree.

2021-04-01 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 90bbaca  HADOOP-17587. Kinit with keytab should not display the keytab 
file's full path in any logs. Contributed by Ravuri Sushma sree.
90bbaca is described below

commit 90bbaca88b28dc9b8d453cc8e5cb713d9a45a156
Author: Brahma Reddy Battula 
AuthorDate: Fri Apr 2 10:03:50 2021 +0530

HADOOP-17587. Kinit with keytab should not display the keytab file's full 
path in any logs. Contributed by Ravuri Sushma sree.

(cherry picked from commit bc7689abf5723fb6ec763266227801636105f5a1)
---
 .../main/java/org/apache/hadoop/security/UserGroupInformation.java | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index b783f82..67d1518 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1126,9 +1126,10 @@ public class UserGroupInformation {
 
 setLoginUser(u);
 
-LOG.info("Login successful for user {} using keytab file {}. Keytab auto" +
-" renewal enabled : {}",
-user, path, isKerberosKeyTabLoginRenewalEnabled());
+LOG.info(
+"Login successful for user {} using keytab file {}. Keytab auto"
++ " renewal enabled : {}",
+user, new File(path).getName(), isKerberosKeyTabLoginRenewalEnabled());
   }
 
   /**

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
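
The fix above boils down to java.io.File#getName(): log the keytab's base
name and drop the directory part. A tiny sketch (the sample path is
hypothetical):

import java.io.File;

public final class KeytabLogDemo {
  public static void main(String[] args) {
    String path = "/etc/security/keytabs/nn.service.keytab";
    // Only the base name ever reaches the log line.
    System.out.println("Login successful using keytab file "
        + new File(path).getName());  // prints nn.service.keytab
  }
}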



[hadoop] 02/02: HADOOP-17587. Kinit with keytab should not display the keytab file's full path in any logs. Contributed by Ravuri Sushma sree.

2021-04-01 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit bc7689abf5723fb6ec763266227801636105f5a1
Author: Brahma Reddy Battula 
AuthorDate: Fri Apr 2 10:03:50 2021 +0530

HADOOP-17587. Kinit with keytab should not display the keytab file's full 
path in any logs. Contributed by Ravuri Sushma sree.
---
 .../main/java/org/apache/hadoop/security/UserGroupInformation.java | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index cc32dae..7e90b8e 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1125,9 +1125,10 @@ public class UserGroupInformation {
 
 setLoginUser(u);
 
-LOG.info("Login successful for user {} using keytab file {}. Keytab auto" +
-" renewal enabled : {}",
-user, path, isKerberosKeyTabLoginRenewalEnabled());
+LOG.info(
+"Login successful for user {} using keytab file {}. Keytab auto"
++ " renewal enabled : {}",
+user, new File(path).getName(), isKerberosKeyTabLoginRenewalEnabled());
   }
 
   /**

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 01/02: HADOOP-17610. DelegationTokenAuthenticator prints token information. Contributed by Ravuri Sushma sree.

2021-04-01 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 478402cc740fa21123b2a332d3ac7e66170a5535
Author: Brahma Reddy Battula 
AuthorDate: Fri Apr 2 09:56:00 2021 +0530

HADOOP-17610. DelegationTokenAuthenticator prints token information. 
Contributed by  Ravuri Sushma sree.
---
 .../security/token/delegation/web/DelegationTokenAuthenticator.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
index 8546a76..19427dc 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
@@ -138,8 +138,8 @@ public abstract class DelegationTokenAuthenticator 
implements Authenticator {
   try {
 // check and renew TGT to handle potential expiration
 UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab();
-LOG.debug("No delegation token found for url={}, token={}, "
-+ "authenticating with {}", url, token, authenticator.getClass());
+LOG.debug("No delegation token found for url={}, "
++ "authenticating with {}", url, authenticator.getClass());
 authenticator.authenticate(url, token);
   } catch (IOException ex) {
 throw NetUtils.wrapException(url.getHost(), url.getPort(),
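
The substance of the change: the token's string form no longer reaches
DEBUG output. A compressed sketch of the pattern, assuming an SLF4J
logger; the values are placeholders, not real Hadoop objects:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class SafeDebugDemo {
      private static final Logger LOG =
          LoggerFactory.getLogger(SafeDebugDemo.class);

      public static void main(String[] args) {
        String url = "http://host:8088/ws"; // placeholder endpoint
        String token = "opaque-credential"; // placeholder secret
        // Before: LOG.debug("... url={}, token={}", url, token);
        // would write the credential verbatim into the log file.
        // After: only the non-sensitive context is logged.
        LOG.debug("No delegation token found for url={}", url);
      }
    }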

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated (ed74479 -> bc7689a)

2021-04-01 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from ed74479  HDFS-15222. Correct the "hdfs fsck -list-corruptfileblocks" 
command output. Contributed by  Ravuri Sushma sree.
 new 478402c  HADOOP-17610. DelegationTokenAuthenticator prints token 
information. Contributed by  Ravuri Sushma sree.
 new bc7689a  HADOOP-17587. Kinit with keytab should not display the keytab 
file's full path in any logs. Contributed by  Ravuri Sushma sree.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../main/java/org/apache/hadoop/security/UserGroupInformation.java | 7 ---
 .../token/delegation/web/DelegationTokenAuthenticator.java | 4 ++--
 2 files changed, 6 insertions(+), 5 deletions(-)

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: HADOOP-17610. DelegationTokenAuthenticator prints token information. Contributed by Ravuri Sushma sree.

2021-04-01 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new c60e81b  HADOOP-17610. DelegationTokenAuthenticator prints token 
information. Contributed by  Ravuri Sushma sree.
c60e81b is described below

commit c60e81b5a8802858da20781327599ec2c6556f45
Author: Brahma Reddy Battula 
AuthorDate: Fri Apr 2 09:56:00 2021 +0530

HADOOP-17610. DelegationTokenAuthenticator prints token information. 
Contributed by  Ravuri Sushma sree.

(cherry picked from commit 478402cc740fa21123b2a332d3ac7e66170a5535)
---
 .../security/token/delegation/web/DelegationTokenAuthenticator.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
index 4e2ee4f..3336c44 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
@@ -138,8 +138,8 @@ public abstract class DelegationTokenAuthenticator 
implements Authenticator {
   try {
 // check and renew TGT to handle potential expiration
 UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab();
-LOG.debug("No delegation token found for url={}, token={}, "
-+ "authenticating with {}", url, token, authenticator.getClass());
+LOG.debug("No delegation token found for url={}, "
++ "authenticating with {}", url, authenticator.getClass());
 authenticator.authenticate(url, token);
   } catch (IOException ex) {
 throw NetUtils.wrapException(url.getHost(), url.getPort(),

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: HDFS-15222. Correct the "hdfs fsck -list-corruptfileblocks" command output. Contributed by Ravuri Sushma sree.

2021-04-01 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new ed74479  HDFS-15222. Correct the "hdfs fsck -list-corruptfileblocks" 
command output. Contributed by  Ravuri Sushma sree.
ed74479 is described below

commit ed74479ea56ba2113d40b32f28be5c963f2928fa
Author: Brahma Reddy Battula 
AuthorDate: Fri Apr 2 09:47:20 2021 +0530

HDFS-15222. Correct the "hdfs fsck -list-corruptfileblocks" command output. 
Contributed by  Ravuri Sushma sree.
---
 .../java/org/apache/hadoop/hdfs/tools/DFSck.java   |  4 +--
 .../hadoop/hdfs/TestClientReportBadBlock.java  |  2 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java  | 36 --
 3 files changed, 23 insertions(+), 19 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
index 8a2ef8b..db30133 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
@@ -227,7 +227,7 @@ public class DFSck extends Configured implements Tool {
 continue;
   numCorrupt++;
   if (numCorrupt == 1) {
-out.println("The list of corrupt files under path '"
+out.println("The list of corrupt blocks under path '"
 + dir + "' are:");
   }
   out.println(line);
@@ -237,7 +237,7 @@ public class DFSck extends Configured implements Tool {
   }
 }
 out.println("The filesystem under path '" + dir + "' has " 
-+ numCorrupt + " CORRUPT files");
++ numCorrupt + " CORRUPT blocks");
 if (numCorrupt == 0)
   errCode = 0;
 return errCode;
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientReportBadBlock.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientReportBadBlock.java
index 935a639..2f5aa96 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientReportBadBlock.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestClientReportBadBlock.java
@@ -316,7 +316,7 @@ public class TestClientReportBadBlock {
 String outStr = runFsck(conf, errorCode, true, filePath.toString(), 
"-list-corruptfileblocks");
 LOG.info("fsck -list-corruptfileblocks out: " + outStr);
 if (errorCode != 0) {
-  Assert.assertTrue(outStr.contains("CORRUPT files"));
+  Assert.assertTrue(outStr.contains("CORRUPT blocks"));
 }
   }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
index ca5a870..2c9075e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
@@ -1136,7 +1136,7 @@ public class TestFsck {
 String outStr = runFsck(conf, 0, false, "/corruptData",
 "-list-corruptfileblocks");
 System.out.println("1. good fsck out: " + outStr);
-assertTrue(outStr.contains("has 0 CORRUPT files"));
+assertTrue(outStr.contains("has 0 CORRUPT blocks"));
 // delete the blocks
 final String bpid = cluster.getNamesystem().getBlockPoolId();
 for (int i=0; i<4; i++) {
@@ -1159,19 +1159,19 @@ public class TestFsck {
 waitForCorruptionBlocks(3, "/corruptData");
 outStr = runFsck(conf, -1, true, "/corruptData", 
"-list-corruptfileblocks");
 System.out.println("2. bad fsck out: " + outStr);
-assertTrue(outStr.contains("has 3 CORRUPT files"));
+assertTrue(outStr.contains("has 3 CORRUPT blocks"));
 
 // Do a listing on a dir which doesn't have any corrupt blocks and validate
 util.createFiles(fs, "/goodData");
 outStr = runFsck(conf, 0, true, "/goodData", "-list-corruptfileblocks");
 System.out.println("3. good fsck out: " + outStr);
-assertTrue(outStr.contains("has 0 CORRUPT files"));
+assertTrue(outStr.contains("has 0 CORRUPT blocks"));
 util.cleanup(fs, "/goodData");
 
 // validate if a directory have any invalid entries
 util.createFiles(fs, "/corruptDa");
outStr = runFsck(conf, 0, true, "/corruptDa", "-list-corruptfileblocks");
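
For reference, a hedged illustration of the corrected wording, using the
path and count the test above asserts (the elided file list is
represented by "..."):

    The list of corrupt blocks under path '/corruptData' are:
    ...
    The filesystem under path '/corruptData' has 3 CORRUPT blocks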

[hadoop] branch branch-3.3 updated: HDFS-15494. TestReplicaCachingGetSpaceUsed#testReplicaCachingGetSpaceUsedByRBWReplica Fails on Windows. Contributed by Ravuri Sushma sree.

2021-03-31 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 13878fc  HDFS-15494. 
TestReplicaCachingGetSpaceUsed#testReplicaCachingGetSpaceUsedByRBWReplica Fails 
on Windows. Contributed by  Ravuri Sushma sree.
13878fc is described below

commit 13878fc06b6aea34263d8eb2dc09ae8c83d3acb7
Author: Brahma Reddy Battula 
AuthorDate: Thu Apr 1 09:19:39 2021 +0530

HDFS-15494. 
TestReplicaCachingGetSpaceUsed#testReplicaCachingGetSpaceUsedByRBWReplica Fails 
on Windows. Contributed by  Ravuri Sushma sree.

(cherry picked from commit 0665ce99308aba1277d8f36bad9308062ad4b6ea)
---
 .../server/datanode/fsdataset/impl/TestReplicaCachingGetSpaceUsed.java | 3 +++
 1 file changed, 3 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestReplicaCachingGetSpaceUsed.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestReplicaCachingGetSpaceUsed.java
index 6abf523..d4382d2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestReplicaCachingGetSpaceUsed.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestReplicaCachingGetSpaceUsed.java
@@ -43,6 +43,7 @@ import java.util.List;
 import java.util.Set;
 
 import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_DU_INTERVAL_KEY;
+import static org.apache.hadoop.test.PlatformAssumptions.assumeNotWindows;
 import static org.junit.Assert.assertEquals;
 
 /**
@@ -112,6 +113,8 @@ public class TestReplicaCachingGetSpaceUsed {
 
   @Test
   public void testReplicaCachingGetSpaceUsedByRBWReplica() throws Exception {
+ // This test cannot pass on Windows
+assumeNotWindows();
 FSDataOutputStream os =
 fs.create(new Path("/testReplicaCachingGetSpaceUsedByRBWReplica"));
 byte[] bytes = new byte[20480];
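
assumeNotWindows() rests on JUnit's assumption mechanism: a failed
assumption marks the test skipped rather than failed. A minimal JUnit 4
sketch of the same effect (this is not the Hadoop helper itself):

    import static org.junit.Assume.assumeTrue;

    import org.junit.Test;

    public class SkipOnWindowsDemo {
      @Test
      public void runsOnlyOffWindows() {
        // On Windows the assumption fails and JUnit reports the test
        // as skipped instead of failed.
        assumeTrue(
            !System.getProperty("os.name").toLowerCase().contains("windows"));
        // Platform-specific assertions would follow here.
      }
    }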

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: HDFS-15494. TestReplicaCachingGetSpaceUsed#testReplicaCachingGetSpaceUsedByRBWReplica Fails on Windows. Contributed by Ravuri Sushma sree.

2021-03-31 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 0665ce9  HDFS-15494. 
TestReplicaCachingGetSpaceUsed#testReplicaCachingGetSpaceUsedByRBWReplica Fails 
on Windows. Contributed by  Ravuri Sushma sree.
0665ce9 is described below

commit 0665ce99308aba1277d8f36bad9308062ad4b6ea
Author: Brahma Reddy Battula 
AuthorDate: Thu Apr 1 09:19:39 2021 +0530

HDFS-15494. 
TestReplicaCachingGetSpaceUsed#testReplicaCachingGetSpaceUsedByRBWReplica Fails 
on Windows. Contributed by  Ravuri Sushma sree.
---
 .../server/datanode/fsdataset/impl/TestReplicaCachingGetSpaceUsed.java | 3 +++
 1 file changed, 3 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestReplicaCachingGetSpaceUsed.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestReplicaCachingGetSpaceUsed.java
index 6abf523..d4382d2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestReplicaCachingGetSpaceUsed.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestReplicaCachingGetSpaceUsed.java
@@ -43,6 +43,7 @@ import java.util.List;
 import java.util.Set;
 
 import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_DU_INTERVAL_KEY;
+import static org.apache.hadoop.test.PlatformAssumptions.assumeNotWindows;
 import static org.junit.Assert.assertEquals;
 
 /**
@@ -112,6 +113,8 @@ public class TestReplicaCachingGetSpaceUsed {
 
   @Test
   public void testReplicaCachingGetSpaceUsedByRBWReplica() throws Exception {
+ // This test cannot pass on Windows
+assumeNotWindows();
 FSDataOutputStream os =
 fs.create(new Path("/testReplicaCachingGetSpaceUsedByRBWReplica"));
 byte[] bytes = new byte[20480];

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: MAPREDUCE-6826. Job fails with InvalidStateTransitonException: Invalid event: JOB_TASK_COMPLETED at SUCCEEDED/COMMITTING. Contributed by Bilwa S T.

2021-03-31 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new c70f5eb  MAPREDUCE-6826. Job fails with 
InvalidStateTransitonException: Invalid event: JOB_TASK_COMPLETED at 
SUCCEEDED/COMMITTING. Contributed by Bilwa S T.
c70f5eb is described below

commit c70f5eb8fa997b1e464d216516e8197d6fef8207
Author: Surendra Singh Lilhore 
AuthorDate: Tue May 19 11:06:36 2020 +0530

MAPREDUCE-6826. Job fails with InvalidStateTransitonException: Invalid 
event: JOB_TASK_COMPLETED at SUCCEEDED/COMMITTING. Contributed by Bilwa S T.

(cherry picked from commit d4e36409d40d9f0783234a3b98394962ae0da87e)
---
 .../org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java |  6 --
 .../apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java | 12 +++-
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
index 5ef1250..5489f52 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
@@ -422,7 +422,8 @@ public class JobImpl implements 
org.apache.hadoop.mapreduce.v2.app.job.Job,
   EnumSet.of(JobEventType.JOB_UPDATED_NODES,
   JobEventType.JOB_TASK_ATTEMPT_FETCH_FAILURE,
   JobEventType.JOB_TASK_ATTEMPT_COMPLETED,
-  JobEventType.JOB_MAP_TASK_RESCHEDULED))
+  JobEventType.JOB_MAP_TASK_RESCHEDULED,
+  JobEventType.JOB_TASK_COMPLETED))
 
   // Transitions from SUCCEEDED state
   .addTransition(JobStateInternal.SUCCEEDED, 
JobStateInternal.SUCCEEDED,
@@ -441,7 +442,8 @@ public class JobImpl implements 
org.apache.hadoop.mapreduce.v2.app.job.Job,
   JobEventType.JOB_TASK_ATTEMPT_FETCH_FAILURE,
   JobEventType.JOB_AM_REBOOT,
   JobEventType.JOB_TASK_ATTEMPT_COMPLETED,
-  JobEventType.JOB_MAP_TASK_RESCHEDULED))
+  JobEventType.JOB_MAP_TASK_RESCHEDULED,
+  JobEventType.JOB_TASK_COMPLETED))
 
   // Transitions from FAIL_WAIT state
   .addTransition(JobStateInternal.FAIL_WAIT,
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
index 43e59a7..5f378e4 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
@@ -204,7 +204,7 @@ public class TestJobImpl {
   public void testCheckJobCompleteSuccess() throws Exception {
 Configuration conf = new Configuration();
 conf.set(MRJobConfig.MR_AM_STAGING_DIR, stagingDir);
-AsyncDispatcher dispatcher = new AsyncDispatcher();
+DrainDispatcher dispatcher = new DrainDispatcher();
 dispatcher.init(conf);
 dispatcher.start();
 CyclicBarrier syncBarrier = new CyclicBarrier(2);
@@ -226,6 +226,11 @@ public class TestJobImpl {
 JobEventType.JOB_MAP_TASK_RESCHEDULED));
 assertJobState(job, JobStateInternal.COMMITTING);
 
+job.handle(new JobEvent(job.getID(),
+JobEventType.JOB_TASK_COMPLETED));
+dispatcher.await();
+assertJobState(job, JobStateInternal.COMMITTING);
+
 // let the committer complete and verify the job succeeds
 syncBarrier.await();
 assertJobState(job, JobStateInternal.SUCCEEDED);
@@ -237,6 +242,11 @@ public class TestJobImpl {
 job.handle(new JobEvent(job.getID(), 
 JobEventType.JOB_MAP_TASK_RESCHEDULED));
 assertJobState(job, JobStateInternal.SUCCEEDED);
+
+job.handle(new JobEvent(job.getID(),
+JobEventType.JOB_TASK_COMPLETED));
+dispatcher.await();
+assertJobState(job, JobStateInternal.SUCCEEDED);
 
 dispatcher.stop();
 commitHandler.stop();
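
The fix adds JOB_TASK_COMPLETED to the set of events the SUCCEEDED and
COMMITTING states tolerate. A deliberately simplified sketch of the
"ignored event" idea; Hadoop's real StateMachineFactory API is more
involved:

    import java.util.EnumSet;

    public class IgnoredEventDemo {
      enum State { COMMITTING, SUCCEEDED }
      enum Event { JOB_TASK_COMPLETED, JOB_MAP_TASK_RESCHEDULED, JOB_KILL }

      // Events a state absorbs without leaving it. Before the fix,
      // JOB_TASK_COMPLETED was missing from the ignore set, so a late
      // event raised InvalidStateTransitonException.
      static final EnumSet<Event> IGNORED =
          EnumSet.of(Event.JOB_MAP_TASK_RESCHEDULED, Event.JOB_TASK_COMPLETED);

      static State handle(State current, Event event) {
        if (IGNORED.contains(event)) {
          return current; // self-transition: nothing to do
        }
        throw new IllegalStateException(
            "Invalid event " + event + " at " + current);
      }

      public static void main(String[] args) {
        System.out.println(handle(State.COMMITTING, Event.JOB_TASK_COMPLETED));
        // prints: COMMITTING
      }
    }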

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: YARN-10544. AMParams.java having unnecessary access identifier static final. Contributed by ANANDA G B.

2021-03-30 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 03e42ef  YARN-10544. AMParams.java having unnecessary access 
identifier static final. Contributed by ANANDA G B.
03e42ef is described below

commit 03e42efa30bc084f6d9e45822c25ec87ead78e15
Author: Brahma Reddy Battula 
AuthorDate: Wed Mar 31 08:25:20 2021 +0530

YARN-10544. AMParams.java having unnecessary access identifier static 
final. Contributed by ANANDA G B.
---
 .../hadoop/mapreduce/v2/app/webapp/AMParams.java   | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AMParams.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AMParams.java
index 2ca7ff5..4bbd1da 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AMParams.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AMParams.java
@@ -22,13 +22,13 @@ package org.apache.hadoop.mapreduce.v2.app.webapp;
  * Params constants for the AM webapp and the history webapp.
  */
 public interface AMParams {
-  static final String RM_WEB = "rm.web";
-  static final String APP_ID = "app.id";
-  static final String JOB_ID = "job.id";
-  static final String TASK_ID = "task.id";
-  static final String TASK_TYPE = "task.type";
-  static final String TASK_STATE = "task.state";
-  static final String ATTEMPT_STATE = "attempt.state";
-  static final String COUNTER_GROUP = "counter.group";
-  static final String COUNTER_NAME = "counter.name";
+  String RM_WEB = "rm.web";
+  String APP_ID = "app.id";
+  String JOB_ID = "job.id";
+  String TASK_ID = "task.id";
+  String TASK_TYPE = "task.type";
+  String TASK_STATE = "task.state";
+  String ATTEMPT_STATE = "attempt.state";
+  String COUNTER_GROUP = "counter.group";
+  String COUNTER_NAME = "counter.name";
 }
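
Background for the change: the Java language specification makes every
field declared in an interface implicitly public, static and final, so
spelling the modifiers out adds nothing. A two-line illustration:

    public interface Demo {
      String RM_WEB = "rm.web"; // already public static final
      // static final String RM_WEB = "rm.web"; // identical, redundant
    }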

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: YARN-10466. Fix NullPointerException in yarn-services Component.java. Contributed by D M Murali Krishna Reddy

2021-03-30 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 413a4c3  YARN-10466. Fix NullPointerException in yarn-services 
Component.java. Contributed by D M Murali Krishna Reddy
413a4c3 is described below

commit 413a4c3c05d317090c706385c51e4cabcfd92b0e
Author: Brahma Reddy Battula 
AuthorDate: Tue Mar 30 13:52:07 2021 +0530

YARN-10466. Fix NullPointerException in yarn-services Component.java. 
Contributed by D M Murali Krishna Reddy
---
 .../java/org/apache/hadoop/yarn/service/component/Component.java | 5 +
 1 file changed, 5 insertions(+)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/Component.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/Component.java
index 0b0ba79..0e031f4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/Component.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/Component.java
@@ -830,6 +830,11 @@ public class Component implements 
EventHandler {
 targetExpressions.toArray(new TargetExpression[0])).build();
 break;
   }
+  if (constraint == null) {
+LOG.info("[COMPONENT {}] Placement constraint: null ",
+componentSpec.getName());
+continue;
+  }
   // The default AND-ed final composite constraint
   if (finalConstraint != null) {
 finalConstraint = PlacementConstraints

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: YARN-10466. Fix NullPointerException in yarn-services Component.java. Contributed by D M Murali Krishna Reddy

2021-03-30 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 5358313  YARN-10466. Fix NullPointerException in yarn-services 
Component.java. Contributed by D M Murali Krishna Reddy
5358313 is described below

commit 5358313f97d25e6e950873630c811b1cbde73f12
Author: Brahma Reddy Battula 
AuthorDate: Tue Mar 30 13:52:07 2021 +0530

YARN-10466. Fix NullPointerException in yarn-services Component.java. 
Contributed by D M Murali Krishna Reddy

(cherry picked from commit 413a4c3c05d317090c706385c51e4cabcfd92b0e)
---
 .../java/org/apache/hadoop/yarn/service/component/Component.java | 5 +
 1 file changed, 5 insertions(+)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/Component.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/Component.java
index 1f3ca22..0472977 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/Component.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/Component.java
@@ -833,6 +833,11 @@ public class Component implements 
EventHandler {
 targetExpressions.toArray(new TargetExpression[0])).build();
 break;
   }
+  if (constraint == null) {
+LOG.info("[COMPONENT {}] Placement constraint: null ",
+componentSpec.getName());
+continue;
+  }
   // The default AND-ed final composite constraint
   if (finalConstraint != null) {
 finalConstraint = PlacementConstraints

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: YARN-10439. addendum fix for shaded guava.

2021-03-30 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 6577bf1  YARN-10439. addendum fix for shaded guava.
6577bf1 is described below

commit 6577bf1891b11c9271d73491b311059677dfb376
Author: Brahma Reddy Battula 
AuthorDate: Tue Mar 30 13:48:40 2021 +0530

YARN-10439. addendum fix for shaded guava.
---
 .../src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
index 342d8d8..a06a0e6 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
@@ -18,7 +18,7 @@
 
 package org.apache.hadoop.yarn.service;
 
-import com.google.common.annotations.VisibleForTesting;
+import 
org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.ipc.Server;
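
The point of the addendum: depend on the relocated Guava copy bundled in
hadoop-thirdparty instead of whatever Guava version sits on the
application classpath. A sketch of the contrast; the annotation behaves
identically from either package:

    // Unshaded: resolves against the application's Guava jar.
    // import com.google.common.annotations.VisibleForTesting;
    // Shaded: resolves against the relocated hadoop-thirdparty copy.
    import org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting;

    public class ShadedImportDemo {
      @VisibleForTesting
      static int widenedForTests; // usage is unchanged by the relocation
    }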

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: YARN-10439. addendum fix for shaded guava.

2021-03-30 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new bac1326  YARN-10439. addendum fix for shaded guava.
bac1326 is described below

commit bac1326e4e1252adbfc19cf50ed966453a3a13ec
Author: Brahma Reddy Battula 
AuthorDate: Tue Mar 30 13:48:40 2021 +0530

YARN-10439. addendum fix for shaded guava.

(cherry picked from commit 6577bf1891b11c9271d73491b311059677dfb376)
---
 .../src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
index 342d8d8..a06a0e6 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
@@ -18,7 +18,7 @@
 
 package org.apache.hadoop.yarn.service;
 
-import com.google.common.annotations.VisibleForTesting;
+import 
org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.ipc.Server;

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: YARN-10441. Add support for hadoop.http.rmwebapp.scheduler.page.class. Contributed by D M Murali Krishna Reddy

2021-03-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b61f52e  YARN-10441. Add support for 
hadoop.http.rmwebapp.scheduler.page.class. Contributed by  D M Murali Krishna 
Reddy
b61f52e is described below

commit b61f52ec565b84306ec8d9e0b53f5d0390e1b597
Author: Brahma Reddy Battula 
AuthorDate: Tue Mar 30 09:51:47 2021 +0530

YARN-10441. Add support for hadoop.http.rmwebapp.scheduler.page.class. 
Contributed by  D M Murali Krishna Reddy
---
 .../java/org/apache/hadoop/yarn/conf/YarnConfiguration.java| 10 ++
 .../apache/hadoop/yarn/conf/TestYarnConfigurationFields.java   |  3 +++
 2 files changed, 13 insertions(+)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 1888ffb..2cf4a3b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -120,6 +120,8 @@ public class YarnConfiguration extends Configuration {
 CommonConfigurationKeys.ZK_TIMEOUT_MS),
 new DeprecationDelta(RM_ZK_RETRY_INTERVAL_MS,
 CommonConfigurationKeys.ZK_RETRY_INTERVAL_MS),
+new DeprecationDelta(HADOOP_HTTP_WEBAPP_SCHEDULER_PAGE,
+YARN_HTTP_WEBAPP_SCHEDULER_PAGE)
 });
 Configuration.addDeprecations(new DeprecationDelta[] {
 new DeprecationDelta("yarn.resourcemanager.display.per-user-apps",
@@ -2487,6 +2489,14 @@ public class YarnConfiguration extends Configuration {
   public static final String YARN_HTTP_WEBAPP_EXTERNAL_CLASSES =
   "yarn.http.rmwebapp.external.classes";
 
+  /**
+   * @deprecated This field is deprecated for
+   * {@link #YARN_HTTP_WEBAPP_SCHEDULER_PAGE}
+   */
+  @Deprecated
+  public static final String HADOOP_HTTP_WEBAPP_SCHEDULER_PAGE =
+  "hadoop.http.rmwebapp.scheduler.page.class";
+
   public static final String YARN_HTTP_WEBAPP_SCHEDULER_PAGE =
   "yarn.http.rmwebapp.scheduler.page.class";
 
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
index 9fda809..3dcd5cc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
@@ -157,6 +157,9 @@ public class TestYarnConfigurationFields extends 
TestConfigurationFieldsBase {
 configurationPropsToSkipCompare
 .add(YarnConfiguration.DEFAULT_RM_RESOURCE_PROFILES_SOURCE_FILE);
 
+configurationPropsToSkipCompare
+.add(YarnConfiguration.HADOOP_HTTP_WEBAPP_SCHEDULER_PAGE);
+
 // Ignore NodeManager "work in progress" variables
 configurationPrefixToSkipCompare
 .add(YarnConfiguration.NM_NETWORK_RESOURCE_ENABLED);
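
A hedged sketch of what the registered DeprecationDelta buys: a value
written under the old hadoop.* key becomes readable through the new
yarn.* key. This assumes hadoop-common on the classpath; new
Configuration(false) just skips loading the default resources:

    import org.apache.hadoop.conf.Configuration;

    public class DeprecationDemo {
      public static void main(String[] args) {
        Configuration.addDeprecations(new Configuration.DeprecationDelta[] {
            new Configuration.DeprecationDelta(
                "hadoop.http.rmwebapp.scheduler.page.class",
                "yarn.http.rmwebapp.scheduler.page.class")});
        Configuration conf = new Configuration(false);
        conf.set("hadoop.http.rmwebapp.scheduler.page.class", "MyPage");
        // The deprecated key transparently feeds its replacement.
        System.out.println(
            conf.get("yarn.http.rmwebapp.scheduler.page.class")); // MyPage
      }
    }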

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: YARN-10441. Add support for hadoop.http.rmwebapp.scheduler.page.class. Contributed by D M Murali Krishna Reddy

2021-03-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 616a41e  YARN-10441. Add support for 
hadoop.http.rmwebapp.scheduler.page.class. Contributed by  D M Murali Krishna 
Reddy
616a41e is described below

commit 616a41ee322303d2f9d274c08d1fe672a08a5241
Author: Brahma Reddy Battula 
AuthorDate: Tue Mar 30 09:51:47 2021 +0530

YARN-10441. Add support for hadoop.http.rmwebapp.scheduler.page.class. 
Contributed by  D M Murali Krishna Reddy

(cherry picked from commit b61f52ec565b84306ec8d9e0b53f5d0390e1b597)
---
 .../java/org/apache/hadoop/yarn/conf/YarnConfiguration.java| 10 ++
 .../apache/hadoop/yarn/conf/TestYarnConfigurationFields.java   |  3 +++
 2 files changed, 13 insertions(+)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index f560f73..568c4e9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -124,6 +124,8 @@ public class YarnConfiguration extends Configuration {
 CommonConfigurationKeys.ZK_TIMEOUT_MS),
 new DeprecationDelta(RM_ZK_RETRY_INTERVAL_MS,
 CommonConfigurationKeys.ZK_RETRY_INTERVAL_MS),
+new DeprecationDelta(HADOOP_HTTP_WEBAPP_SCHEDULER_PAGE,
+YARN_HTTP_WEBAPP_SCHEDULER_PAGE)
 });
 Configuration.addDeprecations(new DeprecationDelta[] {
 new DeprecationDelta("yarn.resourcemanager.display.per-user-apps",
@@ -2480,6 +2482,14 @@ public class YarnConfiguration extends Configuration {
   public static final String YARN_HTTP_WEBAPP_EXTERNAL_CLASSES =
   "yarn.http.rmwebapp.external.classes";
 
+  /**
+   * @deprecated This field is deprecated for
+   * {@link #YARN_HTTP_WEBAPP_SCHEDULER_PAGE}
+   */
+  @Deprecated
+  public static final String HADOOP_HTTP_WEBAPP_SCHEDULER_PAGE =
+  "hadoop.http.rmwebapp.scheduler.page.class";
+
   public static final String YARN_HTTP_WEBAPP_SCHEDULER_PAGE =
   "yarn.http.rmwebapp.scheduler.page.class";
 
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
index 6f781fa..5c934d8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
@@ -157,6 +157,9 @@ public class TestYarnConfigurationFields extends 
TestConfigurationFieldsBase {
 configurationPropsToSkipCompare
 .add(YarnConfiguration.DEFAULT_RM_RESOURCE_PROFILES_SOURCE_FILE);
 
+configurationPropsToSkipCompare
+.add(YarnConfiguration.HADOOP_HTTP_WEBAPP_SCHEDULER_PAGE);
+
 // Ignore NodeManager "work in progress" variables
 configurationPrefixToSkipCompare
 .add(YarnConfiguration.NM_NETWORK_RESOURCE_ENABLED);

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: YARN-10439. Yarn Service AM listens on all IP's on the machine. Contributed by D M Murali Krishna Reddy

2021-03-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 5181b20  YARN-10439. Yarn Service AM listens on all IP's on the 
machine. Contributed by  D M Murali Krishna Reddy
5181b20 is described below

commit 5181b2004b7b46cfc2b3e4d4c1ce9803b2714198
Author: Brahma Reddy Battula 
AuthorDate: Tue Mar 30 09:46:12 2021 +0530

YARN-10439. Yarn Service AM listens on all IP's on the machine. Contributed 
by  D M Murali Krishna Reddy

(cherry picked from commit d0dcfc405c624f73ed1af9527bbf456a10337a6d)
---
 .../apache/hadoop/yarn/service/ClientAMService.java   | 19 ++-
 .../org/apache/hadoop/yarn/service/ServiceMaster.java |  7 ++-
 .../hadoop/yarn/service/conf/YarnServiceConf.java |  2 ++
 .../org/apache/hadoop/yarn/service/MockServiceAM.java | 10 ++
 .../src/site/markdown/yarn-service/Configurations.md  |  1 +
 5 files changed, 33 insertions(+), 6 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
index 72ac550..342d8d8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.yarn.service;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.ipc.Server;
@@ -53,8 +54,10 @@ import 
org.apache.hadoop.yarn.service.api.records.ComponentContainers;
 import org.apache.hadoop.yarn.service.component.ComponentEvent;
 import 
org.apache.hadoop.yarn.service.component.instance.ComponentInstanceEvent;
 import 
org.apache.hadoop.yarn.service.component.instance.ComponentInstanceEventType;
+import org.apache.hadoop.yarn.service.exceptions.BadClusterStateException;
 import org.apache.hadoop.yarn.service.utils.FilterUtils;
 import org.apache.hadoop.yarn.service.utils.ServiceApiUtil;
+import org.apache.hadoop.yarn.service.utils.ServiceUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -64,6 +67,7 @@ import java.util.List;
 
 import static 
org.apache.hadoop.yarn.service.component.ComponentEventType.DECOMMISSION_INSTANCE;
 import static org.apache.hadoop.yarn.service.component.ComponentEventType.FLEX;
+import static 
org.apache.hadoop.yarn.service.conf.YarnServiceConf.YARN_SERVICE_AM_CLIENT_PORT_RANGE;
 
 public class ClientAMService extends AbstractService
 implements ClientAMProtocol {
@@ -84,9 +88,11 @@ public class ClientAMService extends AbstractService
   @Override protected void serviceStart() throws Exception {
 Configuration conf = getConfig();
 YarnRPC rpc = YarnRPC.create(conf);
-InetSocketAddress address = new InetSocketAddress(0);
+String nodeHostString = getNMHostName();
+
+InetSocketAddress address = new InetSocketAddress(nodeHostString, 0);
 server = rpc.getServer(ClientAMProtocol.class, this, address, conf,
-context.secretManager, 1);
+context.secretManager, 1, YARN_SERVICE_AM_CLIENT_PORT_RANGE);
 
 // Enable service authorization?
 if (conf.getBoolean(
@@ -97,9 +103,6 @@ public class ClientAMService extends AbstractService
 
 server.start();
 
-String nodeHostString =
-System.getenv(ApplicationConstants.Environment.NM_HOST.name());
-
 bindAddress = NetUtils.createSocketAddrForHost(nodeHostString,
 server.getListenerAddress().getPort());
 
@@ -107,6 +110,12 @@ public class ClientAMService extends AbstractService
 super.serviceStart();
   }
 
+  @VisibleForTesting
+  String getNMHostName() throws BadClusterStateException {
+return ServiceUtils.mandatoryEnvVariable(
+ApplicationConstants.Environment.NM_HOST.name());
+  }
+
   @Override protected void serviceStop() throws Exception {
 if (server != null) {
   server.stop();
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceMaster.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceMaster.java
index 670fc21..3120fad 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications
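
The core of the change is visible in the ClientAMService hunk above:
new InetSocketAddress(0) binds the wildcard address, i.e. every
interface on the machine, while passing the NodeManager's host (NM_HOST
from the container environment) restricts the listener to that
interface. A stand-alone sketch; the hostname is a placeholder:

    import java.net.InetSocketAddress;

    public class BindAddressDemo {
      public static void main(String[] args) {
        InetSocketAddress all = new InetSocketAddress(0); // old: wildcard
        InetSocketAddress one =
            new InetSocketAddress("nm-host.example.com", 0); // new: NM host
        System.out.println(all.getAddress().isAnyLocalAddress()); // true
        System.out.println(one.getHostString()); // nm-host.example.com
      }
    }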

[hadoop] branch trunk updated: YARN-10439. Yarn Service AM listens on all IP's on the machine. Contributed by D M Murali Krishna Reddy

2021-03-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d0dcfc4  YARN-10439. Yarn Service AM listens on all IP's on the 
machine. Contributed by  D M Murali Krishna Reddy
d0dcfc4 is described below

commit d0dcfc405c624f73ed1af9527bbf456a10337a6d
Author: Brahma Reddy Battula 
AuthorDate: Tue Mar 30 09:46:12 2021 +0530

YARN-10439. Yarn Service AM listens on all IP's on the machine. Contributed 
by  D M Murali Krishna Reddy
---
 .../apache/hadoop/yarn/service/ClientAMService.java   | 19 ++-
 .../org/apache/hadoop/yarn/service/ServiceMaster.java |  7 ++-
 .../hadoop/yarn/service/conf/YarnServiceConf.java |  2 ++
 .../org/apache/hadoop/yarn/service/MockServiceAM.java | 10 ++
 .../src/site/markdown/yarn-service/Configurations.md  |  1 +
 5 files changed, 33 insertions(+), 6 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
index 72ac550..342d8d8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.yarn.service;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.ipc.Server;
@@ -53,8 +54,10 @@ import 
org.apache.hadoop.yarn.service.api.records.ComponentContainers;
 import org.apache.hadoop.yarn.service.component.ComponentEvent;
 import 
org.apache.hadoop.yarn.service.component.instance.ComponentInstanceEvent;
 import 
org.apache.hadoop.yarn.service.component.instance.ComponentInstanceEventType;
+import org.apache.hadoop.yarn.service.exceptions.BadClusterStateException;
 import org.apache.hadoop.yarn.service.utils.FilterUtils;
 import org.apache.hadoop.yarn.service.utils.ServiceApiUtil;
+import org.apache.hadoop.yarn.service.utils.ServiceUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -64,6 +67,7 @@ import java.util.List;
 
 import static 
org.apache.hadoop.yarn.service.component.ComponentEventType.DECOMMISSION_INSTANCE;
 import static org.apache.hadoop.yarn.service.component.ComponentEventType.FLEX;
+import static 
org.apache.hadoop.yarn.service.conf.YarnServiceConf.YARN_SERVICE_AM_CLIENT_PORT_RANGE;
 
 public class ClientAMService extends AbstractService
 implements ClientAMProtocol {
@@ -84,9 +88,11 @@ public class ClientAMService extends AbstractService
   @Override protected void serviceStart() throws Exception {
 Configuration conf = getConfig();
 YarnRPC rpc = YarnRPC.create(conf);
-InetSocketAddress address = new InetSocketAddress(0);
+String nodeHostString = getNMHostName();
+
+InetSocketAddress address = new InetSocketAddress(nodeHostString, 0);
 server = rpc.getServer(ClientAMProtocol.class, this, address, conf,
-context.secretManager, 1);
+context.secretManager, 1, YARN_SERVICE_AM_CLIENT_PORT_RANGE);
 
 // Enable service authorization?
 if (conf.getBoolean(
@@ -97,9 +103,6 @@ public class ClientAMService extends AbstractService
 
 server.start();
 
-String nodeHostString =
-System.getenv(ApplicationConstants.Environment.NM_HOST.name());
-
 bindAddress = NetUtils.createSocketAddrForHost(nodeHostString,
 server.getListenerAddress().getPort());
 
@@ -107,6 +110,12 @@ public class ClientAMService extends AbstractService
 super.serviceStart();
   }
 
+  @VisibleForTesting
+  String getNMHostName() throws BadClusterStateException {
+return ServiceUtils.mandatoryEnvVariable(
+ApplicationConstants.Environment.NM_HOST.name());
+  }
+
   @Override protected void serviceStop() throws Exception {
 if (server != null) {
   server.stop();
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceMaster.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceMaster.java
index 670fc21..3120fad 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn

[hadoop] branch branch-3.3 updated: YARN-10437. Destroy yarn service if any YarnException occurs during submitApp. Contributed by D M Murali Krishna Reddy

2021-03-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new e9d8f16  YARN-10437. Destroy yarn service if any YarnException occurs 
during submitApp. Contributed by D M Murali Krishna Reddy
e9d8f16 is described below

commit e9d8f16a7022a5e2b6b8530afc06eada623e7d74
Author: Brahma Reddy Battula 
AuthorDate: Tue Mar 30 09:39:00 2021 +0530

YARN-10437. Destroy yarn service if any YarnException occurs during 
submitApp. Contributed by D M Murali Krishna Reddy

(cherry picked from commit 2d62dced4b60938cab630321830a0510d5391338)
---
 .../hadoop/yarn/service/client/ServiceClient.java  | 16 +++--
 .../yarn/service/TestYarnNativeServices.java   | 41 ++
 2 files changed, 55 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
index 78db4b4..0ce3091 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
@@ -557,7 +557,13 @@ public class ServiceClient extends AppAdminClient 
implements SliderExitCodes,
 
 // Write the definition first and then submit - AM will read the definition
 ServiceApiUtil.createDirAndPersistApp(fs, appDir, service);
-ApplicationId appId = submitApp(service);
+ApplicationId appId;
+try {
+  appId = submitApp(service);
+} catch(YarnException e){
+  actionDestroy(serviceName);
+  throw e;
+}
 cachedAppInfo.put(serviceName, new AppInfo(appId, service
 .getKerberosPrincipal().getPrincipalName()));
 service.setId(appId.toString());
@@ -1362,7 +1368,13 @@ public class ServiceClient extends AppAdminClient 
implements SliderExitCodes,
   ServiceApiUtil.validateAndResolveService(service, fs, getConfig());
   // see if it is actually running and bail out;
   verifyNoLiveAppInRM(serviceName, "start");
-  ApplicationId appId = submitApp(service);
+  ApplicationId appId;
+  try {
+appId = submitApp(service);
+  } catch (YarnException e) {
+actionDestroy(serviceName);
+throw e;
+  }
   cachedAppInfo.put(serviceName, new AppInfo(appId, service
   .getKerberosPrincipal().getPrincipalName()));
   service.setId(appId.toString());
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java
index ca1a8fa..2b717b7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java
@@ -41,10 +41,12 @@ import 
org.apache.hadoop.yarn.service.api.records.PlacementConstraint;
 import org.apache.hadoop.yarn.service.api.records.PlacementPolicy;
 import org.apache.hadoop.yarn.service.api.records.PlacementScope;
 import org.apache.hadoop.yarn.service.api.records.PlacementType;
+import org.apache.hadoop.yarn.service.api.records.Resource;
 import org.apache.hadoop.yarn.service.api.records.Service;
 import org.apache.hadoop.yarn.service.api.records.ServiceState;
 import org.apache.hadoop.yarn.service.client.ServiceClient;
 import org.apache.hadoop.yarn.service.conf.YarnServiceConstants;
+import org.apache.hadoop.yarn.service.exceptions.SliderException;
 import org.apache.hadoop.yarn.service.utils.ServiceApiUtil;
 import org.apache.hadoop.yarn.service.utils.SliderFileSystem;
 import org.hamcrest.CoreMatchers;
@@ -981,4 +983,43 @@ public class TestYarnNativeServices extends 
ServiceTestUtils {
 Assert.assertEquals(ServiceState.STABLE, client.getStatus(
 exampleApp.getName()).getState());
   }
+
+  public Service createServiceWithSingleComp(int memory){
+Service service = new Service();
+service.setName("example-app");
+service.setVersion("v1");
+Compo

[hadoop] branch trunk updated: YARN-10437. Destroy yarn service if any YarnException occurs during submitApp. Contributed by D M Murali Krishna Reddy

2021-03-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2d62dce  YARN-10437. Destroy yarn service if any YarnException occurs 
during submitApp. Contributed by D M Murali Krishna Reddy
2d62dce is described below

commit 2d62dced4b60938cab630321830a0510d5391338
Author: Brahma Reddy Battula 
AuthorDate: Tue Mar 30 09:39:00 2021 +0530

YARN-10437. Destroy yarn service if any YarnException occurs during 
submitApp. Contributed by D M Murali Krishna Reddy
---
 .../hadoop/yarn/service/client/ServiceClient.java  | 16 +++--
 .../yarn/service/TestYarnNativeServices.java   | 41 ++
 2 files changed, 55 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
index 6108338..901b81f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
@@ -557,7 +557,13 @@ public class ServiceClient extends AppAdminClient 
implements SliderExitCodes,
 
 // Write the definition first and then submit - AM will read the definition
 ServiceApiUtil.createDirAndPersistApp(fs, appDir, service);
-ApplicationId appId = submitApp(service);
+ApplicationId appId;
+try {
+  appId = submitApp(service);
+} catch(YarnException e){
+  actionDestroy(serviceName);
+  throw e;
+}
 cachedAppInfo.put(serviceName, new AppInfo(appId, service
 .getKerberosPrincipal().getPrincipalName()));
 service.setId(appId.toString());
@@ -1362,7 +1368,13 @@ public class ServiceClient extends AppAdminClient 
implements SliderExitCodes,
   ServiceApiUtil.validateAndResolveService(service, fs, getConfig());
   // see if it is actually running and bail out;
   verifyNoLiveAppInRM(serviceName, "start");
-  ApplicationId appId = submitApp(service);
+  ApplicationId appId;
+  try {
+appId = submitApp(service);
+  } catch (YarnException e) {
+actionDestroy(serviceName);
+throw e;
+  }
   cachedAppInfo.put(serviceName, new AppInfo(appId, service
   .getKerberosPrincipal().getPrincipalName()));
   service.setId(appId.toString());
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java
index 40b411e..45318b2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestYarnNativeServices.java
@@ -41,10 +41,12 @@ import 
org.apache.hadoop.yarn.service.api.records.PlacementConstraint;
 import org.apache.hadoop.yarn.service.api.records.PlacementPolicy;
 import org.apache.hadoop.yarn.service.api.records.PlacementScope;
 import org.apache.hadoop.yarn.service.api.records.PlacementType;
+import org.apache.hadoop.yarn.service.api.records.Resource;
 import org.apache.hadoop.yarn.service.api.records.Service;
 import org.apache.hadoop.yarn.service.api.records.ServiceState;
 import org.apache.hadoop.yarn.service.client.ServiceClient;
 import org.apache.hadoop.yarn.service.conf.YarnServiceConstants;
+import org.apache.hadoop.yarn.service.exceptions.SliderException;
 import org.apache.hadoop.yarn.service.utils.ServiceApiUtil;
 import org.apache.hadoop.yarn.service.utils.SliderFileSystem;
 import org.hamcrest.CoreMatchers;
@@ -982,4 +984,43 @@ public class TestYarnNativeServices extends 
ServiceTestUtils {
 Assert.assertEquals(ServiceState.STABLE, client.getStatus(
 exampleApp.getName()).getState());
   }
+
+  public Service createServiceWithSingleComp(int memory){
+Service service = new Service();
+service.setName("example-app");
+service.setVersion("v1");
+Component component = new Component();
+component.setName("sleep");

[hadoop] branch branch-3.3 updated: YARN-10671. Fix Typo in TestSchedulingRequestContainerAllocation. Contributed by D M Murali Krishna Reddy.

2021-03-09 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new f12293f  YARN-10671. Fix Typo in
TestSchedulingRequestContainerAllocation. Contributed by D M Murali Krishna
Reddy.
f12293f is described below

commit f12293fba28b3c512a7b639d6e809ea13f9fcd7a
Author: Brahma Reddy Battula 
AuthorDate: Tue Mar 9 20:26:07 2021 +0530

YARN-10671. Fix Typo in TestSchedulingRequestContainerAllocation.
Contributed by D M Murali Krishna Reddy.

(cherry picked from commit b2a565629dba125be5b330e84c313ba26b50e80f)
---
 .../scheduler/capacity/TestSchedulingRequestContainerAllocation.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestSchedulingRequestContainerAllocation.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestSchedulingRequestContainerAllocation.java
index f963e61..a4248c5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestSchedulingRequestContainerAllocation.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestSchedulingRequestContainerAllocation.java
@@ -862,7 +862,7 @@ public class TestSchedulingRequestContainerAllocation {
 try {
   rm.start();
 
-  MockNM nm1 = rm.registerNode("192.168.0.1:1234:", 100*GB, 100);
+  MockNM nm1 = rm.registerNode("192.168.0.1:1234", 100*GB, 100);
   MockNM nm2 = rm.registerNode("192.168.0.2:1234", 100*GB, 100);
   MockNM nm3 = rm.registerNode("192.168.0.3:1234", 100*GB, 100);
   MockNM nm4 = rm.registerNode("192.168.0.4:1234", 100*GB, 100);


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: YARN-10671. Fix Typo in TestSchedulingRequestContainerAllocation. Contributed by D M Murali Krishna Reddy.

2021-03-09 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b2a5656  YARN-10671. Fix Typo in
TestSchedulingRequestContainerAllocation. Contributed by D M Murali Krishna
Reddy.
b2a5656 is described below

commit b2a565629dba125be5b330e84c313ba26b50e80f
Author: Brahma Reddy Battula 
AuthorDate: Tue Mar 9 20:26:07 2021 +0530

YARN-10671. Fix Typo in TestSchedulingRequestContainerAllocation.
Contributed by D M Murali Krishna Reddy.
---
 .../scheduler/capacity/TestSchedulingRequestContainerAllocation.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestSchedulingRequestContainerAllocation.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestSchedulingRequestContainerAllocation.java
index f963e61..a4248c5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestSchedulingRequestContainerAllocation.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestSchedulingRequestContainerAllocation.java
@@ -862,7 +862,7 @@ public class TestSchedulingRequestContainerAllocation {
 try {
   rm.start();
 
-  MockNM nm1 = rm.registerNode("192.168.0.1:1234:", 100*GB, 100);
+  MockNM nm1 = rm.registerNode("192.168.0.1:1234", 100*GB, 100);
   MockNM nm2 = rm.registerNode("192.168.0.2:1234", 100*GB, 100);
   MockNM nm3 = rm.registerNode("192.168.0.3:1234", 100*GB, 100);
   MockNM nm4 = rm.registerNode("192.168.0.4:1234", 100*GB, 100);


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: YARN-9017. PlacementRule order is not maintained in CS. Contributed by Bilwa S T.

2021-02-18 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 8c8ef2f  YARN-9017. PlacementRule order is not maintained in CS. 
Contributed by Bilwa S T.
8c8ef2f is described below

commit 8c8ef2f444ec6f7608e3fabff5f1da87f1736d2d
Author: Inigo Goiri 
AuthorDate: Wed May 6 13:22:54 2020 -0700

YARN-9017. PlacementRule order is not maintained in CS. Contributed by 
Bilwa S T.

(cherry picked from commit 35010120fbbcad8618f99abf7130e53f98879a33)
---
 .../scheduler/capacity/CapacityScheduler.java  |  7 +++-
 .../capacity/CapacitySchedulerConfigValidator.java |  4 +-
 .../placement/TestPlacementManager.java| 49 +++---
 3 files changed, 50 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index a95cca2..890334f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -703,8 +703,11 @@ public class CapacityScheduler extends
 Set<String> distinguishRuleSet = CapacitySchedulerConfigValidator
 .validatePlacementRules(placementRuleStrs);
 
-// add UserGroupMappingPlacementRule if absent
-distinguishRuleSet.add(YarnConfiguration.USER_GROUP_PLACEMENT_RULE);
+// add UserGroupMappingPlacementRule if empty,default value of
+// yarn.scheduler.queue-placement-rules is user-group
+if (distinguishRuleSet.isEmpty()) {
+  distinguishRuleSet.add(YarnConfiguration.USER_GROUP_PLACEMENT_RULE);
+}
 
 placementRuleStrs = new ArrayList<>(distinguishRuleSet);
 
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfigValidator.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfigValidator.java
index 1c598efd..c3b4df4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfigValidator.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfigValidator.java
@@ -28,7 +28,7 @@ import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.util.Collection;
-import java.util.HashSet;
+import java.util.LinkedHashSet;
 import java.util.Set;
 
 public final class CapacitySchedulerConfigValidator {
@@ -58,7 +58,7 @@ public final class CapacitySchedulerConfigValidator {
 
   public static Set<String> validatePlacementRules(
   Collection<String> placementRuleStrs) throws IOException {
-Set<String> distinguishRuleSet = new HashSet<>();
+Set<String> distinguishRuleSet = new LinkedHashSet<>();
 // fail the case if we get duplicate placementRule add in
 for (String pls : placementRuleStrs) {
   if (!distinguishRuleSet.add(pls)) {
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestPlacementManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestPlacementManager.java
index 083af3b..22a9125 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestPlacementManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestPlacementManager.java
@@ -18,7 +18,6 @@
 
 package org.apache.hadoop.yarn.server.resourcemanager.placement;
 
-import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
 import org.apache.hadoop.yarn.conf.YarnConf
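The crux of the patch is the switch from HashSet to LinkedHashSet: both reject
duplicate placement rules, but only the latter iterates in insertion order, which
is what preserves the configured rule order. A self-contained illustration (the
rule names are examples only):

import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

public class RuleOrderDemo {
  public static void main(String[] args) {
    Set<String> unordered = new HashSet<>();
    Set<String> ordered = new LinkedHashSet<>();
    for (String rule : Arrays.asList("user-group", "app-name", "primary-group")) {
      unordered.add(rule); // add() still returns false on duplicates in both sets
      ordered.add(rule);
    }
    // HashSet iteration order is unspecified and may not match the configuration:
    System.out.println(unordered);
    // LinkedHashSet preserves insertion order: [user-group, app-name, primary-group]
    System.out.println(ordered);
  }
}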

[hadoop] branch branch-3.3 updated: YARN-8942. PriorityBasedRouterPolicy throws exception if all sub-cluster weights have negative value. Contributed by Bilwa S T.

2021-02-18 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 97171b9  YARN-8942. PriorityBasedRouterPolicy throws exception if all 
sub-cluster weights have negative value. Contributed by Bilwa S T.
97171b9 is described below

commit 97171b9b1833f45e123eb100362a5518c5112b6e
Author: Inigo Goiri 
AuthorDate: Wed May 13 10:04:12 2020 -0700

YARN-8942. PriorityBasedRouterPolicy throws exception if all sub-cluster 
weights have negative value. Contributed by Bilwa S T.

(cherry picked from commit 108ecf992f0004dd64a7143d1c400de1361b13f3)
---
 .../policies/router/PriorityRouterPolicy.java  |  5 
 .../policies/router/TestPriorityRouterPolicy.java  | 29 ++
 2 files changed, 34 insertions(+)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/router/PriorityRouterPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/router/PriorityRouterPolicy.java
index a1f7666..b81ca07 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/router/PriorityRouterPolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/router/PriorityRouterPolicy.java
@@ -24,6 +24,7 @@ import java.util.Map;
 import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
 import org.apache.hadoop.yarn.exceptions.YarnException;
 import org.apache.hadoop.yarn.server.federation.policies.FederationPolicyUtils;
+import 
org.apache.hadoop.yarn.server.federation.policies.exceptions.FederationPolicyException;
 import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
 import org.apache.hadoop.yarn.server.federation.store.records.SubClusterIdInfo;
 import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
@@ -65,6 +66,10 @@ public class PriorityRouterPolicy extends 
AbstractRouterPolicy {
 chosen = id;
   }
 }
+if (chosen == null) {
+  throw new FederationPolicyException(
+  "No Active Subcluster with weight vector greater than zero");
+}
 
 return chosen;
   }
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/policies/router/TestPriorityRouterPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/policies/router/TestPriorityRouterPolicy.java
index 3c036c1..e1799d3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/policies/router/TestPriorityRouterPolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/policies/router/TestPriorityRouterPolicy.java
@@ -16,6 +16,7 @@
  */
 package org.apache.hadoop.yarn.server.federation.policies.router;
 
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
@@ -24,6 +25,7 @@ import java.util.Map;
 
 import org.apache.hadoop.yarn.exceptions.YarnException;
 import 
org.apache.hadoop.yarn.server.federation.policies.dao.WeightedPolicyInfo;
+import 
org.apache.hadoop.yarn.server.federation.policies.exceptions.FederationPolicyException;
 import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
 import org.apache.hadoop.yarn.server.federation.store.records.SubClusterIdInfo;
 import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
@@ -82,4 +84,31 @@ public class TestPriorityRouterPolicy extends 
BaseRouterPoliciesTest {
 Assert.assertEquals("sc5", chosen.getId());
   }
 
+  @Test
+  public void testZeroSubClustersWithPositiveWeight() throws Exception {
+Map<SubClusterIdInfo, Float> routerWeights = new HashMap<>();
+Map<SubClusterIdInfo, Float> amrmWeights = new HashMap<>();
+// Set negative value to all subclusters
+for (int i = 0; i < 5; i++) {
+  SubClusterIdInfo sc = new SubClusterIdInfo("sc" + i);
+
+  SubClusterInfo sci = mock(SubClusterInfo.class);
+  when(sci.getState()).thenReturn(SubClusterState.SC_RUNNING);
+  when(sci.getSubClusterId()).thenReturn(sc.toId());
+  getActiveSubclusters().put(sc.toId(), sci);
+  routerWeights.put(sc, 0.0f);
+  amrmWeights.put(sc, -1.0f);
+}
+getPolicyInfo().setRouterPolicyWeights(routerWeights);
+get
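Stripped of the YARN types, the policy scans for the subcluster with the highest
strictly positive weight and, with this fix, fails fast instead of returning null
when no such subcluster exists. A simplified, standalone sketch (the real code
operates on SubClusterIdInfo weights and throws FederationPolicyException):

import java.util.HashMap;
import java.util.Map;

public final class WeightedPickDemo {
  static String pickHighestWeight(Map<String, Float> weights) {
    String chosen = null;
    float best = 0f; // only strictly positive weights are eligible
    for (Map.Entry<String, Float> e : weights.entrySet()) {
      if (e.getValue() > best) {
        best = e.getValue();
        chosen = e.getKey();
      }
    }
    if (chosen == null) { // mirrors the new guard in PriorityRouterPolicy
      throw new IllegalStateException(
          "No Active Subcluster with weight vector greater than zero");
    }
    return chosen;
  }

  public static void main(String[] args) {
    Map<String, Float> weights = new HashMap<>();
    weights.put("sc1", 0.3f);
    weights.put("sc2", 0.7f);
    System.out.println(pickHighestWeight(weights)); // prints sc2
    weights.put("sc1", -1.0f);
    weights.put("sc2", 0.0f);
    pickHighestWeight(weights); // throws: nothing strictly positive remains
  }
}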

[hadoop] 02/02: YARN-9301. Too many InvalidStateTransitionException with SLS. Contributed by Bilwa S T.

2021-02-18 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 5cb9657320ecbca653a081eeb419a73b2d6f85a2
Author: Inigo Goiri 
AuthorDate: Tue May 12 08:24:34 2020 -0700

YARN-9301. Too many InvalidStateTransitionException with SLS. Contributed 
by Bilwa S T.

(cherry picked from commit 96bbc3bc972619bd830b2f935c06a1585a5470c6)
---
 .../java/org/apache/hadoop/yarn/sls/resourcemanager/MockAMLauncher.java  | 1 -
 1 file changed, 1 deletion(-)

diff --git 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/resourcemanager/MockAMLauncher.java
 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/resourcemanager/MockAMLauncher.java
index 24c795b..37bf96a 100644
--- 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/resourcemanager/MockAMLauncher.java
+++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/resourcemanager/MockAMLauncher.java
@@ -29,7 +29,6 @@ import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 import org.apache.hadoop.yarn.security.AMRMTokenIdentifier;
 import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
 import 
org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncherEvent;
-import 
org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncherEventType;
 import 
org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher;
 import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttempt;
 import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEvent;


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 01/02: YARN-9301. Too many InvalidStateTransitionException with SLS. Contributed by Bilwa S T.

2021-02-18 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 5be3a1dc7b58449ea8472bc3423ffac347da7f59
Author: Inigo Goiri 
AuthorDate: Tue May 12 08:20:03 2020 -0700

YARN-9301. Too many InvalidStateTransitionException with SLS. Contributed 
by Bilwa S T.

(cherry picked from commit 9cbd0cd2a9268ff2e8fed0af335e9c4f91c5f601)
---
 .../yarn/sls/resourcemanager/MockAMLauncher.java   | 61 --
 1 file changed, 32 insertions(+), 29 deletions(-)

diff --git 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/resourcemanager/MockAMLauncher.java
 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/resourcemanager/MockAMLauncher.java
index 208629a..24c795b 100644
--- 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/resourcemanager/MockAMLauncher.java
+++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/resourcemanager/MockAMLauncher.java
@@ -82,36 +82,39 @@ public class MockAMLauncher extends 
ApplicationMasterLauncher
   @Override
   @SuppressWarnings("unchecked")
   public void handle(AMLauncherEvent event) {
-if (AMLauncherEventType.LAUNCH == event.getType()) {
-  ApplicationId appId =
-  event.getAppAttempt().getAppAttemptId().getApplicationId();
-
-  // find AMSimulator
-  AMSimulator ams = appIdAMSim.get(appId);
-  if (ams != null) {
-try {
-  Container amContainer = event.getAppAttempt().getMasterContainer();
-
-  setupAMRMToken(event.getAppAttempt());
-
-  // Notify RMAppAttempt to change state
-  super.context.getDispatcher().getEventHandler().handle(
-  new RMAppAttemptEvent(event.getAppAttempt().getAppAttemptId(),
-  RMAppAttemptEventType.LAUNCHED));
-
-  ams.notifyAMContainerLaunched(
-  event.getAppAttempt().getMasterContainer());
-  LOG.info("Notify AM launcher launched:" + amContainer.getId());
-
-  se.getNmMap().get(amContainer.getNodeId())
-  .addNewContainer(amContainer, 1L);
-
-  return;
-} catch (Exception e) {
-  throw new YarnRuntimeException(e);
-}
+ApplicationId appId =
+event.getAppAttempt().getAppAttemptId().getApplicationId();
+// find AMSimulator
+AMSimulator ams = appIdAMSim.get(appId);
+if (ams == null) {
+  throw new YarnRuntimeException(
+  "Didn't find any AMSimulator for applicationId=" + appId);
+}
+Container amContainer = event.getAppAttempt().getMasterContainer();
+switch (event.getType()) {
+case LAUNCH:
+  try {
+setupAMRMToken(event.getAppAttempt());
+// Notify RMAppAttempt to change state
+super.context.getDispatcher().getEventHandler().handle(
+new RMAppAttemptEvent(event.getAppAttempt().getAppAttemptId(),
+RMAppAttemptEventType.LAUNCHED));
+
+ams.notifyAMContainerLaunched(
+event.getAppAttempt().getMasterContainer());
+LOG.info("Notify AM launcher launched:" + amContainer.getId());
+
+se.getNmMap().get(amContainer.getNodeId())
+.addNewContainer(amContainer, -1);
+return;
+  } catch (Exception e) {
+throw new YarnRuntimeException(e);
   }
-
+case CLEANUP:
+  se.getNmMap().get(amContainer.getNodeId())
+  .cleanupContainer(amContainer.getId());
+  break;
+default:
   throw new YarnRuntimeException(
   "Didn't find any AMSimulator for applicationId=" + appId);
 }
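Structurally, the rewrite turns a handler that special-cased LAUNCH inside an if
into an enum switch that also services CLEANUP and rejects anything unexpected.
The dispatch shape in isolation (the event type and handler bodies below are
illustrative, not the SLS classes):

enum LauncherEventType { LAUNCH, CLEANUP }

final class DispatchDemo {
  void handle(LauncherEventType type) {
    switch (type) {
    case LAUNCH:
      System.out.println("launch: set up token, mark attempt LAUNCHED, register container");
      return; // returning here means control can never fall through into CLEANUP
    case CLEANUP:
      System.out.println("cleanup: release the AM container on its node");
      break;
    default:
      throw new IllegalStateException("Unexpected event type: " + type);
    }
  }
}

Note that in the patch the LAUNCH branch either returns or rethrows from its catch
block, so the missing break there cannot actually fall through into CLEANUP.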


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated (a1bebfd -> 5cb9657)

2021-02-18 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from a1bebfd  YARN-10359. Log container report only if list is not empty. 
Contributed by Bilwa S T.
 new 5be3a1d  YARN-9301. Too many InvalidStateTransitionException with SLS. 
Contributed by Bilwa S T.
 new 5cb9657  YARN-9301. Too many InvalidStateTransitionException with SLS. 
Contributed by Bilwa S T.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../yarn/sls/resourcemanager/MockAMLauncher.java   | 62 +++---
 1 file changed, 32 insertions(+), 30 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: YARN-10359. Log container report only if list is not empty. Contributed by Bilwa S T.

2021-02-18 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new a1bebfd  YARN-10359. Log container report only if list is not empty. 
Contributed by Bilwa S T.
a1bebfd is described below

commit a1bebfd85e1315bf4f4d62b1c2cdcf9150b82315
Author: bibinchundatt 
AuthorDate: Sat Aug 1 13:03:46 2020 +0530

YARN-10359. Log container report only if list is not empty. Contributed by 
Bilwa S T.

(cherry picked from commit 5323e83edfe63355ec38ffdaacc0c27d14cad31c)
---
 .../apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
index 6901559c..a98d31c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
@@ -398,7 +398,7 @@ public class NodeStatusUpdaterImpl extends AbstractService 
implements
   nodeManagerVersionId, containerReports, getRunningApplications(),
   nodeLabels, physicalResource, nodeAttributes, nodeStatus);
 
-  if (containerReports != null) {
+  if (containerReports != null && !containerReports.isEmpty()) {
 LOG.info("Registering with RM using containers :" + containerReports);
   }
   if (logAggregationEnabled) {


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: YARN-10361. Make custom DAO classes configurable into RMWebApp#JAXBContextResolver.

2021-02-18 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 72904c0  YARN-10361. Make custom DAO classes configurable into 
RMWebApp#JAXBContextResolver.
72904c0 is described below

commit 72904c014d9139322d8f2920fe7747fb45242e0b
Author: Prabhu Joseph 
AuthorDate: Wed Aug 5 20:51:04 2020 +0530

YARN-10361. Make custom DAO classes configurable into 
RMWebApp#JAXBContextResolver.

Contributed by Bilwa ST.

(cherry picked from commit c7e71a6c0beb2748988b339a851a129b5e57f8c4)
---
 .../apache/hadoop/yarn/conf/YarnConfiguration.java |  8 ++-
 .../src/main/resources/yarn-default.xml| 18 ++-
 .../webapp/JAXBContextResolver.java| 58 --
 3 files changed, 78 insertions(+), 6 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index b725222..14aa00a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -2454,7 +2454,13 @@ public class YarnConfiguration extends Configuration {
   "yarn.http.rmwebapp.external.classes";
 
   public static final String YARN_HTTP_WEBAPP_SCHEDULER_PAGE =
-  "hadoop.http.rmwebapp.scheduler.page.class";
+  "yarn.http.rmwebapp.scheduler.page.class";
+
+  public static final String YARN_HTTP_WEBAPP_CUSTOM_DAO_CLASSES =
+  "yarn.http.rmwebapp.custom.dao.classes";
+
+  public static final String YARN_HTTP_WEBAPP_CUSTOM_UNWRAPPED_DAO_CLASSES =
+  "yarn.http.rmwebapp.custom.unwrapped.dao.classes";
 
   /**
* Whether or not users are allowed to request that Docker containers honor
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index 4c0aca9..8f0de6b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -3444,10 +3444,26 @@
     <description>
     Used to specify custom scheduler page
     </description>
-    <name>hadoop.http.rmwebapp.scheduler.page.class</name>
+    <name>yarn.http.rmwebapp.scheduler.page.class</name>
     <value></value>
   </property>
 
+  <property>
+    <description>
+    Used to specify custom DAO classes used by custom web services.
+    </description>
+    <name>yarn.http.rmwebapp.custom.dao.classes</name>
+    <value></value>
+  </property>
+
+  <property>
+    <description>
+    Used to specify custom DAO classes used by custom web services which requires
+    root unwrapping.
+    </description>
+    <name>yarn.http.rmwebapp.custom.unwrapped.dao.classes</name>
+    <value></value>
+  </property>
 
   <property>
     <description>The Node Label script to run. Script output Line starting with
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/JAXBContextResolver.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/JAXBContextResolver.java
index f6eb2ad..a31434b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/JAXBContextResolver.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/JAXBContextResolver.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.yarn.server.resourcemanager.webapp;
 
+import com.google.inject.Inject;
 import com.google.inject.Singleton;
 import com.sun.jersey.api.json.JSONConfiguration;
 import com.sun.jersey.api.json.JSONJAXBContext;
@@ -28,6 +29,10 @@ import javax.ws.rs.ext.ContextResolver;
 import javax.ws.rs.ext.Provider;
 import javax.xml.bind.JAXBContext;
 
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.UserInfo;
 import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.*;
 import org.apache.hadoop.yarn.webapp.RemoteExceptionData;
@@ -36,9 +41,17 @@ import org.apache.hadoop.yarn.webapp.RemoteExceptionData;
 @Provider
 public class JAXBContextResolver implements ContextResolver<JAXBContext> {
 
+  private static final Log LOG =
+  LogFactory.getLog(JAXBContextResolver.class.getName());
+
   private final Map typesCo
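Wiring custom DAO classes in is then a one-line configuration per key. A hedged
usage sketch (the com.example.* class names are placeholders for user-supplied
DAOs, not classes that ship with Hadoop):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class CustomDaoConfigDemo {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    // DAO classes the resolver should serialize as-is:
    conf.set(YarnConfiguration.YARN_HTTP_WEBAPP_CUSTOM_DAO_CLASSES,
        "com.example.MyAppInfo,com.example.MyQueueInfo");
    // DAO classes that additionally need JSON root unwrapping:
    conf.set(YarnConfiguration.YARN_HTTP_WEBAPP_CUSTOM_UNWRAPPED_DAO_CLASSES,
        "com.example.MyClusterInfo");
    System.out.println(conf.get(YarnConfiguration.YARN_HTTP_WEBAPP_CUSTOM_DAO_CLASSES));
  }
}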

[hadoop] branch branch-3.3 updated: YARN-8047. RMWebApp make external class pluggable.

2021-02-18 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 0c46ab5  YARN-8047. RMWebApp make external class pluggable.
0c46ab5 is described below

commit 0c46ab51b5058fdd3fc80a4013a54e5238d0be85
Author: Prabhu Joseph 
AuthorDate: Tue Jul 7 18:02:29 2020 +0530

YARN-8047. RMWebApp make external class pluggable.

Contributed by Bilwa S T.

(cherry picked from commit 3a4d05b850449c51a13f3a15fe0d756fdf50b4b2)
---
 .../apache/hadoop/yarn/conf/YarnConfiguration.java |  6 +++
 .../src/main/resources/yarn-default.xml| 20 
 .../server/resourcemanager/webapp/RMWebApp.java| 11 +
 .../resourcemanager/webapp/RmController.java   | 53 --
 4 files changed, 87 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 8461667..b725222 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -2450,6 +2450,12 @@ public class YarnConfiguration extends Configuration {
   public static final boolean DEFAULT_NM_DOCKER_ALLOW_HOST_PID_NAMESPACE =
   false;
 
+  public static final String YARN_HTTP_WEBAPP_EXTERNAL_CLASSES =
+  "yarn.http.rmwebapp.external.classes";
+
+  public static final String YARN_HTTP_WEBAPP_SCHEDULER_PAGE =
+  "hadoop.http.rmwebapp.scheduler.page.class";
+
   /**
* Whether or not users are allowed to request that Docker containers honor
* the debug deletion delay. This is useful for troubleshooting Docker
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index 498b08c..4c0aca9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -3430,6 +3430,26 @@
   </property>
 
   <property>
+    <description>
+    Used to specify custom web services for Resourcemanager. Value can be
+    classnames separated by comma.
+    Ex: org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices,
+    org.apache.hadoop.yarn.server.resourcemanager.webapp.DummyClass
+    </description>
+    <name>yarn.http.rmwebapp.external.classes</name>
+    <value></value>
+  </property>
+
+  <property>
+    <description>
+    Used to specify custom scheduler page
+    </description>
+    <name>hadoop.http.rmwebapp.scheduler.page.class</name>
+    <value></value>
+  </property>
+
+
+  <property>
     <description>The Node Label script to run. Script output Line starting with
    "NODE_PARTITION:" will be considered as Node Label Partition. In case of
    multiple lines have this pattern, then last one will be considered
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebApp.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebApp.java
index 316e7ed..5075d25 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebApp.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebApp.java
@@ -55,6 +55,7 @@ public class RMWebApp extends WebApp implements YarnWebParams 
{
 bind(RMWebServices.class);
 bind(GenericExceptionHandler.class);
 bind(RMWebApp.class).toInstance(this);
+bindExternalClasses();
 
 if (rm != null) {
   bind(ResourceManager.class).toInstance(rm);
@@ -97,6 +98,16 @@ public class RMWebApp extends WebApp implements 
YarnWebParams {
   return super.getRedirectPath();
   }
 
+  private void bindExternalClasses() {
+YarnConfiguration yarnConf = new YarnConfiguration(rm.getConfig());
+Class<?>[] externalClasses = yarnConf
+.getClasses(YarnConfiguration.YARN_HTTP_WEBAPP_EXTERNAL_CLASSES);
+for (Class c : externalClasses) {
+  bind(c);
+}
+  }
+
+
   private String buildRedirectPath() {
 // make a copy of the original configuration so not to mutate it. Also use
 // an YarnConfiguration to force loading of yarn-site.xml.
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RmControlle
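Enabling the hook is a matter of listing classes under the new keys before the RM
web app starts. A hedged sketch (the com.example.* names are placeholders;
RMWebServices is the stock class named in the yarn-default.xml example above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ExternalWebAppConfigDemo {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    // Comma-separated web service classes for RMWebApp#bindExternalClasses to bind:
    conf.set(YarnConfiguration.YARN_HTTP_WEBAPP_EXTERNAL_CLASSES,
        "org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices,"
            + "com.example.ExtraWebServices");
    // Custom scheduler page (this key is later renamed to yarn.* by YARN-10361 above):
    conf.set(YarnConfiguration.YARN_HTTP_WEBAPP_SCHEDULER_PAGE,
        "com.example.CustomSchedulerPage");
  }
}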

[hadoop] branch trunk updated (368f2f6 -> ff59fbb)

2020-09-24 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 368f2f6  HDFS-15590. namenode fails to start when ordered snapshot 
deletion feature is disabled (#2326)
 add ff59fbb  HDFS-15025. Applying NVDIMM storage media to HDFS (#2189)

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/fs/StorageType.java |  19 +-
 .../java/org/apache/hadoop/fs/shell/Count.java |   2 +-
 .../java/org/apache/hadoop/fs/shell/TestCount.java |   4 +-
 .../apache/hadoop/hdfs/protocol/HdfsConstants.java |   5 +
 .../hadoop/hdfs/protocolPB/PBHelperClient.java |   4 +
 .../hadoop-hdfs-client/src/main/proto/hdfs.proto   |   1 +
 .../blockmanagement/BlockStoragePolicySuite.java   |   6 +
 .../server/datanode/fsdataset/FsVolumeSpi.java |   3 +
 .../datanode/fsdataset/impl/FsDatasetImpl.java |   6 +-
 .../datanode/fsdataset/impl/FsVolumeImpl.java  |   7 +-
 .../org/apache/hadoop/hdfs/tools/DFSAdmin.java |   6 +-
 .../src/main/resources/hdfs-default.xml|   6 +-
 .../src/site/markdown/ArchivalStorage.md   |  13 +-
 .../src/site/markdown/HdfsQuotaAdminGuide.md   |   6 +-
 .../hadoop-hdfs/src/site/markdown/WebHDFS.md   |   8 +
 .../apache/hadoop/hdfs/TestBlockStoragePolicy.java | 194 +
 .../hadoop/hdfs/net/TestDFSNetworkTopology.java| 132 +-
 .../hadoop/hdfs/protocolPB/TestPBHelper.java   |   8 +-
 .../hdfs/security/token/block/TestBlockToken.java  |   2 +-
 .../blockmanagement/TestBlockStatsMXBean.java  |  47 +++--
 .../blockmanagement/TestDatanodeManager.java   |   9 +-
 .../hdfs/server/datanode/SimulatedFSDataset.java   |   5 +
 .../hadoop/hdfs/server/datanode/TestDataDirs.java  |   8 +-
 .../hdfs/server/datanode/TestDirectoryScanner.java |   5 +
 .../datanode/extdataset/ExternalVolumeImpl.java|   5 +
 .../datanode/fsdataset/impl/TestFsVolumeList.java  |  26 +++
 .../impl/TestReservedSpaceCalculator.java  |  17 ++
 .../namenode/TestNamenodeStorageDirectives.java|  24 ++-
 .../sps/TestExternalStoragePolicySatisfier.java|  27 +++
 .../org/apache/hadoop/hdfs/web/TestWebHDFS.java|   4 +
 30 files changed, 436 insertions(+), 173 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: YARN-10397. SchedulerRequest should be forwarded to scheduler if custom scheduler supports placement constraints. Contributed by Bilwa S T.

2020-09-09 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new ea37a05  YARN-10397. SchedulerRequest should be forwarded to scheduler 
if custom scheduler supports placement constraints. Contributed by Bilwa S T.
ea37a05 is described below

commit ea37a05d4b9a49a44bda56fb733b7517174416d5
Author: Brahma Reddy Battula 
AuthorDate: Wed Sep 9 17:08:13 2020 +0530

YARN-10397. SchedulerRequest should be forwarded to scheduler if custom 
scheduler supports placement constraints. Contributed by Bilwa S T.

(cherry picked from commit 43572fc7f88429a9804fa5889b82a0bbd5d3d78e)
---
 .../server/resourcemanager/scheduler/AbstractYarnScheduler.java  | 9 +
 .../resourcemanager/scheduler/capacity/CapacityScheduler.java| 8 
 .../constraint/processor/SchedulerPlacementProcessor.java| 3 +--
 3 files changed, 18 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
index 70d2714..23bfc9c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
@@ -884,6 +884,15 @@ public abstract class AbstractYarnScheduler
 + " does not support reservations");
   }
 
+  /**
+   * By default placement constraint is disabled. Schedulers which support
+   * placement constraint can override this value.
+   * @return enabled or not
+   */
+  public boolean placementConstraintEnabled() {
+return false;
+  }
+
   protected void refreshMaximumAllocation(Resource newMaxAlloc) {
 nodeTracker.setConfiguredMaxAllocation(newMaxAlloc);
   }
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index 157137e..4649221 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -3285,4 +3285,12 @@ public class CapacityScheduler extends
   public void setMaxRunningAppsEnforcer(CSMaxRunningAppsEnforcer enforcer) {
 this.maxRunningEnforcer = enforcer;
   }
+
+  /**
+   * Returning true as capacity scheduler supports placement constraints.
+   */
+  @Override
+  public boolean placementConstraintEnabled() {
+return true;
+  }
 }
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/SchedulerPlacementProcessor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/SchedulerPlacementProcessor.java
index 5332e34..b69a799 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/SchedulerPlacementProcessor.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/SchedulerPlacementProcessor.java
@@ -22,7 +22,6 @@ import 
org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
 import org.apache.hadoop.yarn.exceptions.YarnException;
-import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -39,7 +38,7 @@ public class SchedulerPlacementProcessor extends 
AbstractPlacementProcessor {
   AllocateReque

[hadoop] branch trunk updated: YARN-10397. SchedulerRequest should be forwarded to scheduler if custom scheduler supports placement constraints. Contributed by Bilwa S T.

2020-09-09 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 43572fc  YARN-10397. SchedulerRequest should be forwarded to scheduler 
if custom scheduler supports placement constraints. Contributed by Bilwa S T.
43572fc is described below

commit 43572fc7f88429a9804fa5889b82a0bbd5d3d78e
Author: Brahma Reddy Battula 
AuthorDate: Wed Sep 9 17:08:13 2020 +0530

YARN-10397. SchedulerRequest should be forwarded to scheduler if custom 
scheduler supports placement constraints. Contributed by Bilwa S T.
---
 .../server/resourcemanager/scheduler/AbstractYarnScheduler.java  | 9 +
 .../resourcemanager/scheduler/capacity/CapacityScheduler.java| 8 
 .../constraint/processor/SchedulerPlacementProcessor.java| 3 +--
 3 files changed, 18 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
index 43d8f3a..542b8bb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
@@ -884,6 +884,15 @@ public abstract class AbstractYarnScheduler
 + " does not support reservations");
   }
 
+  /**
+   * By default placement constraint is disabled. Schedulers which support
+   * placement constraint can override this value.
+   * @return enabled or not
+   */
+  public boolean placementConstraintEnabled() {
+return false;
+  }
+
   protected void refreshMaximumAllocation(Resource newMaxAlloc) {
 nodeTracker.setConfiguredMaxAllocation(newMaxAlloc);
   }
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index 699c831..2c87b33 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -3284,4 +3284,12 @@ public class CapacityScheduler extends
   public void setMaxRunningAppsEnforcer(CSMaxRunningAppsEnforcer enforcer) {
 this.maxRunningEnforcer = enforcer;
   }
+
+  /**
+   * Returning true as capacity scheduler supports placement constraints.
+   */
+  @Override
+  public boolean placementConstraintEnabled() {
+return true;
+  }
 }
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/SchedulerPlacementProcessor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/SchedulerPlacementProcessor.java
index 5332e34..b69a799 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/SchedulerPlacementProcessor.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/constraint/processor/SchedulerPlacementProcessor.java
@@ -22,7 +22,6 @@ import 
org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
 import org.apache.hadoop.yarn.exceptions.YarnException;
-import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -39,7 +38,7 @@ public class SchedulerPlacementProcessor extends 
AbstractPlacementProcessor {
   AllocateRequest request, AllocateResponse response) throws Yar
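Behind both commits is the replacement of an instanceof check on CapacityScheduler
with a polymorphic capability flag, so any scheduler can declare placement-constraint
support. The pattern in isolation (the interface and classes below are illustrative,
not the YARN types):

interface Scheduler {
  // Capability flag: constraint-aware schedulers override this to return true.
  default boolean placementConstraintEnabled() { return false; }
}

final class ConstraintAwareScheduler implements Scheduler {
  @Override
  public boolean placementConstraintEnabled() { return true; }
}

final class PlacementProcessorDemo {
  static void allocate(Scheduler scheduler, boolean hasSchedulingRequests) {
    if (hasSchedulingRequests && !scheduler.placementConstraintEnabled()) {
      throw new UnsupportedOperationException(
          "SchedulingRequests require a scheduler with placement constraint support");
    }
    // ...otherwise forward the SchedulingRequests to the scheduler...
  }
}

Compared with the removed instanceof test, the flag keeps the processor free of a
compile-time dependency on CapacityScheduler, which is why the import disappears
from SchedulerPlacementProcessor.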

[hadoop] branch trunk updated: HADOOP-17220. Upgrade slf4j to 1.7.30 (to address CVE-2018-8088). Contributed by Brahma Reddy Battula.

2020-08-24 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 15a0fed  HADOOP-17220. Upgrade slf4j to 1.7.30 (to address
CVE-2018-8088). Contributed by Brahma Reddy Battula.
15a0fed is described below

commit 15a0fed637129be049300eb363341f8f5365
Author: Brahma Reddy Battula 
AuthorDate: Mon Aug 24 19:03:22 2020 +0530

HADOOP-17220. Upgrade slf4j to 1.7.30 (to address CVE-2018-8088).
Contributed by Brahma Reddy Battula.
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 373450c..12ee139 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -79,7 +79,7 @@
     <httpcore.version>4.4.10</httpcore.version>
 
 
-    <slf4j.version>1.7.25</slf4j.version>
+    <slf4j.version>1.7.30</slf4j.version>
     <log4j.version>1.2.17</log4j.version>
 
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: HADOOP-17220. Upgrade slf4j to 1.7.30 (to address CVE-2018-8088). Contributed by Brahma Reddy Battula.

2020-08-24 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new d05051c  HADOOP-17220. Upgrade slf4j to 1.7.30 (to address
CVE-2018-8088). Contributed by Brahma Reddy Battula.
d05051c is described below

commit d05051c840e9557b06aea61a84613be6fe31daae
Author: Brahma Reddy Battula 
AuthorDate: Mon Aug 24 19:03:22 2020 +0530

HADOOP-17220. Upgrade slf4j to 1.7.30 (to address CVE-2018-8088).
Contributed by Brahma Reddy Battula.

(cherry picked from commit 15a0fed637129be049300eb363341f8f5365)
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index f8b8274..adfe8bd 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -79,7 +79,7 @@
     <httpcore.version>4.4.10</httpcore.version>
 
 
-    <slf4j.version>1.7.25</slf4j.version>
+    <slf4j.version>1.7.30</slf4j.version>
     <log4j.version>1.2.17</log4j.version>
 
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: YARN-10229. [Federation] Client should be able to submit application to RM directly using normal client conf. Contributed by Bilwa S T.

2020-08-03 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new eac5583  YARN-10229. [Federation] Client should be able to submit 
application to RM directly using normal client conf. Contributed by Bilwa S T.
eac5583 is described below

commit eac558380fd7d3c2e78b8956e2080688bb1dd8bb
Author: Brahma Reddy Battula 
AuthorDate: Mon Aug 3 12:54:36 2020 +0530

YARN-10229. [Federation] Client should be able to submit application to RM 
directly using normal client conf. Contributed by Bilwa S T.
---
 .../nodemanager/amrmproxy/AMRMProxyService.java| 35 --
 .../amrmproxy/TestAMRMProxyService.java| 21 +
 2 files changed, 53 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/AMRMProxyService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/AMRMProxyService.java
index d3c4a1d..fe278f3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/AMRMProxyService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/AMRMProxyService.java
@@ -108,6 +108,8 @@ public class AMRMProxyService extends CompositeService 
implements
   private Map<ApplicationId, RequestInterceptorChainWrapper> applPipelineMap;
   private RegistryOperations registry;
   private AMRMProxyMetrics metrics;
+  private FederationStateStoreFacade federationFacade;
+  private boolean federationEnabled = false;
 
   /**
* Creates an instance of the service.
@@ -144,7 +146,10 @@ public class AMRMProxyService extends CompositeService 
implements
   RegistryOperations.class);
   addService(this.registry);
 }
-
+this.federationFacade = FederationStateStoreFacade.getInstance();
+this.federationEnabled =
+conf.getBoolean(YarnConfiguration.FEDERATION_ENABLED,
+YarnConfiguration.DEFAULT_FEDERATION_ENABLED);
 super.serviceInit(conf);
   }
 
@@ -389,13 +394,22 @@ public class AMRMProxyService extends CompositeService 
implements
   throws IOException, YarnException {
 long startTime = clock.getTime();
 try {
-  LOG.info("Callback received for initializing request "
-  + "processing pipeline for an AM");
   ContainerTokenIdentifier containerTokenIdentifierForKey =
   
BuilderUtils.newContainerTokenIdentifier(request.getContainerToken());
   ApplicationAttemptId appAttemptId =
   containerTokenIdentifierForKey.getContainerID()
   .getApplicationAttemptId();
+  ApplicationId applicationID = appAttemptId.getApplicationId();
+  // Checking if application is there in federation state store only
+  // if federation is enabled. If
+  // application is submitted to router then it adds it in statestore.
+  // if application is not found in statestore that means its
+  // submitted to RM
+  if (!checkIfAppExistsInStateStore(applicationID)) {
+return;
+  }
+  LOG.info("Callback received for initializing request "
+  + "processing pipeline for an AM");
   Credentials credentials = YarnServerSecurityUtils
   .parseCredentials(request.getContainerLaunchContext());
 
@@ -772,6 +786,21 @@ public class AMRMProxyService extends CompositeService 
implements
 }
   }
 
+  boolean checkIfAppExistsInStateStore(ApplicationId applicationID) {
+if (!federationEnabled) {
+  return true;
+}
+
+try {
+  // Check if app is there in state store. If app is not there then it
+  // throws Exception
+  this.federationFacade.getApplicationHomeSubCluster(applicationID);
+} catch (YarnException ex) {
+  return false;
+}
+return true;
+  }
+
   @SuppressWarnings("unchecked")
   private Token<AMRMTokenIdentifier> getFirstAMRMToken(
   Collection<Token<? extends TokenIdentifier>> allTokens) {
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestAMRMProxyService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestAMRMProxyService.java
index b269fa4..60e3838 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestAMRMProxyService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-no

[hadoop] branch branch-3.3 updated: YARN-10229. [Federation] Client should be able to submit application to RM directly using normal client conf. Contributed by Bilwa S T.

2020-08-03 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 643ff48  YARN-10229. [Federation] Client should be able to submit 
application to RM directly using normal client conf. Contributed by Bilwa S T.
643ff48 is described below

commit 643ff4881dba5379741c45292376fd98f0a32ba0
Author: Brahma Reddy Battula 
AuthorDate: Mon Aug 3 12:54:36 2020 +0530

YARN-10229. [Federation] Client should be able to submit application to RM 
directly using normal client conf. Contributed by Bilwa S T.

(cherry picked from commit eac558380fd7d3c2e78b8956e2080688bb1dd8bb)
---
 .../nodemanager/amrmproxy/AMRMProxyService.java| 35 --
 .../amrmproxy/TestAMRMProxyService.java| 21 +
 2 files changed, 53 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/AMRMProxyService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/AMRMProxyService.java
index d3c4a1d..fe278f3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/AMRMProxyService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/AMRMProxyService.java
@@ -108,6 +108,8 @@ public class AMRMProxyService extends CompositeService 
implements
   private Map<ApplicationId, RequestInterceptorChainWrapper> applPipelineMap;
   private RegistryOperations registry;
   private AMRMProxyMetrics metrics;
+  private FederationStateStoreFacade federationFacade;
+  private boolean federationEnabled = false;
 
   /**
* Creates an instance of the service.
@@ -144,7 +146,10 @@ public class AMRMProxyService extends CompositeService 
implements
   RegistryOperations.class);
   addService(this.registry);
 }
-
+this.federationFacade = FederationStateStoreFacade.getInstance();
+this.federationEnabled =
+conf.getBoolean(YarnConfiguration.FEDERATION_ENABLED,
+YarnConfiguration.DEFAULT_FEDERATION_ENABLED);
 super.serviceInit(conf);
   }
 
@@ -389,13 +394,22 @@ public class AMRMProxyService extends CompositeService 
implements
   throws IOException, YarnException {
 long startTime = clock.getTime();
 try {
-  LOG.info("Callback received for initializing request "
-  + "processing pipeline for an AM");
   ContainerTokenIdentifier containerTokenIdentifierForKey =
   
BuilderUtils.newContainerTokenIdentifier(request.getContainerToken());
   ApplicationAttemptId appAttemptId =
   containerTokenIdentifierForKey.getContainerID()
   .getApplicationAttemptId();
+  ApplicationId applicationID = appAttemptId.getApplicationId();
+  // Check the federation state store for the application only when
+  // federation is enabled. An application submitted through the Router
+  // is recorded in the state store; if it is absent there, it was
+  // submitted directly to the RM.
+  if (!checkIfAppExistsInStateStore(applicationID)) {
+    return;
+  }
+  LOG.info("Callback received for initializing request "
+  + "processing pipeline for an AM");
   Credentials credentials = YarnServerSecurityUtils
   .parseCredentials(request.getContainerLaunchContext());
 
@@ -772,6 +786,21 @@ public class AMRMProxyService extends CompositeService 
implements
 }
   }
 
+  boolean checkIfAppExistsInStateStore(ApplicationId applicationID) {
+    if (!federationEnabled) {
+      return true;
+    }
+
+    try {
+      // Look the application up in the state store; the lookup throws a
+      // YarnException when the application is absent.
+      this.federationFacade.getApplicationHomeSubCluster(applicationID);
+    } catch (YarnException ex) {
+      return false;
+    }
+    return true;
+  }
+
   @SuppressWarnings("unchecked")
  private Token<AMRMTokenIdentifier> getFirstAMRMToken(
      Collection<Token<? extends TokenIdentifier>> allTokens) {
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestAMRMProxyService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestAMRMProxyService.java
index b269fa4..60e3838 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestAMRMProxyService.ja
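
For context, the guard above is driven entirely by
YarnConfiguration.FEDERATION_ENABLED ("yarn.federation.enabled", default
false), the same constant the diff reads in serviceInit(). A minimal,
self-contained sketch of flipping that flag; only the hand-built
Configuration is an assumption here, the constants appear in the diff
itself:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class FederationFlagSketch {
      public static void main(String[] args) {
        Configuration conf = new YarnConfiguration();
        conf.setBoolean(YarnConfiguration.FEDERATION_ENABLED, true);
        // serviceInit() caches this value; when it is false,
        // checkIfAppExistsInStateStore() short-circuits to true.
        System.out.println(conf.getBoolean(
            YarnConfiguration.FEDERATION_ENABLED,
            YarnConfiguration.DEFAULT_FEDERATION_ENABLED)); // prints true
      }
    }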

[hadoop] annotated tag rel/release-3.3.0 created (now b8afefe)

2020-07-14 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to annotated tag rel/release-3.3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at b8afefe  (tag)
 tagging aa96f1871bfd858f9bac59cf2a81ec470da649af (commit)
 replaces remove-ozone
  by Brahma Reddy Battula
  on Wed Jul 15 09:56:19 2020 +0530

- Log -
Hadoop 3.3.0 release
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIcBAABAgAGBQJfDoVrAAoJEDhtgO+B50aauq0P/2ZBj5qx7J1LySkPVGgBvjna
CKpuXABqz2zoRkGPeAtIPfMu3XHOj2JPKpHNZbJHgMImEoapvHYNQ8u4Et8FovQX
FYqoMS0PmEV77j60M60KAR278Q9zw9tj9FeN4B29mL7mrR4BBoqTFWgvd8zdq+Eb
htWTdAonoKiNXTr/ZxMECiI4YffWRJnLPKsHmyhUKDj2ZoYMfGpWN5ZWGo5DLKaG
/92OY4v4o+iplpB9iWAVYWe4b4hByRmfe2Kj+dETO6uS4GdRYSOIl0xLf75VXNom
xI6MWPfaYWV7KZq5cfT3IH19DZM2t4AoyjDe/hPSg3r/mbm+V/nOeGK2wOFsXBcA
MLqhdczlx+icrtcJ9G1OWl7TLiHw/QAuroLPPCXiGb3yLaXMczfXOHJAqP4cn/HG
esDCnsTtlPX2ZLBrg4Zn5DMEBHd9F4US1/Ya5Kg/qK3fo7wYUXnw+gFzDdh5z9lH
f79zQhVyg1H8DXI50el90Fhl/FtqOrt0LriQoyV5N4zOwmWyFBV2e16s+MWWolgV
p0dLnzPCxNStCh77Ygw/ox3NKkZQPY09jttvuN+75Z8PI1zUvcRY72dNsNFbfn6k
Y5GQlRCFOkLMg29gtC93ltYTVQ8trW8Qd0ixHS6NLuTJq3E+apV93P9dumACJs/k
XuvC2iUJd3bMiIjknbuR
=domw
-----END PGP SIGNATURE-----
---

No new revisions were added by this update.


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: YARN-10341. Yarn Service Container Completed event doesn't get processed. Contributed by Bilwa S T.

2020-07-09 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new dfe6039  YARN-10341. Yarn Service Container Completed event doesn't 
get processed. Contributed by Bilwa S T.
dfe6039 is described below

commit dfe60392c91be21f574c1659af22f5c381b2675a
Author: Brahma Reddy Battula 
AuthorDate: Thu Jul 9 12:34:52 2020 +0530

YARN-10341. Yarn Service Container Completed event doesn't get processed. 
Contributed by Bilwa S T.
---
 .../hadoop/yarn/service/ServiceScheduler.java  |  2 +-
 .../apache/hadoop/yarn/service/TestServiceAM.java  | 88 ++
 2 files changed, 89 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceScheduler.java
index 458a7a1..0d77479 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceScheduler.java
@@ -737,7 +737,7 @@ public class ServiceScheduler extends CompositeService {
   LOG.warn(
   "Container {} Completed. No component instance exists. 
exitStatus={}. diagnostics={} ",
   containerId, status.getExitStatus(), status.getDiagnostics());
-  return;
+  continue;
 }
 ComponentEvent event =
 new ComponentEvent(instance.getCompName(), CONTAINER_COMPLETED)
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestServiceAM.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestServiceAM.java
index bbcbee2..5b961a8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestServiceAM.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestServiceAM.java
@@ -22,22 +22,29 @@ import com.google.common.collect.ImmutableMap;
 import org.apache.commons.io.FileUtils;
 import org.apache.curator.test.TestingCluster;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.security.Credentials;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.TokenIdentifier;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.yarn.api.protocolrecords.ResourceTypes;
+import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.api.records.ContainerStatus;
 import org.apache.hadoop.yarn.api.records.ResourceTypeInfo;
 import org.apache.hadoop.yarn.client.api.AMRMClient;
 import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.event.AsyncDispatcher;
+import org.apache.hadoop.yarn.event.Event;
+import org.apache.hadoop.yarn.event.EventHandler;
 import org.apache.hadoop.yarn.security.DockerCredentialTokenIdentifier;
 import org.apache.hadoop.yarn.service.api.records.Artifact;
 import org.apache.hadoop.yarn.service.api.records.Component;
 import org.apache.hadoop.yarn.service.api.records.ResourceInformation;
 import org.apache.hadoop.yarn.service.api.records.Service;
+import org.apache.hadoop.yarn.service.api.records.ServiceState;
 import org.apache.hadoop.yarn.service.component.ComponentState;
 import org.apache.hadoop.yarn.service.component.instance.ComponentInstance;
 import 
org.apache.hadoop.yarn.service.component.instance.ComponentInstanceState;
@@ -47,7 +54,9 @@ import org.apache.hadoop.yarn.util.resource.ResourceUtils;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
+import org.junit.Rule;
 import org.junit.Test;
+import org.mockito.Mockito;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -63,6 +72,8 @@ import java.util.concurrent.TimeoutException;
 
 import static 
org.apache.hadoop.registry.client.api.RegistryConstants.KEY_REGISTRY_ZK_QUORUM;
 import static org.junit.Assert.assertEqual

[hadoop] branch branch-3.3 updated: YARN-10341. Yarn Service Container Completed event doesn't get processed. Contributed by Bilwa S T.

2020-07-09 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 7b17573  YARN-10341. Yarn Service Container Completed event doesn't 
get processed. Contributed by Bilwa S T.
7b17573 is described below

commit 7b175739a9dfd9f5aef2f257f7e3ca5bdc8f8f09
Author: Brahma Reddy Battula 
AuthorDate: Thu Jul 9 12:34:52 2020 +0530

YARN-10341. Yarn Service Container Completed event doesn't get processed. 
Contributed by Bilwa S T.

(cherry picked from commit dfe60392c91be21f574c1659af22f5c381b2675a)
---
 .../hadoop/yarn/service/ServiceScheduler.java  |  2 +-
 .../apache/hadoop/yarn/service/TestServiceAM.java  | 88 ++
 2 files changed, 89 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceScheduler.java
index 458a7a1..0d77479 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ServiceScheduler.java
@@ -737,7 +737,7 @@ public class ServiceScheduler extends CompositeService {
   LOG.warn(
   "Container {} Completed. No component instance exists. 
exitStatus={}. diagnostics={} ",
   containerId, status.getExitStatus(), status.getDiagnostics());
-  return;
+  continue;
 }
 ComponentEvent event =
 new ComponentEvent(instance.getCompName(), CONTAINER_COMPLETED)
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestServiceAM.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestServiceAM.java
index bbcbee2..5b961a8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestServiceAM.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestServiceAM.java
@@ -22,22 +22,29 @@ import com.google.common.collect.ImmutableMap;
 import org.apache.commons.io.FileUtils;
 import org.apache.curator.test.TestingCluster;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.security.Credentials;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.TokenIdentifier;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.yarn.api.protocolrecords.ResourceTypes;
+import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.api.records.ContainerStatus;
 import org.apache.hadoop.yarn.api.records.ResourceTypeInfo;
 import org.apache.hadoop.yarn.client.api.AMRMClient;
 import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.event.AsyncDispatcher;
+import org.apache.hadoop.yarn.event.Event;
+import org.apache.hadoop.yarn.event.EventHandler;
 import org.apache.hadoop.yarn.security.DockerCredentialTokenIdentifier;
 import org.apache.hadoop.yarn.service.api.records.Artifact;
 import org.apache.hadoop.yarn.service.api.records.Component;
 import org.apache.hadoop.yarn.service.api.records.ResourceInformation;
 import org.apache.hadoop.yarn.service.api.records.Service;
+import org.apache.hadoop.yarn.service.api.records.ServiceState;
 import org.apache.hadoop.yarn.service.component.ComponentState;
 import org.apache.hadoop.yarn.service.component.instance.ComponentInstance;
 import 
org.apache.hadoop.yarn.service.component.instance.ComponentInstanceState;
@@ -47,7 +54,9 @@ import org.apache.hadoop.yarn.util.resource.ResourceUtils;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
+import org.junit.Rule;
 import org.junit.Test;
+import org.mockito.Mockito;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -63,6 +72,8 @@ import java.util.concurrent.TimeoutException;
 
 imp
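
The one-word ServiceScheduler fix above is easy to gloss over. Here is a
self-contained sketch (illustrative names, not the Hadoop source) of the
difference: with "return", the first container lacking a component
instance aborts the callback and every later status in the batch is
silently dropped; "continue" skips only that one status.

    import java.util.List;

    public class ReturnVsContinueSketch {
      static void process(List<String> statuses) {
        for (String s : statuses) {
          if (s.startsWith("unknown")) {
            continue; // with "return" here, "c2" below is never handled
          }
          System.out.println("handled " + s);
        }
      }

      public static void main(String[] args) {
        process(List.of("c1", "unknown-c", "c2")); // handled c1, handled c2
      }
    }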

[hadoop] branch trunk updated: YARN-10344. Sync netty versions in hadoop-yarn-csi. (#2126)

2020-07-08 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 10d2189  YARN-10344. Sync netty versions in hadoop-yarn-csi. (#2126)
10d2189 is described below

commit 10d218934c9bc143bf8578c92cdbd6df6a4d3b98
Author: Akira Ajisaka 
AuthorDate: Thu Jul 9 13:59:47 2020 +0900

YARN-10344. Sync netty versions in hadoop-yarn-csi. (#2126)
---
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml
index 3d86b6b..ac6ef0b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml
@@ -66,6 +66,18 @@
       <groupId>io.grpc</groupId>
       <artifactId>grpc-netty</artifactId>
       <version>${grpc.version}</version>
+      <exclusions>
+        <!-- Exclude grpc-netty's transitive netty artifacts so the
+             versions managed in hadoop-project are used instead. -->
+        <exclusion>
+          <groupId>io.netty</groupId>
+          <artifactId>netty-codec-http2</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>io.netty</groupId>
+          <artifactId>netty-handler-proxy</artifactId>
+        </exclusion>
+      </exclusions>
     </dependency>
     <dependency>
       <groupId>junit</groupId>


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
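
A standard way to confirm that the exclusions above take effect is the
module's dependency tree, e.g. "mvn -pl
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi dependency:tree
-Dincludes=io.netty" (plain maven-dependency-plugin usage, nothing
Hadoop-specific): after this change the io.netty artifacts should
resolve at the versions managed in hadoop-project rather than the ones
grpc-netty pulls in transitively.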



[hadoop] annotated tag release-3.3.0-RC0 created (now 970fe5b)

2020-07-06 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to annotated tag release-3.3.0-RC0
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at 970fe5b  (tag)
 tagging aa96f1871bfd858f9bac59cf2a81ec470da649af (commit)
 replaces remove-ozone
  by Brahma Reddy Battula
  on Tue Jul 7 03:01:07 2020 +0530

- Log -
Release candidate - hadoop-3.3.0-RC0
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIcBAABAgAGBQJfA5gbAAoJEDhtgO+B50aaoS0QAI9J0dls4aOXc6N0rYinnG7e
UXZuAAQTXRVtwLSnS6BG3jkU2ocgVSo5GJCw3+MVpcIcR3/BpoY7taLZWxshXI/I
S0KpANwcz8eiHR8jFsI1C+3HMKDFdE05bZqOTMLBHes5DyWHAdTSJfnpdKMtumH5
KxVrcYCyHvAEAG7kr/3dTg/BedPU81mdmFE9tihQGBC2eAUEIojUVaOzXFCJK1oV
oacaaWYha3ryzL9E+VJCOMqZEYbzYsFV4XAhxvGGZMvuKx6MnKDWdf1xYsFi+USI
0uJhQsrAk4NjdM5Ve4nVo0r2LnCuEQ5HkCFADCc9LlRPYkkwgn3xjtGxjR8S6iiz
HF+ZrlxYNSnueNcPv7/bgIVbjLLb08rvOjZ2nwtEiA+G6Zr0o5HSA0/Yg5zwjEmw
tLpnyoRSSQXqn+RGyh2BCuYZ+3TOakZF3muscyMDy+mCibpb6pW8EZsZeg+0Bzu4
IIZbcErGJDgJGrInuUFjj0b0WbJ4vDVVA6a6ZCeCC0ySladkeXeztwCWNRNMfWxj
Li8xPf5O39h3jICEvKTE6ePTVMpKwBakiasif1qi/HD810Avhq8PU2IemrmiCXhz
s2sQ2tQNrSL4w7WjV83dJkR5jmBga/Lsk9B2FSXgO5ZLfo+5q+vn0mnVhYPgP8/O
aSkbMa/eBRo9UsZ1d/iJ
=KHBg
-----END PGP SIGNATURE-----
---

No new revisions were added by this update.


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3.0 updated: Updated the index as per 3.3.0 release

2020-07-06 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3.0 by this push:
 new aa96f18  Updated the index as per 3.3.0 release
aa96f18 is described below

commit aa96f1871bfd858f9bac59cf2a81ec470da649af
Author: Brahma Reddy Battula 
AuthorDate: Mon Jul 6 23:24:25 2020 +0530

Updated the index as per 3.3.0 release
---
 hadoop-project/src/site/markdown/index.md.vm | 237 +--
 1 file changed, 39 insertions(+), 198 deletions(-)

diff --git a/hadoop-project/src/site/markdown/index.md.vm 
b/hadoop-project/src/site/markdown/index.md.vm
index 438145a..78d8a47 100644
--- a/hadoop-project/src/site/markdown/index.md.vm
+++ b/hadoop-project/src/site/markdown/index.md.vm
@@ -16,10 +16,7 @@ Apache Hadoop ${project.version}
 
 
 Apache Hadoop ${project.version} incorporates a number of significant
-enhancements over the previous major release line (hadoop-2.x).
-
-This release is generally available (GA), meaning that it represents a point of
-API stability and quality that we consider production-ready.
+enhancements over the previous major release line (hadoop-3.2).
 
 Overview
 
@@ -27,224 +24,68 @@ Overview
 Users are encouraged to read the full set of release notes.
 This page provides an overview of the major changes.
 
-Minimum required Java version increased from Java 7 to Java 8
---
+ARM Support
+
+This is the first release to support ARM architectures.
 
-All Hadoop JARs are now compiled targeting a runtime version of Java 8.
-Users still using Java 7 or below must upgrade to Java 8.
+Upgrade protobuf from 2.5.0 to something newer
+-
+Protobuf upgraded to 3.7.1 as protobuf-2.5.0 reached EOL.
 
-Support for erasure coding in HDFS
+Java 11 runtime support
 --
 
-Erasure coding is a method for durably storing data with significant space
-savings compared to replication. Standard encodings like Reed-Solomon (10,4)
-have a 1.4x space overhead, compared to the 3x overhead of standard HDFS
-replication.
-
-Since erasure coding imposes additional overhead during reconstruction
-and performs mostly remote reads, it has traditionally been used for
-storing colder, less frequently accessed data. Users should consider
-the network and CPU overheads of erasure coding when deploying this
-feature.
-
-More details are available in the
-[HDFS Erasure Coding](./hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html)
-documentation.
-
-YARN Timeline Service v.2

-
-We are introducing an early preview (alpha 2) of a major revision of YARN
-Timeline Service: v.2. YARN Timeline Service v.2 addresses two major
-challenges: improving scalability and reliability of Timeline Service, and
-enhancing usability by introducing flows and aggregation.
-
-YARN Timeline Service v.2 alpha 2 is provided so that users and developers
-can test it and provide feedback and suggestions for making it a ready
-replacement for Timeline Service v.1.x. It should be used only in a test
-capacity.
-
-More details are available in the
-[YARN Timeline Service 
v.2](./hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html)
-documentation.
-
-Shell script rewrite

+Java 11 runtime support is completed.
 
-The Hadoop shell scripts have been rewritten to fix many long-standing
-bugs and include some new features.  While an eye has been kept towards
-compatibility, some changes may break existing installations.
+Support impersonation for AuthenticationFilter
+-
 
-Incompatible changes are documented in the release notes, with related
-discussion on [HADOOP-9902](https://issues.apache.org/jira/browse/HADOOP-9902).
+External services or YARN services may need to call into WebHDFS or the
+YARN REST API on behalf of a user using web protocols; an impersonation
+mechanism is now supported in AuthenticationFilter and similar extensions.
 
-More details are available in the
-[Unix Shell Guide](./hadoop-project-dist/hadoop-common/UnixShellGuide.html)
-documentation. Power users will also be pleased by the
-[Unix Shell API](./hadoop-project-dist/hadoop-common/UnixShellAPI.html)
-documentation, which describes much of the new functionality, particularly
-related to extensibility.
 
-Shaded client jars
+S3A Enhancements
 --
+Lots of enhancements to the S3A code, including delegation token support,
+better handling of 404 caching, and S3Guard performance and resilience
+improvements.
 
-The `hadoop-client` Maven artifact available in 2.x releases pulls
-Hadoop's transitive dependencies onto a Hadoop application's classpath.
-This can be problematic if the versions of these transitive dependencies
-conflict with the versions used by the application

[hadoop] branch branch-3.3.0 updated: Preparing for 3.3.0 Release

2020-07-04 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3.0 by this push:
 new b064f09  Preparing for 3.3.0 Release
b064f09 is described below

commit b064f09bd687cdecbbfc8af5db487d834182049f
Author: Brahma Reddy Battula 
AuthorDate: Sat Jul 4 23:08:52 2020 +0530

Preparing for 3.3.0 Release
---
 hadoop-assemblies/pom.xml   | 4 ++--
 hadoop-build-tools/pom.xml  | 2 +-
 hadoop-client-modules/hadoop-client-api/pom.xml | 4 ++--
 hadoop-client-modules/hadoop-client-check-invariants/pom.xml| 4 ++--
 hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml   | 4 ++--
 hadoop-client-modules/hadoop-client-integration-tests/pom.xml   | 4 ++--
 hadoop-client-modules/hadoop-client-minicluster/pom.xml | 4 ++--
 hadoop-client-modules/hadoop-client-runtime/pom.xml | 4 ++--
 hadoop-client-modules/hadoop-client/pom.xml | 4 ++--
 hadoop-client-modules/pom.xml   | 2 +-
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml   | 4 ++--
 hadoop-cloud-storage-project/hadoop-cos/pom.xml | 2 +-
 hadoop-cloud-storage-project/pom.xml| 4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml| 4 ++--
 hadoop-common-project/hadoop-auth-examples/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-auth/pom.xml   | 4 ++--
 hadoop-common-project/hadoop-common/pom.xml | 4 ++--
 hadoop-common-project/hadoop-kms/pom.xml| 4 ++--
 hadoop-common-project/hadoop-minikdc/pom.xml| 4 ++--
 hadoop-common-project/hadoop-nfs/pom.xml| 4 ++--
 hadoop-common-project/hadoop-registry/pom.xml   | 4 ++--
 hadoop-common-project/pom.xml   | 4 ++--
 hadoop-dist/pom.xml | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml | 4 ++--
 hadoop-hdfs-project/pom.xml | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml| 4 ++--
 .../hadoop-mapreduce-client-hs-plugins/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client-jobclient/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client-nativetask/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/pom.xml | 4 ++--
 .../hadoop-mapreduce-client-uploader/pom.xml| 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml| 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml  | 4 ++--
 hadoop-mapreduce-project/pom.xml| 4 ++--
 hadoop-maven-plugins/pom.xml| 2 +-
 hadoop-minicluster/pom.xml  | 4 ++--
 hadoop-project-dist/pom.xml | 4 ++--
 hadoop-project/pom.xml  | 6 +++---
 hadoop-tools/hadoop-aliyun/pom.xml  | 2 +-
 hadoop-tools/hadoop-archive-logs/pom.xml| 4 ++--
 hadoop-tools/hadoop-archives/pom.xml| 4 ++--
 hadoop-tools/hadoop-aws/pom.xml | 4 ++--
 hadoop-tools/hadoop-azure-datalake/pom.xml  | 2 +-
 hadoop-tools/hadoop-azure/pom.xml   | 2 +-
 hadoop-tools/hadoop-datajoin/pom.xml| 4 ++--
 hadoop-tools/hadoop-distcp/pom.xml  | 4 ++--
 hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-blockgen/pom.xml | 4 ++--
 hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-dist/pom.xml | 4 ++--
 hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/pom.xml| 4 ++--
 hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/pom.xml | 4

[hadoop] branch trunk updated: YARN-6526. Refactoring SQLFederationStateStore by avoiding recreating a connection at every call. Contributed by Bilwa S T.

2020-06-26 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2c03524  YARN-6526. Refactoring SQLFederationStateStore by avoiding 
recreating a connection at every call. Contributed by Bilwa S T.
2c03524 is described below

commit 2c03524fa4be754aa95889d4ac0f5d57dca8cda8
Author: Brahma Reddy Battula 
AuthorDate: Fri Jun 26 20:43:27 2020 +0530

YARN-6526. Refactoring SQLFederationStateStore by avoiding recreating a 
connection at every call. Contributed by Bilwa S T.
---
 .../store/impl/SQLFederationStateStore.java| 124 ++---
 .../metrics/FederationStateStoreClientMetrics.java |  18 +++
 .../store/utils/FederationStateStoreUtils.java |  14 +++
 .../store/impl/FederationStateStoreBaseTest.java   |  15 ++-
 .../store/impl/HSQLDBFederationStateStore.java |   3 +-
 .../store/impl/TestSQLFederationStateStore.java|  28 +
 .../impl/TestZookeeperFederationStateStore.java|   4 +-
 7 files changed, 130 insertions(+), 76 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/SQLFederationStateStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/SQLFederationStateStore.java
index 07dc7e4..8ceef43 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/SQLFederationStateStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/SQLFederationStateStore.java
@@ -78,6 +78,7 @@ import org.apache.hadoop.yarn.util.MonotonicClock;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import com.google.common.annotations.VisibleForTesting;
 import com.zaxxer.hikari.HikariDataSource;
 
 /**
@@ -141,6 +142,8 @@ public class SQLFederationStateStore implements 
FederationStateStore {
   private int maximumPoolSize;
   private HikariDataSource dataSource = null;
   private final Clock clock = new MonotonicClock();
+  @VisibleForTesting
+  Connection conn = null;
 
   @Override
   public void init(Configuration conf) throws YarnException {
@@ -173,6 +176,13 @@ public class SQLFederationStateStore implements 
FederationStateStore {
 dataSource.setMaximumPoolSize(maximumPoolSize);
 LOG.info("Initialized connection pool to the Federation StateStore "
 + "database at address: " + url);
+try {
+  conn = getConnection();
+  LOG.debug("Connection created");
+} catch (SQLException e) {
+  FederationStateStoreUtils.logAndThrowRetriableException(LOG,
+  "Not able to get Connection", e);
+}
   }
 
   @Override
@@ -185,15 +195,13 @@ public class SQLFederationStateStore implements 
FederationStateStore {
 .validate(registerSubClusterRequest);
 
 CallableStatement cstmt = null;
-Connection conn = null;
 
 SubClusterInfo subClusterInfo =
 registerSubClusterRequest.getSubClusterInfo();
 SubClusterId subClusterId = subClusterInfo.getSubClusterId();
 
 try {
-  conn = getConnection();
-  cstmt = conn.prepareCall(CALL_SP_REGISTER_SUBCLUSTER);
+  cstmt = getCallableStatement(CALL_SP_REGISTER_SUBCLUSTER);
 
   // Set the parameters for the stored procedure
   cstmt.setString(1, subClusterId.getId());
@@ -238,9 +246,10 @@ public class SQLFederationStateStore implements 
FederationStateStore {
   + " into the StateStore",
   e);
 } finally {
-  // Return to the pool the CallableStatement and the Connection
-  FederationStateStoreUtils.returnToPool(LOG, cstmt, conn);
+  // Return to the pool the CallableStatement
+  FederationStateStoreUtils.returnToPool(LOG, cstmt);
 }
+
 return SubClusterRegisterResponse.newInstance();
   }
 
@@ -254,14 +263,12 @@ public class SQLFederationStateStore implements 
FederationStateStore {
 .validate(subClusterDeregisterRequest);
 
 CallableStatement cstmt = null;
-Connection conn = null;
 
 SubClusterId subClusterId = subClusterDeregisterRequest.getSubClusterId();
 SubClusterState state = subClusterDeregisterRequest.getState();
 
 try {
-  conn = getConnection();
-  cstmt = conn.prepareCall(CALL_SP_DEREGISTER_SUBCLUSTER);
+  cstmt = getCallableStatement(CALL_SP_DEREGISTER_SUBCLUSTER);
 
   // Set the parameters for the stored procedure
   cstmt.setString(1, subClusterId.getId());
@@ -299,8 +306,8 @@ public class SQLFederationStateStore implements 
FederationStateStore {
   + state.toString(),
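
A minimal, self-contained sketch of the pattern this refactor adopts:
one pooled connection obtained at init() and reused for prepareCall(),
instead of a getConnection()/returnToPool() pair per operation.
HikariDataSource is the pool the real store uses; every other name below
is a simplified stand-in, not the Hadoop source.

    import com.zaxxer.hikari.HikariDataSource;
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.SQLException;

    public class SingleConnectionSketch {
      private final HikariDataSource dataSource = new HikariDataSource();
      private Connection conn;

      void init(String jdbcUrl) throws SQLException {
        dataSource.setJdbcUrl(jdbcUrl);
        conn = dataSource.getConnection(); // held for the store's lifetime
      }

      CallableStatement call(String storedProc) throws SQLException {
        // Per operation, only a statement is created and later returned
        // to the pool; the connection itself is no longer churned.
        return conn.prepareCall(storedProc);
      }
    }

The trade-off is that a single cached connection must be re-validated or
re-opened if it drops, which is why the diff routes failures through
FederationStateStoreUtils.logAndThrowRetriableException.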
  

[hadoop] branch branch-3.3.0 updated: YARN-10247. Application priority queue ACLs are not respected. Contributed by Sunil G

2020-05-03 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch branch-3.3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3.0 by this push:
 new 7e632d5  YARN-10247. Application priority queue ACLs are not 
respected. Contributed by Sunil G
7e632d5 is described below

commit 7e632d54709609ea5deeadb5b73b44119c600aa4
Author: Szilard Nemeth 
AuthorDate: Wed Apr 29 15:53:30 2020 +0200

YARN-10247. Application priority queue ACLs are not respected. Contributed 
by Sunil G

(cherry picked from commit 410c605aec308a2ccd903f60aade3aaeefcaa610)
(cherry picked from commit 8ffe1f313c9719ea550ac524fee84320c4aff63c)
---
 .../resourcemanager/scheduler/capacity/CapacityScheduler.java   | 6 +++---
 .../scheduler/capacity/TestApplicationPriorityACLs.java | 1 +
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index 861dc43..cca4fe1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -2686,10 +2686,10 @@ public class CapacityScheduler extends
   }
 
   // Lets check for ACLs here.
-  if (!appPriorityACLManager.checkAccess(user, queuePath, appPriority)) {
+  if (!appPriorityACLManager.checkAccess(user, 
normalizeQueueName(queuePath), appPriority)) {
 throw new YarnException(new AccessControlException(
-"User " + user + " does not have permission to submit/update "
-+ applicationId + " for " + appPriority));
+"User " + user + " does not have permission to submit/update "
++ applicationId + " for " + appPriority));
   }
 
   LOG.info("Priority '" + appPriority.getPriority()
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationPriorityACLs.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationPriorityACLs.java
index b41ba83..cf9a010 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationPriorityACLs.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationPriorityACLs.java
@@ -143,6 +143,7 @@ public class TestApplicationPriorityACLs extends 
ACLsTestBase {
 .newInstance(appSubmissionContext);
 try {
   submitterClient.submitApplication(submitRequest);
+  Assert.fail();
 } catch (YarnException ex) {
   Assert.assertTrue(ex.getCause() instanceof RemoteException);
 }


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
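
The crux of the fix above is that the priority-ACL manager indexes
queues by normalized queue name, so the raw queuePath never matched. A
self-contained sketch of that failure mode; the normalize() below is a
hypothetical stand-in for CapacityScheduler's normalization, and the map
stands in for the ACL manager:

    import java.util.Map;

    public class QueueAclLookupSketch {
      // Hypothetical normalizer: resolve a leaf queue name to its full path.
      static String normalize(String queue) {
        return queue.contains(".") ? queue : "root.users." + queue;
      }

      public static void main(String[] args) {
        Map<String, Boolean> priorityAcl = Map.of("root.users.dev", true);
        System.out.println(priorityAcl.get("dev"));            // null: lookup misses
        System.out.println(priorityAcl.get(normalize("dev"))); // true: lookup hits
      }
    }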



[hadoop] branch branch-3.3.0 created (now 7a3f190)

2020-04-24 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch branch-3.3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at 7a3f190  YARN-10189. Code cleanup in LeveldbRMStateStore. Contributed 
by Benjamin Teke

No new revisions were added by this update.


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 created (now f7a94ec)

2020-03-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at f7a94ec  HDFS-15239. Add button to go to the parent directory in the 
explorer. Contributed by hemanthboyina.

No new revisions were added by this update.


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated (f7a94ec -> 3eeb246)

2020-03-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from f7a94ec  HDFS-15239. Add button to go to the parent directory in the 
explorer. Contributed by hemanthboyina.
 new 8914cf9  Preparing for 3.4.0 development
 new 3eeb246  update the hadoop.version property in the root pom.xml and 
hadoop.assemblies.version in hadoop-project/pom.xml (see HADOOP-15369)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 hadoop-assemblies/pom.xml   | 4 ++--
 hadoop-build-tools/pom.xml  | 2 +-
 hadoop-client-modules/hadoop-client-api/pom.xml | 4 ++--
 hadoop-client-modules/hadoop-client-check-invariants/pom.xml| 4 ++--
 hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml   | 4 ++--
 hadoop-client-modules/hadoop-client-integration-tests/pom.xml   | 4 ++--
 hadoop-client-modules/hadoop-client-minicluster/pom.xml | 4 ++--
 hadoop-client-modules/hadoop-client-runtime/pom.xml | 4 ++--
 hadoop-client-modules/hadoop-client/pom.xml | 4 ++--
 hadoop-client-modules/pom.xml   | 2 +-
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml   | 4 ++--
 hadoop-cloud-storage-project/hadoop-cos/pom.xml | 2 +-
 hadoop-cloud-storage-project/pom.xml| 4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml| 4 ++--
 hadoop-common-project/hadoop-auth-examples/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-auth/pom.xml   | 4 ++--
 hadoop-common-project/hadoop-common/pom.xml | 4 ++--
 hadoop-common-project/hadoop-kms/pom.xml| 4 ++--
 hadoop-common-project/hadoop-minikdc/pom.xml| 4 ++--
 hadoop-common-project/hadoop-nfs/pom.xml| 4 ++--
 hadoop-common-project/hadoop-registry/pom.xml   | 4 ++--
 hadoop-common-project/pom.xml   | 4 ++--
 hadoop-dist/pom.xml | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml  | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml | 4 ++--
 hadoop-hdfs-project/pom.xml | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml| 4 ++--
 .../hadoop-mapreduce-client-hs-plugins/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client-jobclient/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client-nativetask/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/pom.xml | 4 ++--
 .../hadoop-mapreduce-client-uploader/pom.xml| 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml| 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml  | 4 ++--
 hadoop-mapreduce-project/pom.xml| 4 ++--
 hadoop-maven-plugins/pom.xml| 2 +-
 hadoop-minicluster/pom.xml  | 4 ++--
 hadoop-project-dist/pom.xml | 4 ++--
 hadoop-project/pom.xml  | 6 +++---
 hadoop-tools/hadoop-aliyun/pom.xml  | 2 +-
 hadoop-tools/hadoop-archive-logs/pom.xml| 4 ++--
 hadoop-tools/hadoop-archives/pom.xml| 4 ++--
 hadoop-tools/hadoop-aws/pom.xml | 4 ++--
 hadoop-tools/hadoop-azure-datalake/pom.xml  | 2 +-
 hadoop-tools/hadoop-azure/pom.xml   | 2 +-
 hadoop-tools/hadoop-datajoin/pom.xml| 4 ++--
 hadoop-tools/hadoop-distcp/pom.xml  | 4 ++--
 hadoop-tools/hadoop-dynamomete

[hadoop] 01/02: Preparing for 3.4.0 development

2020-03-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 8914cf91675c866a506f3804caf0dd53fade31c6
Author: Brahma Reddy Battula 
AuthorDate: Sun Mar 29 23:24:25 2020 +0530

Preparing for 3.4.0 development
---
 hadoop-assemblies/pom.xml | 4 ++--
 hadoop-build-tools/pom.xml| 2 +-
 hadoop-client-modules/hadoop-client-api/pom.xml   | 4 ++--
 hadoop-client-modules/hadoop-client-check-invariants/pom.xml  | 4 ++--
 hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml | 4 ++--
 hadoop-client-modules/hadoop-client-integration-tests/pom.xml | 4 ++--
 hadoop-client-modules/hadoop-client-minicluster/pom.xml   | 4 ++--
 hadoop-client-modules/hadoop-client-runtime/pom.xml   | 4 ++--
 hadoop-client-modules/hadoop-client/pom.xml   | 4 ++--
 hadoop-client-modules/pom.xml | 2 +-
 hadoop-cloud-storage-project/hadoop-cloud-storage/pom.xml | 4 ++--
 hadoop-cloud-storage-project/hadoop-cos/pom.xml   | 2 +-
 hadoop-cloud-storage-project/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-annotations/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-auth-examples/pom.xml| 4 ++--
 hadoop-common-project/hadoop-auth/pom.xml | 4 ++--
 hadoop-common-project/hadoop-common/pom.xml   | 4 ++--
 hadoop-common-project/hadoop-kms/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-minikdc/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-nfs/pom.xml  | 4 ++--
 hadoop-common-project/hadoop-registry/pom.xml | 4 ++--
 hadoop-common-project/pom.xml | 4 ++--
 hadoop-dist/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-client/pom.xml| 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml| 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs-rbf/pom.xml   | 4 ++--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml   | 4 ++--
 hadoop-hdfs-project/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml| 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml  | 4 ++--
 .../hadoop-mapreduce-client-hs-plugins/pom.xml| 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml| 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml | 4 ++--
 .../hadoop-mapreduce-client-nativetask/pom.xml| 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/pom.xml   | 4 ++--
 .../hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/pom.xml  | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml  | 4 ++--
 hadoop-mapreduce-project/hadoop-mapreduce-examples/pom.xml| 4 ++--
 hadoop-mapreduce-project/pom.xml  | 4 ++--
 hadoop-maven-plugins/pom.xml  | 2 +-
 hadoop-minicluster/pom.xml| 4 ++--
 hadoop-project-dist/pom.xml   | 4 ++--
 hadoop-project/pom.xml| 4 ++--
 hadoop-tools/hadoop-aliyun/pom.xml| 2 +-
 hadoop-tools/hadoop-archive-logs/pom.xml  | 4 ++--
 hadoop-tools/hadoop-archives/pom.xml  | 4 ++--
 hadoop-tools/hadoop-aws/pom.xml   | 4 ++--
 hadoop-tools/hadoop-azure-datalake/pom.xml| 2 +-
 hadoop-tools/hadoop-azure/pom.xml | 2 +-
 hadoop-tools/hadoop-datajoin/pom.xml  | 4 ++--
 hadoop-tools/hadoop-distcp/pom.xml| 4 ++--
 hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-blockgen/pom.xml   | 4 ++--
 hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-dist/pom.xml   | 4 ++--
 hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/pom.xml  | 4 ++--
 hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload/pom.xml   | 4 ++--
 hadoop-tools/hadoop-dynamometer/pom.xml

[hadoop] 02/02: update the hadoop.version property in the root pom.xml and hadoop.assemblies.version in hadoop-project/pom.xml (see HADOOP-15369)

2020-03-29 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 3eeb2466e963b3ea36f5a5d1ca87e6414d9c4c8c
Author: Brahma Reddy Battula 
AuthorDate: Sun Mar 29 23:39:11 2020 +0530

update the hadoop.version property in the root pom.xml and 
hadoop.assemblies.version in hadoop-project/pom.xml (see HADOOP-15369)
---
 hadoop-project/pom.xml | 2 +-
 pom.xml| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 3e9b959..c9e7e9b 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -52,7 +52,7 @@
 
     <kafka.version>2.4.0</kafka.version>
 
-    <hadoop.assemblies.version>3.3.0-SNAPSHOT</hadoop.assemblies.version>
+    <hadoop.assemblies.version>3.4.0-SNAPSHOT</hadoop.assemblies.version>
     <commons-daemon.version>1.0.13</commons-daemon.version>
 
     <test.build.dir>${project.build.directory}/test-dir</test.build.dir>
diff --git a/pom.xml b/pom.xml
index 2f3d85d..7e94cfb 100644
--- a/pom.xml
+++ b/pom.xml
@@ -80,7 +80,7 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
 
   <properties>
 
-    <hadoop.version>3.3.0-SNAPSHOT</hadoop.version>
+    <hadoop.version>3.4.0-SNAPSHOT</hadoop.version>
 
     <distMgmtSnapshotsId>apache.snapshots.https</distMgmtSnapshotsId>
     <distMgmtSnapshotsName>Apache Development Snapshot Repository</distMgmtSnapshotsName>


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
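
After a version bump like the one above, the effective property can be
checked without a build, e.g. "mvn help:evaluate
-Dexpression=hadoop.version -q -DforceStdout" from the source root
(standard maven-help-plugin usage, nothing Hadoop-specific).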



[hadoop] branch trunk updated: HADOOP-16871. Upgrade Netty version to 4.1.45.Final to handle CVE-2019-20444, CVE-2019-16869

2020-03-09 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c6b8a30  HADOOP-16871. Upgrade Netty version to 4.1.45.Final to handle 
CVE-2019-20444,CVE-2019-16869
c6b8a30 is described below

commit c6b8a3038646697b77f6db54a2ef6266a9fc7888
Author: Brahma Reddy Battula 
AuthorDate: Mon Mar 9 19:21:58 2020 +0530

HADOOP-16871. Upgrade Netty version to 4.1.45.Final to handle 
CVE-2019-20444,CVE-2019-16869
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 8b07213..77811e3 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -140,7 +140,7 @@
     <htrace.version>4.1.0-incubating</htrace.version>
     <!-- 3.2.4 (surrounding property; tag lost in archiving) -->
     <netty.version>3.10.6.Final</netty.version>
-    <netty4.version>4.1.42.Final</netty4.version>
+    <netty4.version>4.1.45.Final</netty4.version>
 
     <!-- 0.5.1 (surrounding property; tag lost in archiving) -->


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: YARN-10141. Interceptor in FederationInterceptorREST doesn't update on RM switchover. Contributed by D M Murali Krishna Reddy.

2020-02-26 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 3a9ccf7  YARN-10141. Interceptor in FederationInterceptorREST doesn't 
update on RM switchover. Contributed by D M Murali Krishna Reddy.
3a9ccf7 is described below

commit 3a9ccf7f6d91f12ba6db33142cc033e4957e994f
Author: Brahma Reddy Battula 
AuthorDate: Wed Feb 26 23:24:00 2020 +0530

YARN-10141. Interceptor in FederationInterceptorREST doesn't update on RM 
switchover. Contributed by D M Murali Krishna Reddy.
---
 .../webapp/DefaultRequestInterceptorREST.java  |  4 +++
 .../router/webapp/FederationInterceptorREST.java   |  6 +++-
 .../webapp/TestFederationInterceptorREST.java  | 39 ++
 .../webapp/TestableFederationInterceptorREST.java  |  2 +-
 4 files changed, 49 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/DefaultRequestInterceptorREST.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/DefaultRequestInterceptorREST.java
index 3dc4fdd..c223c08 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/DefaultRequestInterceptorREST.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/DefaultRequestInterceptorREST.java
@@ -81,6 +81,10 @@ public class DefaultRequestInterceptorREST
 this.webAppAddress = webAppAddress;
   }
 
+  protected String getWebAppAddress() {
+return this.webAppAddress;
+  }
+
   protected void setSubClusterId(SubClusterId scId) {
 this.subClusterId = scId;
   }
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java
index a1b004c..b14da6c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java
@@ -97,6 +97,7 @@ import 
org.apache.hadoop.yarn.server.webapp.dao.ContainersInfo;
 import org.apache.hadoop.yarn.util.Clock;
 import org.apache.hadoop.yarn.util.MonotonicClock;
 import org.apache.hadoop.yarn.webapp.NotFoundException;
+import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -237,7 +238,10 @@ public class FederationInterceptorREST extends 
AbstractRESTRequestInterceptor {
   SubClusterId subClusterId, String webAppAddress) {
 DefaultRequestInterceptorREST interceptor =
 getInterceptorForSubCluster(subClusterId);
-if (interceptor == null) {
+String webAppAddresswithScheme = WebAppUtils.getHttpSchemePrefix(
+this.getConf()) + webAppAddress;
+if (interceptor == null || !webAppAddresswithScheme.equals(interceptor.
+getWebAppAddress())){
   interceptor = createInterceptorForSubCluster(subClusterId, 
webAppAddress);
 }
 return interceptor;
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorREST.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorREST.java
index 54474e5..b3a7e90 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorREST.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorREST.java
@@ -32,6 +32,9 @@ import org.apache.hadoop.yarn.exceptions.YarnException;
 import 
org.apache.hadoop.yarn.server.federation.policies.manager.UniformBroadcastPolicyManager;
 import 
org.apache.hadoop.yarn.server.federation.store.impl.MemoryFederationStateStore;
 import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
+import 
org.apache.hadoop.yarn.server.federation.store.records.SubClusterRegisterRequest;
+import
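
The staleness check above compares scheme-qualified web addresses. A
small, self-contained sketch of the scheme-prefix half, assuming only
the WebAppUtils call the diff itself uses (the address is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;
    import org.apache.hadoop.yarn.webapp.util.WebAppUtils;

    public class SchemePrefixSketch {
      public static void main(String[] args) {
        Configuration conf = new YarnConfiguration();
        // yarn.http.policy defaults to HTTP_ONLY, so this prints
        // "http://rm1:8088"; with HTTPS_ONLY it would print "https://rm1:8088".
        System.out.println(WebAppUtils.getHttpSchemePrefix(conf) + "rm1:8088");
      }
    }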

[hadoop] branch trunk updated: YARN-10136. [Router]: Application metrics are hardcoded as N/A in UI. Contributed by Bilwa S T.

2020-02-14 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 20add89  YARN-10136. [Router]: Application metrics are hardcoded as 
N/A in UI. Contributed by Bilwa S T.
20add89 is described below

commit 20add897187adf7c836b20c72e347917d01df9aa
Author: Brahma Reddy Battula 
AuthorDate: Fri Feb 14 16:50:29 2020 +0530

YARN-10136. [Router]: Application metrics are hardcoded as N/A in UI. 
Contributed by Bilwa S T.
---
 .../apache/hadoop/yarn/server/router/webapp/AboutBlock.java  | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AboutBlock.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AboutBlock.java
index cd588fc..5dd40cf 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AboutBlock.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AboutBlock.java
@@ -58,12 +58,12 @@ public class AboutBlock extends HtmlBlock {
 YarnConfiguration.DEFAULT_FEDERATION_ENABLED);
 info("Cluster Status").
 __("Federation Enabled", isEnabled).
-__("Applications Submitted", "N/A").
-__("Applications Pending", "N/A").
-__("Applications Running", "N/A").
-__("Applications Failed", "N/A").
-__("Applications Killed", "N/A").
-__("Applications Completed", "N/A").
+__("Applications Submitted", metrics.getAppsSubmitted()).
+__("Applications Pending", metrics.getAppsPending()).
+__("Applications Running", metrics.getAppsRunning()).
+__("Applications Failed", metrics.getAppsFailed()).
+__("Applications Killed", metrics.getAppsKilled()).
+__("Applications Completed", metrics.getAppsCompleted()).
 __("Containers Allocated", metrics.getContainersAllocated()).
 __("Containers Reserved", metrics.getReservedContainers()).
 __("Containers Pending", metrics.getPendingContainers()).


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated (b220ec6 -> 719d57b)

2019-06-24 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from b220ec6  YARN-9374.  Improve Timeline service resilience when HBase is 
unavailable. Contributed by Prabhu Joseph and Szilard Nemeth
 new 41c94a6  HDFS-13906. RBF: Add multiple paths for dfsrouteradmin 'rm' 
and 'clrquota' commands. Contributed by Ayush Saxena.
 new b3fee1d  HDFS-14011. RBF: Add more information to HdfsFileStatus for a 
mount point. Contributed by Akira Ajisaka.
 new c5065bf  HDFS-13845. RBF: The default MountTableResolver should fail 
resolving multi-destination paths. Contributed by yanghuafeng.
 new 7b0bc49  HDFS-14024. RBF: ProvidedCapacityTotal json exception in 
NamenodeHeartbeatService. Contributed by CR Hota.
 new 6f2c871  HDFS-12284. RBF: Support for Kerberos authentication. 
Contributed by Sherwood Zheng and Inigo Goiri.
 new ebfd2d8  HDFS-12284. addendum to HDFS-12284. Contributed by Inigo 
Goiri.
 new 04caaba  HDFS-13852. RBF: The DN_REPORT_TIME_OUT and 
DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys. Contributed by 
yanghuafeng.
 new fa55eac  HDFS-13834. RBF: Connection creator thread should catch 
Throwable. Contributed by CR Hota.
 new f4bd111  HDFS-14082. RBF: Add option to fail operations when a 
subcluster is unavailable. Contributed by Inigo Goiri.
 new f2355c7  HDFS-13776. RBF: Add Storage policies related ClientProtocol 
APIs. Contributed by Dibyendu Karmakar.
 new 19088e1  HDFS-14089. RBF: Failed to specify server's Kerberos principal 
name in NamenodeHeartbeatService. Contributed by Ranith Sardar.
 new b320cae  HDFS-14085. RBF: LS command for root shows wrong owner and 
permission information. Contributed by Ayush Saxena.
 new 6aa7aab  HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. 
Contributed by Fei Hui.
 new 0ca7142  Revert "HDFS-14114. RBF: MIN_ACTIVE_RATIO should be 
configurable. Contributed by Fei Hui."
 new 94a8dec  HDFS-14114. RBF: MIN_ACTIVE_RATIO should be configurable. 
Contributed by Fei Hui.
 new 01b4126  HDFS-14152. RBF: Fix a typo in RouterAdmin usage. Contributed 
by Ayush Saxena.
 new bbe8591  HDFS-13869. RBF: Handle NPE for 
NamenodeBeanMetrics#getFederationMetrics. Contributed by Ranith Sardar.
 new 3d97142  HDFS-14151. RBF: Make the read-only column of Mount Table 
clearly understandable.
 new 8f6f9d9  HDFS-13443. RBF: Update mount table cache immediately after 
changing (add/update/remove) mount table entries. Contributed by Mohammad 
Arshad.
 new 1dc01e5  HDFS-14167. RBF: Add stale nodes to federation metrics. 
Contributed by Inigo Goiri.
 new f3cbf0e  HDFS-14161. RBF: Throw StandbyException instead of 
IOException so that client can retry when can not get connection. Contributed 
by Fei Hui.
 new 4244653  HDFS-14150. RBF: Quotas of the sub-cluster should be removed 
when removing the mount point. Contributed by Takanobu Asanuma.
 new b8bcbd0  HDFS-14191. RBF: Remove hard coded router status from 
FederationMetrics. Contributed by Ranith Sardar.
 new f4e2bfc  HDFS-13856. RBF: RouterAdmin should support dfsrouteradmin 
-refreshRouterArgs command. Contributed by yanghuafeng.
 new 221f24c  HDFS-14206. RBF: Cleanup quota modules. Contributed by Inigo 
Goiri.
 new f40e10b  HDFS-14129. RBF: Create new policy provider for router. 
Contributed by Ranith Sardar.
 new 7b61cbf  HDFS-14129. addendum to HDFS-14129. Contributed by Ranith 
Sardar.
 new c012b09  HDFS-14193. RBF: Inconsistency with the Default Namespace. 
Contributed by Ayush Saxena.
 new 235406d  HDFS-14156. RBF: rollEdit() command fails with Router. 
Contributed by Shubham Dewan.
 new 020f83f  HDFS-14209. RBF: setQuota() through router is working for 
only the mount Points under the Source column in MountTable. Contributed by 
Shubham Dewan.
 new 8b9b58b  HDFS-14223. RBF: Add configuration documents for using 
multiple sub-clusters. Contributed by Takanobu Asanuma.
 new acdf911  HDFS-14224. RBF: NPE in getContentSummary() for getEcPolicy() 
in case of multiple destinations. Contributed by Ayush Saxena.
 new 9eed3a4  HDFS-14215. RBF: Remove dependency on availability of default 
namespace. Contributed by Ayush Saxena.
 new 559cb11  HDFS-13404. RBF: 
TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails.
 new 9c4e556  HDFS-14225. RBF : MiniRouterDFSCluster should configure the 
failover proxy provider for namespace. Contributed by Ranith Sardar.
 new 912b90f  HDFS-14252. RBF : Exceptions are exposing the actual sub 
cluster path. Contributed by Ayush Saxena.
 new 7e63e37  HDFS-14230. RBF: Throw RetriableException instead of 
IOException when no namenodes available. Contributed by Fei Hui.
 new 75f8b6c  HDFS-13358. RBF: Support for Delegation Token (RPC). 
Contributed by CR Hota.
 new e2a3c44  HDFS-

[hadoop] branch HDFS-13891 updated (caa285b -> 02597b6)

2019-06-23 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


 discard caa285b  HDFS-14545. RBF: Router should support 
GetUserMappingsProtocol. Contributed by Ayush Saxena.
 discard bee9fff  HDFS-14550. RBF: Failed to get statistics from NameNodes 
before 2.9.0. Contributed by He Xiaoqiao.
 discard f3e25bb  HDFS-13404. Addendum: RBF: 
TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fail. Contributed 
by Takanobu Asanuma.
 discard 3344c95  HDFS-14526. RBF: Update the document of RBF related metrics. 
Contributed by  Takanobu Asanuma.
 discard d60e686  HDFS-14508. RBF: Clean-up and refactor UI components. 
Contributed by Takanobu Asanuma.
 discard 0bcbdd6  HDFS-13480. RBF: Separate namenodeHeartbeat and 
routerHeartbeat to different config key. Contributed by Ayush Saxena.
 discard f3a8e62  HDFS-13955. RBF: Support secure Namenode in 
NamenodeHeartbeatService. Contributed by CR Hota.
 discard b4ee2e3  HDFS-14475. RBF: Expose router security enabled status on the 
UI. Contributed by CR Hota.
 discard 58a22e9  HDFS-13787. RBF: Add Snapshot related ClientProtocol APIs. 
Contributed by Inigo Goiri.
 discard 04977cc  HDFS-14516. RBF: Create hdfs-rbf-site.xml for RBF specific 
properties. Contributed by Takanobu Asanuma.
 discard 377d7bf  HDFS-13909. RBF: Add Cache pools and directives related 
ClientProtocol APIs. Contributed by Ayush Saxena.
 discard 4c0c9ff  HDFS-13255. RBF: Fail when try to remove mount point paths. 
Contributed by Akira Ajisaka.
 discard 8f1f042  HDFS-14440. RBF: Optimize the file write process in case of 
multiple destinations. Contributed by Ayush Saxena.
 discard 4a16a08  HDFS-13995. RBF: Security documentation. Contributed by CR 
Hota.
 discard a1a28a6  HDFS-14447. RBF: Router should support 
RefreshUserMappingsProtocol. Contributed by Shen Yinjie.
 discard 66f235e  HDFS-14490. RBF: Remove unnecessary quota checks. Contributed 
by Ayush Saxena.
 discard f4101bb  HDFS-14210. RBF: ACL commands should work over all the 
destinations. Contributed by Ayush Saxena.
 discard 6f46691  HDFS-14426. RBF: Add delegation token total count as one of 
the federation metrics. Contributed by Fengnan Li.
 discard 9bbc4de  HDFS-14454. RBF: getContentSummary() should allow 
non-existing folders. Contributed by Inigo Goiri.
 discard 5b3f123  HDFS-14457. RBF: Add order text SPACE in CLI command 'hdfs 
dfsrouteradmin'. Contributed by luhuachao.
 discard 0156a0e  HDFS-13972. RBF: Support for Delegation Token (WebHDFS). 
Contributed by CR Hota.
 discard 7e5d043  HDFS-14422. RBF: Router shouldn't allow READ operations in 
safe mode. Contributed by Inigo Goiri.
 discard bf44a11  HDFS-14369. RBF: Fix trailing / for webhdfs. Contributed by 
Akira Ajisaka.
 discard 54d44bc  HDFS-13853. RBF: RouterAdmin update cmd is overwriting the 
entry not updating the existing. Contributed by Ayush Saxena.
 discard 9c38a6e  HDFS-14316. RBF: Support unavailable subclusters for mount 
points with multiple destinations. Contributed by Inigo Goiri.
 discard 4f66992  HDFS-14388. RBF: Prevent loading metric system when disabled. 
Contributed by Inigo Goiri.
 discard 3def419  HDFS-14351. RBF: Optimize configuration item resolving for 
monitor namenode. Contributed by He Xiaoqiao and Inigo Goiri.
 discard 786a1ce  HDFS-14343. RBF: Fix renaming folders spread across multiple 
subclusters. Contributed by Ayush Saxena.
 discard 0555771  HDFS-14334. RBF: Use human readable format for long numbers 
in the Router UI. Contributed by Inigo Goiri.
 discard abb0e82  HDFS-14335. RBF: Fix heartbeat typos in the Router. 
Contributed by CR Hota.
 discard fc05a96  HDFS-14331. RBF: IOE While Removing Mount Entry. Contributed 
by Ayush Saxena.
 discard 1e50408  HDFS-14329. RBF: Add maintenance nodes to federation metrics. 
Contributed by Ayush Saxena.
 discard b2fcb25  HDFS-14259. RBF: Fix safemode message for Router. Contributed 
by Ranith Sadar.
 discard d3acc68  HDFS-14322. RBF: Security manager should not load if security 
is disabled. Contributed by CR Hota.
 discard 4dd0dd0  HDFS-14052. RBF: Use Router keytab for WebHDFS. Contributed 
by CR Hota.
 discard 200d457  HDFS-14307. RBF: Update tests to use internal Whitebox 
instead of Mockito. Contributed by CR Hota.
 discard 60b8e6e  HDFS-14249. RBF: Tooling to identify the subcluster location 
of a file. Contributed by Inigo Goiri.
 discard 5550323  HDFS-14268. RBF: Fix the location of the DNs in 
getDatanodeReport(). Contributed by Inigo Goiri.
 discard 0a6b4dd  HDFS-14226. RBF: Setting attributes should set on all 
subclusters' directories. Contributed by Ayush Saxena.
 discard ad0bcc1  HDFS-13358. RBF: Support for Delegation Token (RPC). 
Contributed by CR Hota.
 discard 38a12a3  HDFS-14230. RBF: Throw RetriableException instead of 
IOException when no namenodes available. Contributed by Fei Hui.
 discard 7868115  HDFS-14252. RBF : Exceptions are exposing the actual sub

[hadoop] branch HDFS-13891 updated: HDFS-13995. RBF: Security documentation. Contributed by CR Hota.

2019-05-21 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HDFS-13891 by this push:
 new 4a16a08  HDFS-13995. RBF: Security documentation. Contributed by CR 
Hota.
4a16a08 is described below

commit 4a16a083544297cf68c67be9893932c1d96f02f5
Author: Brahma Reddy Battula 
AuthorDate: Tue May 21 22:48:53 2019 +0530

HDFS-13995. RBF: Security documentation. Contributed by CR Hota.
---
 .../src/site/markdown/HDFSRouterFederation.md  | 22 +-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
index 83cecda..d9ae5af 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
@@ -169,7 +169,15 @@ It is similar to the mount table in 
[ViewFs](../hadoop-hdfs/ViewFs.html) where i
 
 
 ### Security
-Secure authentication and authorization are not supported yet, so the Router 
will not proxy to Hadoop clusters with security enabled.
+Router supports security similar to the [current security 
model](../hadoop-common/SecureMode.html) in HDFS. This feature is available for 
both RPC and web-based calls, and gives the Router the capability to proxy to 
underlying secure HDFS clusters.
+
+As with the Namenode, both Kerberos and token-based authentication are 
supported for clients connecting to routers. The Router internally relies on the 
existing security-related configs of `core-site.xml` and `hdfs-site.xml` to 
support this feature. In addition, each router needs to be configured with its 
own keytab and principal.
+
+For token-based authentication, the router issues delegation tokens to upstream 
clients without communicating with downstream namenodes. The router uses its own 
credentials to securely proxy to a downstream namenode on behalf of the upstream 
real user. The router principal has to be configured as a superuser in all secure 
downstream namenodes; refer [here](../hadoop-common/Superusers.html) to 
configure the proxy user for a namenode. Along with that, the user owning router 
daemons should be configured with the same  [...]
+The router relies on a state store to distribute tokens across all routers. Apart 
from the default implementation provided, users can plug in their own state-store 
implementation for token management. The default implementation relies on 
ZooKeeper, and since a large router/ZooKeeper cluster could potentially hold 
millions of tokens, the `jute.maxbuffer` system property that ZooKeeper clients 
rely on should be configured appropriately in router daemons.
+
+
+See the Apache JIRA ticket 
[HDFS-13532](https://issues.apache.org/jira/browse/HDFS-13532) for more 
information on this feature.
 
 
 Deployment
@@ -444,6 +452,18 @@ Global quota supported in federation.
 | dfs.federation.router.quota.enable | `false` | If `true`, the quota system 
enabled in the Router. In that case, setting or clearing sub-cluster's quota 
directly is not recommended since Router Admin server will override 
sub-cluster's quota with global quota.|
 | dfs.federation.router.quota-cache.update.interval | 60s | How often the 
Router updates quota cache. This setting supports multiple time unit suffixes. 
If no suffix is specified then milliseconds is assumed. |
 
+### Security
+
+Kerberos and delegation tokens are supported in federation.
+
+| Property | Default | Description|
+|: |: |: |
+| dfs.federation.router.keytab.file |  | The keytab file used by router to 
login as its service principal. The principal name is configured with 
'dfs.federation.router.kerberos.principal'.|
+| dfs.federation.router.kerberos.principal | | The Router service principal. 
This is typically set to router/_HOST@REALM.TLD. Each Router will substitute 
_HOST with its own fully qualified hostname at startup. The _HOST placeholder 
allows using the same configuration setting on all Routers in an HA setup. |
+| dfs.federation.router.kerberos.principal.hostname |  | The hostname for the 
Router containing this configuration file.  Will be different for each machine. 
Defaults to current hostname. |
+| dfs.federation.router.kerberos.internal.spnego.principal | 
`${dfs.web.authentication.kerberos.principal}` | The server principal used by 
the Router for web UI SPNEGO authentication when Kerberos security is enabled. 
This is typically set to HTTP/_HOST@REALM.TLD. The SPNEGO server principal 
begins with the prefix HTTP/ by convention. If the value is '*', the web server 
will attempt to login with every principal specified in the keytab file 
'dfs.web.authentication.kerberos.keytab'. |
+| dfs.federation.router.secret.manager.class
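A minimal sketch of wiring the security properties from the table above into a 
Router configuration. The realm, keytab path, and class name below are 
placeholders for illustration, not values taken from the patch.

import org.apache.hadoop.conf.Configuration;

public class RouterSecurityConfSketch {
  public static Configuration secureRouterConf() {
    Configuration conf = new Configuration();
    // Service principal and keytab the Router logs in with.
    conf.set("dfs.federation.router.kerberos.principal",
        "router/_HOST@EXAMPLE.COM");
    conf.set("dfs.federation.router.keytab.file",
        "/etc/security/keytabs/router.keytab");
    // SPNEGO principal for the Router web UI.
    conf.set("dfs.federation.router.kerberos.internal.spnego.principal",
        "HTTP/_HOST@EXAMPLE.COM");
    return conf;
  }
}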

[hadoop] branch HDFS-13891 updated (206b082 -> 756e4af)

2019-05-08 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a change to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


 discard 206b082  HDFS-14454. RBF: getContentSummary() should allow 
non-existing folders. Contributed by Inigo Goiri.
 discard 893c708  HDFS-14457. RBF: Add order text SPACE in CLI command 'hdfs 
dfsrouteradmin'. Contributed by luhuachao.
 discard 91fea6f  HDFS-13972. RBF: Support for Delegation Token (WebHDFS). 
Contributed by CR Hota.
 discard e8c6c20  HDFS-14422. RBF: Router shouldn't allow READ operations in 
safe mode. Contributed by Inigo Goiri.
 discard 940c0c7  HDFS-14369. RBF: Fix trailing / for webhdfs. Contributed by 
Akira Ajisaka.
 discard 9b2e8d4  HDFS-13853. RBF: RouterAdmin update cmd is overwriting the 
entry not updating the existing. Contributed by Ayush Saxena.
 discard de204c3  HDFS-14316. RBF: Support unavailable subclusters for mount 
points with multiple destinations. Contributed by Inigo Goiri.
 discard b0bc109  HDFS-14388. RBF: Prevent loading metric system when disabled. 
Contributed by Inigo Goiri.
 discard 09f20d0  HDFS-14351. RBF: Optimize configuration item resolving for 
monitor namenode. Contributed by He Xiaoqiao and Inigo Goiri.
 discard de22e9b  HDFS-14343. RBF: Fix renaming folders spread across multiple 
subclusters. Contributed by Ayush Saxena.
 discard 364811a  HDFS-14334. RBF: Use human readable format for long numbers 
in the Router UI. Contributed by Inigo Goiri.
 discard 9e39186  HDFS-14335. RBF: Fix heartbeat typos in the Router. 
Contributed by CR Hota.
 discard 64ca9c0  HDFS-14331. RBF: IOE While Removing Mount Entry. Contributed 
by Ayush Saxena.
 discard baa2cb1  HDFS-14329. RBF: Add maintenance nodes to federation metrics. 
Contributed by Ayush Saxena.
 discard 0ed0226  HDFS-14259. RBF: Fix safemode message for Router. Contributed 
by Ranith Sardar.
 discard 9204211  HDFS-14322. RBF: Security manager should not load if security 
is disabled. Contributed by CR Hota.
 discard 12924f2  HDFS-14052. RBF: Use Router keytab for WebHDFS. Contributed 
by CR Hota.
 discard 8223879  HDFS-14307. RBF: Update tests to use internal Whitebox 
instead of Mockito. Contributed by CR Hota.
 discard 42886a0  HDFS-14249. RBF: Tooling to identify the subcluster location 
of a file. Contributed by Inigo Goiri.
 discard 35a7e46  HDFS-14268. RBF: Fix the location of the DNs in 
getDatanodeReport(). Contributed by Inigo Goiri.
 discard f63b6a2  HDFS-14226. RBF: Setting attributes should set on all 
subclusters' directories. Contributed by Ayush Saxena.
 discard a0138b8  HDFS-13358. RBF: Support for Delegation Token (RPC). 
Contributed by CR Hota.
 discard 92cd6e2  HDFS-14230. RBF: Throw RetriableException instead of 
IOException when no namenodes available. Contributed by Fei Hui.
 discard b3b157e  HDFS-14252. RBF : Exceptions are exposing the actual sub 
cluster path. Contributed by Ayush Saxena.
 discard d6aeb7c  HDFS-14225. RBF : MiniRouterDFSCluster should configure the 
failover proxy provider for namespace. Contributed by Ranith Sardar.
 discard c14bd63  HDFS-13404. RBF: 
TestRouterWebHDFSContractAppend.testRenameFileBeingAppended fails.
 discard 4d7cf87  HDFS-14215. RBF: Remove dependency on availability of default 
namespace. Contributed by Ayush Saxena.
 discard 6fe8017  HDFS-14224. RBF: NPE in getContentSummary() for getEcPolicy() 
in case of multiple destinations. Contributed by Ayush Saxena.
 discard 9335ec5  HDFS-14223. RBF: Add configuration documents for using 
multiple sub-clusters. Contributed by Takanobu Asanuma.
 discard a9aadbb  HDFS-14209. RBF: setQuota() through router is working for 
only the mount Points under the Source column in MountTable. Contributed by 
Shubham Dewan.
 discard 2e50862  HDFS-14156. RBF: rollEdit() command fails with Router. 
Contributed by Shubham Dewan.
 discard 17b77c9  HDFS-14193. RBF: Inconsistency with the Default Namespace. 
Contributed by Ayush Saxena.
 discard 2d8ebf5  HDFS-14129. addendum to HDFS-14129. Contributed by Ranith 
Sardar.
 discard 50cb4a2  HDFS-14129. RBF: Create new policy provider for router. 
Contributed by Ranith Sardar.
 discard a48c057  HDFS-14206. RBF: Cleanup quota modules. Contributed by Inigo 
Goiri.
 discard b1d250f  HDFS-13856. RBF: RouterAdmin should support dfsrouteradmin 
-refreshRouterArgs command. Contributed by yanghuafeng.
 discard d90d20f  HDFS-14191. RBF: Remove hard coded router status from 
FederationMetrics. Contributed by Ranith Sardar.
 discard d3370f3  HDFS-14150. RBF: Quotas of the sub-cluster should be removed 
when removing the mount point. Contributed by Takanobu Asanuma.
 discard 45895e6  HDFS-14161. RBF: Throw StandbyException instead of 
IOException so that client can retry when can not get connection. Contributed 
by Fei Hui.
 discard 2828b09  HDFS-14167. RBF: Add stale nodes to federation metrics. 
Contributed by Inigo Goiri.
 discard 00f4bb2  HDFS-13443. RBF: Update mount table cache immediately after

[hadoop] branch HDFS-13891 updated: HDFS-13972. RBF: Support for Delegation Token (WebHDFS). Contributed by CR Hota.

2019-04-24 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HDFS-13891 by this push:
 new 55f2f7a  HDFS-13972. RBF: Support for Delegation Token (WebHDFS). 
Contributed by CR Hota.
55f2f7a is described below

commit 55f2f7aa26652eac7f8f08d8435ec99984a61861
Author: Brahma Reddy Battula 
AuthorDate: Wed Apr 24 19:35:03 2019 +0530

HDFS-13972. RBF: Support for Delegation Token (WebHDFS). Contributed by CR 
Hota.
---
 .../hdfs/server/federation/router/Router.java  |  11 +-
 .../server/federation/router/RouterRpcServer.java  |  20 ++-
 .../federation/router/RouterWebHdfsMethods.java| 159 
 .../router/security/RouterSecurityManager.java |  41 ++
 .../TestRouterHDFSContractDelegationToken.java |   6 +-
 .../security/TestRouterHttpDelegationToken.java| 163 +
 .../security/TestRouterSecurityManager.java|  86 ++-
 7 files changed, 346 insertions(+), 140 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
index 9e18ebf..7f9c597 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
@@ -37,6 +37,8 @@ import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.HAUtil;
+import 
org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
+import org.apache.hadoop.hdfs.server.common.TokenVerifier;
 import org.apache.hadoop.hdfs.server.federation.metrics.FederationMetrics;
 import org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics;
 import 
org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
@@ -76,7 +78,8 @@ import com.google.common.annotations.VisibleForTesting;
  */
 @InterfaceAudience.Private
 @InterfaceStability.Evolving
-public class Router extends CompositeService {
+public class Router extends CompositeService implements
+    TokenVerifier<DelegationTokenIdentifier> {
 
   private static final Logger LOG = LoggerFactory.getLogger(Router.class);
 
@@ -470,6 +473,12 @@ public class Router extends CompositeService {
 return null;
   }
 
+  @Override
+  public void verifyToken(DelegationTokenIdentifier tokenId, byte[] password)
+  throws IOException {
+getRpcServer().getRouterSecurityManager().verifyToken(tokenId, password);
+  }
+
  /////////////////////////////////////////////////////////
  // Namenode heartbeat monitors
  /////////////////////////////////////////////////////////
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
index 3a2f910..d35d1f0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
@@ -203,6 +203,9 @@ public class RouterRpcServer extends AbstractService
   private final RouterClientProtocol clientProto;
   /** Router security manager to handle token operations. */
   private RouterSecurityManager securityManager = null;
+  /** Super user credentials that a thread may use. */
+  private static final ThreadLocal<UserGroupInformation> CUR_USER =
+  new ThreadLocal<>();
 
   /**
* Construct a router RPC server.
@@ -1514,11 +1517,26 @@ public class RouterRpcServer extends AbstractService
* @throws IOException If we cannot get the user information.
*/
   public static UserGroupInformation getRemoteUser() throws IOException {
-UserGroupInformation ugi = Server.getRemoteUser();
+UserGroupInformation ugi = CUR_USER.get();
+ugi = (ugi != null) ? ugi : Server.getRemoteUser();
 return (ugi != null) ? ugi : UserGroupInformation.getCurrentUser();
   }
 
   /**
+   * Set super user credentials if needed.
+   */
+  static void setCurrentUser(UserGroupInformation ugi) {
+CUR_USER.set(ugi);
+  }
+
+  /**
+   * Reset to discard super user credentials.
+   */
+  static void resetCurrentUser() {
+CUR_USER.set(null);
+  }
+
+  /**
* Merge the outputs from multiple namespaces.
*
 * @param <T> The type of the objects to merge.
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterWebHdfsMethods.java
 
b/hadoop-hdfs-project/hadoop-hd
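A short sketch of the thread-local pattern this patch introduces: code in the 
router package installs the Router's own UGI before a proxied call and always 
clears it in a finally block, so the updated getRemoteUser() cannot hand 
superuser credentials to a later request on the same handler thread. The helper 
class and method below are illustrative; only CUR_USER and the set/reset idiom 
come from the diff.

import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

final class ThreadLocalUserSketch {
  /** Mirrors RouterRpcServer's CUR_USER thread-local. */
  private static final ThreadLocal<UserGroupInformation> CUR_USER =
      new ThreadLocal<>();

  /** Run a downstream call with the router's own credentials installed. */
  static void runAsRouter(Runnable downstreamCall) throws IOException {
    CUR_USER.set(UserGroupInformation.getLoginUser());
    try {
      downstreamCall.run();   // proxied namenode operation
    } finally {
      CUR_USER.set(null);     // same effect as resetCurrentUser()
    }
  }
}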

[hadoop] branch trunk updated: HDFS-13997. Secondary NN Web UI displays nothing, and the console log shows moment is not defined. Contributed by Ayush Saxena

2019-02-28 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 7a0db2f  HDFS-13997. Secondary NN Web UI displays nothing, and the 
console log shows moment is not defined. Contributed by Ayush Saxena
7a0db2f is described below

commit 7a0db2f92b0a294418c89343ce4c3c00d43cb5a7
Author: Brahma Reddy Battula 
AuthorDate: Fri Mar 1 12:24:52 2019 +0530

HDFS-13997. Secondary NN Web UI displays nothing, and the console log shows 
moment is not defined. Contributed by Ayush Saxena
---
 hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html | 1 +
 1 file changed, 1 insertion(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html
index ff2f7ce..da90d1b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html
@@ -88,6 +88,7 @@
 
 
 
+<script type="text/javascript" src="/static/moment.min.js"></script>
 
 
 

[hadoop] branch HDFS-13891 updated: HDFS-14052. RBF: Use Router keytab for WebHDFS. Contributed by CR Hota.

2019-02-25 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HDFS-13891 by this push:
 new d8dccda  HDFS-14052. RBF: Use Router keytab for WebHDFS. Contributed 
by CR Hota.
d8dccda is described below

commit d8dccdaee5de668f467310a64ade024405054d3f
Author: Brahma Reddy Battula 
AuthorDate: Tue Feb 26 07:42:23 2019 +0530

HDFS-14052. RBF: Use Router keytab for WebHDFS. Contributed by CR Hota.
---
 .../server/federation/router/RouterHttpServer.java |  4 +-
 .../contract/router/web/RouterWebHDFSContract.java | 12 ++--
 .../router/TestRouterWithSecureStartup.java| 69 ++
 3 files changed, 80 insertions(+), 5 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterHttpServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterHttpServer.java
index d6a5146..300bc07 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterHttpServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterHttpServer.java
@@ -88,7 +88,9 @@ public class RouterHttpServer extends AbstractService {
 
 this.httpServer = builder.build();
 
-NameNodeHttpServer.initWebHdfs(conf, httpAddress.getHostName(), null,
+String httpKeytab = conf.get(DFSUtil.getSpnegoKeytabKey(conf,
+RBFConfigKeys.DFS_ROUTER_KEYTAB_FILE_KEY));
+NameNodeHttpServer.initWebHdfs(conf, httpAddress.getHostName(), httpKeytab,
 httpServer, RouterWebHdfsMethods.class.getPackage().getName());
 
 this.httpServer.setAttribute(NAMENODE_ATTRIBUTE_KEY, this.router);
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/web/RouterWebHDFSContract.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/web/RouterWebHDFSContract.java
index 02e9f39..4e205df 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/web/RouterWebHDFSContract.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/fs/contract/router/web/RouterWebHDFSContract.java
@@ -55,16 +55,20 @@ public class RouterWebHDFSContract extends HDFSContract {
   }
 
   public static void createCluster() throws IOException {
+createCluster(new HdfsConfiguration());
+  }
+
+  public static void createCluster(Configuration conf) throws IOException {
 try {
-  HdfsConfiguration conf = new HdfsConfiguration();
   conf.addResource(CONTRACT_HDFS_XML);
   conf.addResource(CONTRACT_WEBHDFS_XML);
 
-  cluster = new MiniRouterDFSCluster(true, 2);
+  cluster = new MiniRouterDFSCluster(true, 2, conf);
 
   // Start NNs and DNs and wait until ready
-  cluster.startCluster();
+  cluster.startCluster(conf);
 
+  cluster.addRouterOverrides(conf);
   // Start routers with only an RPC service
   cluster.startRouters();
 
@@ -85,7 +89,7 @@ public class RouterWebHDFSContract extends HDFSContract {
   cluster.waitActiveNamespaces();
 } catch (Exception e) {
   cluster = null;
-  throw new IOException("Cannot start federated cluster", e);
+  throw new IOException(e.getCause());
 }
   }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterWithSecureStartup.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterWithSecureStartup.java
new file mode 100644
index 000..7cc2c87
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterWithSecureStartup.java
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import org.apache.hado
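A sketch of the keytab-resolution behavior this patch gives RouterHttpServer: 
DFSUtil.getSpnegoKeytabKey() returns the dedicated SPNEGO keytab key when 
dfs.web.authentication.kerberos.keytab is set, and otherwise falls back to the 
key passed in, here the Router's own keytab key. The keytab path is a 
placeholder.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSUtil;

public class RouterKeytabResolutionSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("dfs.federation.router.keytab.file",
        "/etc/security/keytabs/router.keytab");
    // Picks the SPNEGO keytab key if configured, else the router keytab key.
    String keytabKey = DFSUtil.getSpnegoKeytabKey(conf,
        "dfs.federation.router.keytab.file");
    System.out.println("WebHDFS uses keytab: " + conf.get(keytabKey));
  }
}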

[hadoop] branch HDFS-13891 updated: HDFS-13358. RBF: Support for Delegation Token (RPC). Contributed by CR Hota.

2019-02-13 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HDFS-13891 by this push:
 new 216490e  HDFS-13358. RBF: Support for Delegation Token (RPC). 
Contributed by CR Hota.
216490e is described below

commit 216490e37d90333f9a800b3440e7fb99d34ec578
Author: Brahma Reddy Battula 
AuthorDate: Thu Feb 14 08:16:45 2019 +0530

HDFS-13358. RBF: Support for Delegation Token (RPC). Contributed by CR Hota.
---
 .../server/federation/router/RBFConfigKeys.java|   9 +
 .../federation/router/RouterClientProtocol.java|  16 +-
 .../server/federation/router/RouterRpcServer.java  |  21 +-
 .../router/security/RouterSecurityManager.java | 239 +
 .../federation/router/security/package-info.java   |  28 +++
 .../token/ZKDelegationTokenSecretManagerImpl.java  |  56 +
 .../router/security/token/package-info.java|  29 +++
 .../src/main/resources/hdfs-rbf-default.xml|  11 +-
 .../fs/contract/router/SecurityConfUtil.java   |   4 +
 .../TestRouterHDFSContractDelegationToken.java | 101 +
 .../security/MockDelegationTokenSecretManager.java |  52 +
 .../security/TestRouterSecurityManager.java|  93 
 12 files changed, 652 insertions(+), 7 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
index 5e907c8..657b6cf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
@@ -28,6 +28,8 @@ import 
org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
 import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
 import 
org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializerPBImpl;
 import 
org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl;
+import 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager;
+import 
org.apache.hadoop.hdfs.server.federation.router.security.token.ZKDelegationTokenSecretManagerImpl;
 
 import java.util.concurrent.TimeUnit;
 
@@ -294,4 +296,11 @@ public class RBFConfigKeys extends 
CommonConfigurationKeysPublic {
 
   public static final String DFS_ROUTER_KERBEROS_INTERNAL_SPNEGO_PRINCIPAL_KEY 
=
   FEDERATION_ROUTER_PREFIX + "kerberos.internal.spnego.principal";
+
+  // HDFS Router secret manager for delegation token
+  public static final String DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS =
+  FEDERATION_ROUTER_PREFIX + "secret.manager.class";
+  public static final Class<? extends AbstractDelegationTokenSecretManager>
+  DFS_ROUTER_DELEGATION_TOKEN_DRIVER_CLASS_DEFAULT =
+  ZKDelegationTokenSecretManagerImpl.class;
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index 086a0659..78d716e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -77,6 +77,7 @@ import 
org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo
 import 
org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
+import 
org.apache.hadoop.hdfs.server.federation.router.security.RouterSecurityManager;
 import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport;
@@ -124,6 +125,8 @@ public class RouterClientProtocol implements ClientProtocol 
{
   private final ErasureCoding erasureCoding;
   /** StoragePolicy calls. **/
   private final RouterStoragePolicy storagePolicy;
+  /** Router security manager to handle token operations. */
+  private RouterSecurityManager securityManager = null;
 
   RouterClientProtocol(Configuration conf, RouterRpcServer rpcServer) {
 this.rpcServer = rpcServer;
@@ -142,13 +145,14 @@ public class RouterClientProtocol implements 
ClientProtocol {
 DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROUP_DEFAULT);
 this.erasureCoding = new ErasureCoding(rpcServer);
 this.storagePolicy = new

[hadoop] branch HDFS-13891 updated: HDFS-14224. RBF: NPE in getContentSummary() for getEcPolicy() in case of multiple destinations. Contributed by Ayush Saxena.

2019-01-27 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HDFS-13891 by this push:
 new caceff1  HDFS-14224. RBF: NPE in getContentSummary() for getEcPolicy() 
in case of multiple destinations. Contributed by Ayush Saxena.
caceff1 is described below

commit caceff1d6033a10b8de55e304bf8b8334d69d120
Author: Brahma Reddy Battula 
AuthorDate: Mon Jan 28 09:03:32 2019 +0530

HDFS-14224. RBF: NPE in getContentSummary() for getEcPolicy() in case of 
multiple destinations. Contributed by Ayush Saxena.
---
 .../server/federation/router/RouterClientProtocol.java   |  7 +++
 .../federation/router/TestRouterRpcMultiDestination.java | 16 
 2 files changed, 23 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
index c724b17..2d52ecb 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java
@@ -1628,6 +1628,7 @@ public class RouterClientProtocol implements 
ClientProtocol {
 long quota = 0;
 long spaceConsumed = 0;
 long spaceQuota = 0;
+String ecPolicy = "";
 
 for (ContentSummary summary : summaries) {
   length += summary.getLength();
@@ -1636,6 +1637,11 @@ public class RouterClientProtocol implements 
ClientProtocol {
   quota += summary.getQuota();
   spaceConsumed += summary.getSpaceConsumed();
   spaceQuota += summary.getSpaceQuota();
+  // We return from the first response as we assume that the EC policy
+  // of each sub-cluster is same.
+  if (ecPolicy.isEmpty()) {
+ecPolicy = summary.getErasureCodingPolicy();
+  }
 }
 
 ContentSummary ret = new ContentSummary.Builder()
@@ -1645,6 +1651,7 @@ public class RouterClientProtocol implements 
ClientProtocol {
 .quota(quota)
 .spaceConsumed(spaceConsumed)
 .spaceQuota(spaceQuota)
+.erasureCodingPolicy(ecPolicy)
 .build();
 return ret;
   }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
index 3101748..3d941bb 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcMultiDestination.java
@@ -41,6 +41,7 @@ import java.util.TreeSet;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
 import org.apache.hadoop.hdfs.protocol.DirectoryListing;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
@@ -230,6 +231,21 @@ public class TestRouterRpcMultiDestination extends 
TestRouterRpc {
   }
 
   @Test
+  public void testGetContentSummaryEc() throws Exception {
+DistributedFileSystem routerDFS =
+(DistributedFileSystem) getRouterFileSystem();
+Path dir = new Path("/");
+String expectedECPolicy = "RS-6-3-1024k";
+try {
+  routerDFS.setErasureCodingPolicy(dir, expectedECPolicy);
+  assertEquals(expectedECPolicy,
+  routerDFS.getContentSummary(dir).getErasureCodingPolicy());
+} finally {
+  routerDFS.unsetErasureCodingPolicy(dir);
+}
+  }
+
+  @Test
   public void testSubclusterDown() throws Exception {
 final int totalFiles = 6;
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch HDFS-13891 updated: HDFS-14223. RBF: Add configuration documents for using multiple sub-clusters. Contributed by Takanobu Asanuma.

2019-01-24 Thread brahma
This is an automated email from the ASF dual-hosted git repository.

brahma pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HDFS-13891 by this push:
 new b1d9ff4  HDFS-14223. RBF: Add configuration documents for using 
multiple sub-clusters. Contributed by Takanobu Asanuma.
b1d9ff4 is described below

commit b1d9ff40549ce97f702b33ee4ffa41fde0e65724
Author: Brahma Reddy Battula 
AuthorDate: Fri Jan 25 11:28:48 2019 +0530

HDFS-14223. RBF: Add configuration documents for using multiple 
sub-clusters. Contributed by Takanobu Asanuma.
---
 .../hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml| 3 ++-
 .../hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md  | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
index 20ae778..afe3ad1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml
@@ -275,7 +275,8 @@
   <property>
     <name>dfs.federation.router.file.resolver.client.class</name>
     <value>org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver</value>
     <description>
-      Class to resolve files to subclusters.
+      Class to resolve files to subclusters. To enable multiple subclusters 
for a mount point,
+      set to 
org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver.
     </description>
   </property>
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
index bcf8fa9..2ae0c2b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
@@ -404,7 +404,7 @@ Forwarding client requests to the right subcluster.
 
 | Property | Default | Description|
 |: |: |: |
-| dfs.federation.router.file.resolver.client.class | 
`org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver` | Class 
to resolve files to subclusters. |
+| dfs.federation.router.file.resolver.client.class | 
`org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver` | Class 
to resolve files to subclusters. To enable multiple subclusters for a mount 
point, set to 
org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver.
 |
 | dfs.federation.router.namenode.resolver.client.class | 
`org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver` 
| Class to resolve the namenode for a subcluster. |
 
 ### Namenode monitoring


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
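
A minimal sketch of the setting this patch documents: pointing the Router's 
file resolver at MultipleDestinationMountTableResolver so a single mount point 
can fan out to several subclusters. The helper class name is illustrative.

import org.apache.hadoop.conf.Configuration;

public class MultiDestinationResolverSketch {
  public static Configuration routerConf() {
    Configuration conf = new Configuration();
    // Default is MountTableResolver; this enables multi-subcluster mounts.
    conf.set("dfs.federation.router.file.resolver.client.class",
        "org.apache.hadoop.hdfs.server.federation.resolver."
            + "MultipleDestinationMountTableResolver");
    return conf;
  }
}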



[2/2] hadoop git commit: HDFS-14095. EC: Track Erasure Coding commands in DFS statistics. Contributed by Ayush Saxena.

2018-11-29 Thread brahma
HDFS-14095. EC: Track Erasure Coding commands in DFS statistics. Contributed by 
Ayush Saxena.

(cherry picked from commit f534736867eed962899615ca1b7eb68bcf591d17)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e2fa9e8c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e2fa9e8c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e2fa9e8c

Branch: refs/heads/branch-3.2
Commit: e2fa9e8cddb95789e210e0400a38a676242de968
Parents: a8f67ad
Author: Brahma Reddy Battula 
Authored: Fri Nov 30 00:18:27 2018 +0530
Committer: Brahma Reddy Battula 
Committed: Fri Nov 30 00:28:04 2018 +0530

--
 .../hadoop/hdfs/DFSOpsCountStatistics.java  |  9 +++
 .../hadoop/hdfs/DistributedFileSystem.java  | 18 ++
 .../hadoop/hdfs/TestDistributedFileSystem.java  | 63 +++-
 3 files changed, 89 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2fa9e8c/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
index 3dcf13b..b9852ba 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
@@ -41,6 +41,7 @@ public class DFSOpsCountStatistics extends StorageStatistics {
 
   /** This is for counting distributed file system operations. */
   public enum OpType {
+ADD_EC_POLICY("op_add_ec_policy"),
 ALLOW_SNAPSHOT("op_allow_snapshot"),
 APPEND(CommonStatisticNames.OP_APPEND),
 CONCAT("op_concat"),
@@ -51,10 +52,15 @@ public class DFSOpsCountStatistics extends 
StorageStatistics {
 CREATE_SYM_LINK("op_create_symlink"),
 DELETE(CommonStatisticNames.OP_DELETE),
 DELETE_SNAPSHOT("op_delete_snapshot"),
+DISABLE_EC_POLICY("op_disable_ec_policy"),
 DISALLOW_SNAPSHOT("op_disallow_snapshot"),
+ENABLE_EC_POLICY("op_enable_ec_policy"),
 EXISTS(CommonStatisticNames.OP_EXISTS),
 GET_BYTES_WITH_FUTURE_GS("op_get_bytes_with_future_generation_stamps"),
 GET_CONTENT_SUMMARY(CommonStatisticNames.OP_GET_CONTENT_SUMMARY),
+GET_EC_CODECS("op_get_ec_codecs"),
+GET_EC_POLICY("op_get_ec_policy"),
+GET_EC_POLICIES("op_get_ec_policies"),
 GET_FILE_BLOCK_LOCATIONS("op_get_file_block_locations"),
 GET_FILE_CHECKSUM(CommonStatisticNames.OP_GET_FILE_CHECKSUM),
 GET_FILE_LINK_STATUS("op_get_file_link_status"),
@@ -76,11 +82,13 @@ public class DFSOpsCountStatistics extends 
StorageStatistics {
 REMOVE_ACL(CommonStatisticNames.OP_REMOVE_ACL),
 REMOVE_ACL_ENTRIES(CommonStatisticNames.OP_REMOVE_ACL_ENTRIES),
 REMOVE_DEFAULT_ACL(CommonStatisticNames.OP_REMOVE_DEFAULT_ACL),
+REMOVE_EC_POLICY("op_remove_ec_policy"),
 REMOVE_XATTR("op_remove_xattr"),
 RENAME(CommonStatisticNames.OP_RENAME),
 RENAME_SNAPSHOT("op_rename_snapshot"),
 RESOLVE_LINK("op_resolve_link"),
 SET_ACL(CommonStatisticNames.OP_SET_ACL),
+SET_EC_POLICY("op_set_ec_policy"),
 SET_OWNER(CommonStatisticNames.OP_SET_OWNER),
 SET_PERMISSION(CommonStatisticNames.OP_SET_PERMISSION),
 SET_REPLICATION("op_set_replication"),
@@ -90,6 +98,7 @@ public class DFSOpsCountStatistics extends StorageStatistics {
 GET_SNAPSHOT_DIFF("op_get_snapshot_diff"),
 GET_SNAPSHOTTABLE_DIRECTORY_LIST("op_get_snapshottable_directory_list"),
 TRUNCATE(CommonStatisticNames.OP_TRUNCATE),
+UNSET_EC_POLICY("op_unset_ec_policy"),
 UNSET_STORAGE_POLICY("op_unset_storage_policy");
 
 private static final Map<String, OpType> SYMBOL_MAP =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2fa9e8c/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
index ca1546c..7dd02bd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/
