(hadoop) branch trunk updated: HADOOP-15760. Upgrade commons-collections to commons-collections4 (#7006)

2024-09-24 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e602c601ddd HADOOP-15760. Upgrade commons-collections to commons-collections4 (#7006)
e602c601ddd is described below

commit e602c601ddd98edf098dd9f13811846756cac9c3
Author: Nihal Jain 
AuthorDate: Tue Sep 24 21:20:22 2024 +0530

HADOOP-15760. Upgrade commons-collections to commons-collections4 (#7006)


This moves Hadoop to Apache commons-collections4.

Apache commons-collections has been removed and is completely banned from the source code.

Contributed by Nihal Jain
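
For readers tracking the migration, here is a minimal before/after sketch of the package move this change implies. It is illustrative only and not taken from the diff; the class and list are placeholders.

import java.util.Collections;
import java.util.List;

// Before (commons-collections 3, now banned):
//   import org.apache.commons.collections.CollectionUtils;
// After (commons-collections4, new Maven coordinates
// org.apache.commons:commons-collections4):
import org.apache.commons.collections4.CollectionUtils;

public class CollectionsMigrationExample {
  public static void main(String[] args) {
    List<String> names = Collections.emptyList();
    // Same helper API, new package and artifact.
    System.out.println(CollectionUtils.isEmpty(names)); // prints: true
  }
}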
---
 LICENSE-binary|  3 +--
 .../hadoop-client-minicluster/pom.xml |  4 
 hadoop-common-project/hadoop-auth/pom.xml |  4 
 hadoop-common-project/hadoop-common/pom.xml   | 10 --
 .../java/org/apache/hadoop/conf/Configuration.java|  6 +++---
 .../src/main/java/org/apache/hadoop/fs/FileUtil.java  |  2 +-
 .../hadoop/security/JniBasedUnixGroupsMapping.java|  2 +-
 .../java/org/apache/hadoop/hdfs/DFSUtilClient.java|  2 +-
 .../token/delegation/DelegationTokenIdentifier.java   |  2 +-
 .../hadoop/hdfs/shortcircuit/ShortCircuitCache.java   |  2 +-
 .../federation/resolver/order/RandomResolver.java |  2 +-
 .../server/federation/metrics/TestRBFMetrics.java |  2 +-
 .../hadoop/hdfs/server/datanode/DirectoryScanner.java |  2 +-
 .../fsdataset/impl/ReplicaCachingGetSpaceUsed.java|  2 +-
 .../hadoop/fs/TestEnhancedByteBufferAccess.java   |  2 +-
 .../namenode/snapshot/TestSnapshotDiffReport.java |  2 +-
 .../hdfs/shortcircuit/TestShortCircuitCache.java  |  2 +-
 .../main/java/org/apache/hadoop/mapred/Counters.java  |  2 +-
 .../hadoop-mapreduce-client/pom.xml   |  4 ++--
 hadoop-project/pom.xml| 19 +++
 .../hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java |  2 +-
 .../fs/aliyun/oss/AliyunOSSFileSystemStore.java   |  2 +-
 .../hadoop/fs/s3a/impl/CopyFromLocalOperation.java|  2 +-
 .../java/org/apache/hadoop/yarn/sls/SLSRunner.java|  2 +-
 .../hadoop-yarn-applications-catalog-webapp/pom.xml   |  8 
 .../org/apache/hadoop/yarn/client/cli/RouterCLI.java  |  4 ++--
 .../pom.xml   |  4 ++--
 .../yarn/server/timeline/LeveldbTimelineStore.java|  2 +-
 .../server/timeline/RollingLevelDBTimelineStore.java  |  2 +-
 .../server/timeline/security/TimelineACLsManager.java |  2 +-
 .../amrmproxy/LocalityMulticastAMRMProxyPolicy.java   |  2 +-
 .../federation/policies/dao/WeightedPolicyInfo.java   |  2 +-
 .../utils/FederationPolicyStoreInputValidator.java|  2 +-
 .../federation/utils/FederationRegistryClient.java|  2 +-
 .../federation/utils/FederationStateStoreFacade.java  |  4 ++--
 .../policygenerator/LoadBasedGlobalPolicy.java|  2 +-
 .../yarn/server/resourcemanager/NodesListManager.java |  2 +-
 .../resourcemanager/ResourceTrackerService.java   |  2 +-
 .../ProportionalCapacityPreemptionPolicy.java |  2 +-
 .../server/resourcemanager/rmnode/RMNodeImpl.java |  2 +-
 .../scheduler/activities/ActivitiesManager.java   |  2 +-
 .../scheduler/activities/AppAllocation.java   |  2 +-
 .../CapacitySchedulerQueueCapacityHandler.java|  2 +-
 .../scheduler/placement/AppPlacementAllocator.java|  2 +-
 .../scheduler/placement/MultiNodeSortingManager.java  |  2 +-
 .../hadoop/yarn/server/router/RouterServerUtil.java   |  2 +-
 .../router/rmadmin/FederationRMAdminInterceptor.java  |  4 ++--
 .../hadoop/yarn/server/router/webapp/AppsBlock.java   |  2 +-
 .../router/webapp/FederationInterceptorREST.java  |  2 +-
 .../hadoop/yarn/server/router/webapp/NodesBlock.java  |  2 +-
 .../hadoop/yarn/server/router/webapp/RouterBlock.java |  2 +-
 .../router/clientrm/TestRouterYarnClientUtils.java|  2 +-
 .../router/clientrm/TestSequentialRouterPolicy.java   |  2 +-
 .../rmadmin/TestableFederationRMAdminInterceptor.java |  2 +-
 .../yarn/server/router/secure/TestSecureLogins.java   |  2 +-
 .../router/subcluster/TestFederationSubCluster.java   |  2 +-
 .../pom.xml   |  4 
 57 files changed, 101 insertions(+), 65 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 39c567fb01f..ada9deaff23 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -246,7 +246,7 @@ com.zaxxer:HikariCP:4.0.3
 commons-beanutils:commons-beanutils:1.9.4
 commons-cli:commons-cli:1.5.0
 commons-codec:commons-codec:1.15
-commons-collections:commons-collections:3.2.2
+org.apache.commons:commons-collections4:4.4
 commons-daemon:commons-daemon:1.0.13
 commons-io:commons-io:2.16.1
 commons-net:commons-net:3.9.0
@@ -299,7 +299,6 @@ net.java.dev.jna:jna:5.2.0

(hadoop) branch branch-3.4 updated: HADOOP-19272. S3A: AWS SDK 2.25.53 warnings logged by transfer manager (#7048)

2024-09-19 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new f3600d28ab9 HADOOP-19272. S3A: AWS SDK 2.25.53 warnings logged by transfer manager (#7048)
f3600d28ab9 is described below

commit f3600d28ab9a396e628034bad3aabfa212fe4339
Author: Steve Loughran 
AuthorDate: Thu Sep 19 13:50:06 2024 +0100

HADOOP-19272. S3A: AWS SDK 2.25.53 warnings logged by transfer manager (#7048)

Disables all logging below error in the AWS SDK Transfer Manager.

This is done in ClientManagerImpl construction, so it is automatically done
during S3A FS initialization.

ITests verify that
* It is possible to restore the warning log. This verifies the validity of
  the test suite, and will identify when an SDK update fixes this regression.
* Constructing an S3A FS instance will disable the logging.

The log manipulation code is lifted from Cloudstore, where it was used to
dynamically enable logging. It uses reflection to load the Log4J binding;
all uses of the API catch and swallow exceptions.
This is needed to avoid failures when running against different log backends.

This is an emergency fix; we could come up with a better design for
the reflection-based code using the new DynMethods classes.
But this is based on working code, which is always good.

Contributed by Steve Loughran
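
As a rough illustration of the reflection pattern described above, here is a minimal standalone sketch; it is not the committed Hadoop code, and the class and method names are assumptions. It looks up the Log4J 1.x binding at runtime and swallows every failure, so other log backends are unaffected.

public final class ReflectiveLogTuning {

  /**
   * Try to set a Log4J 1.x logger's level by reflection.
   * Returns false rather than throwing if the binding is absent,
   * matching the "catch and swallow" behaviour described above.
   */
  public static boolean trySetLogLevel(String logName, String level) {
    try {
      Class<?> logManager = Class.forName("org.apache.log4j.LogManager");
      Object logger = logManager.getMethod("getLogger", String.class)
          .invoke(null, logName);
      Class<?> levelClass = Class.forName("org.apache.log4j.Level");
      Object levelValue = levelClass.getMethod("toLevel", String.class)
          .invoke(null, level);
      logger.getClass().getMethod("setLevel", levelClass)
          .invoke(logger, levelValue);
      return true;
    } catch (ReflectiveOperationException | RuntimeException e) {
      // Different or absent log backend: ignore and report failure.
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println(trySetLogLevel(
        "software.amazon.awssdk.transfer.s3.S3TransferManager", "ERROR"));
  }
}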
---
 .../hadoop/fs/s3a/impl/AwsSdkWorkarounds.java  |  59 ++
 .../hadoop/fs/s3a/impl/ClientManagerImpl.java  |   6 +
 .../fs/s3a/impl/logging/Log4JController.java   |  52 +
 .../hadoop/fs/s3a/impl/logging/LogControl.java |  92 +
 .../fs/s3a/impl/logging/LogControllerFactory.java  |  98 ++
 .../hadoop/fs/s3a/impl/logging/package-info.java   |  26 +++
 .../hadoop/fs/s3a/impl/ITestAwsSdkWorkarounds.java | 160 +++
 .../s3a/impl/logging/TestLogControllerFactory.java | 214 +
 8 files changed, 707 insertions(+)

diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/AwsSdkWorkarounds.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/AwsSdkWorkarounds.java
new file mode 100644
index 000..a0673b123b2
--- /dev/null
+++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/AwsSdkWorkarounds.java
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.s3a.impl.logging.LogControl;
+import org.apache.hadoop.fs.s3a.impl.logging.LogControllerFactory;
+
+/**
+ * This class exists to support workarounds for parts of the AWS SDK
+ * which have caused problems.
+ */
+public final class AwsSdkWorkarounds {
+
+  /**
+   * Transfer manager log name. See HADOOP-19272.
+   * {@value}.
+   */
+  public static final String TRANSFER_MANAGER =
+  "software.amazon.awssdk.transfer.s3.S3TransferManager";
+
+  private AwsSdkWorkarounds() {
+  }
+
+  /**
+   * Prepare logging before creating AWS clients.
+   * @return true if the log tuning operation took place.
+   */
+  public static boolean prepareLogging() {
+return LogControllerFactory.createController().
+setLogLevel(TRANSFER_MANAGER, LogControl.LogLevel.ERROR);
+  }
+
+  /**
+   * Restore all noisy logs to INFO.
+   * @return true if the restoration operation took place.
+   */
+  @VisibleForTesting
+  static boolean restoreNoisyLogging() {
+return LogControllerFactory.createController().
+setLogLevel(TRANSFER_MANAGER, LogControl.LogLevel.INFO);
+  }
+}
diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ClientManagerImpl.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ClientManagerImpl.java
index 4b2fc1c599b..24c37cc564a 100644
--- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ClientManagerImpl.java
+++ b/hadoop-tools/hadoop-aws/src/main/java

(hadoop) branch trunk updated (d1311e52f78 -> ee2e5ac4e41)

2024-09-19 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from d1311e52f78 YARN-11709. NodeManager should be marked unhealthy on localizer config issues (#7043)
 add ee2e5ac4e41 HADOOP-19272. S3A: AWS SDK 2.25.53 warnings logged by transfer manager (#7048)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/fs/s3a/impl/AwsSdkWorkarounds.java  |  59 ++
 .../hadoop/fs/s3a/impl/ClientManagerImpl.java  |   6 +
 .../fs/s3a/impl/logging/Log4JController.java   |  52 +
 .../hadoop/fs/s3a/impl/logging/LogControl.java |  92 +
 .../fs/s3a/impl/logging/LogControllerFactory.java  |  98 ++
 .../hadoop/fs/s3a/impl/logging}/package-info.java  |   8 +-
 .../hadoop/fs/s3a/impl/ITestAwsSdkWorkarounds.java | 160 +++
 .../s3a/impl/logging/TestLogControllerFactory.java | 214 +
 8 files changed, 686 insertions(+), 3 deletions(-)
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/AwsSdkWorkarounds.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/logging/Log4JController.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/logging/LogControl.java
 create mode 100644 hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/logging/LogControllerFactory.java
 copy {hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service => hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/logging}/package-info.java (85%)
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/ITestAwsSdkWorkarounds.java
 create mode 100644 hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/logging/TestLogControllerFactory.java





(hadoop) branch branch-3.4.1 updated: Revert "HADOOP-19195. S3A: Upgrade aws sdk v2 to 2.25.53 (#6900)"

2024-09-16 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4.1 by this push:
 new a9fb087ff41 Revert "HADOOP-19195. S3A: Upgrade aws sdk v2 to 2.25.53 (#6900)"
a9fb087ff41 is described below

commit a9fb087ff4107eb49b7b12456ebccd4a03bd2d89
Author: Steve Loughran 
AuthorDate: Mon Sep 16 14:21:34 2024 +0100

Revert "HADOOP-19195. S3A: Upgrade aws sdk v2 to 2.25.53 (#6900)"

This reverts commit fc86a52c884f15b2f2fb401bbf0baaa36a057651.

This rollback is due to:

HADOOP-19272. S3A: AWS SDK 2.25.53 warnings logged about transfer manager not using CRT client

Change-Id: I324f75d62daa02650ff9d199a2e0fc465a2ea28a
---
 LICENSE-binary | 2 +-
 hadoop-project/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 3e5698dea96..b064b6a15d1 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -362,7 +362,7 @@ org.objenesis:objenesis:2.6
 org.xerial.snappy:snappy-java:1.1.10.4
 org.yaml:snakeyaml:2.0
 org.wildfly.openssl:wildfly-openssl:1.1.3.Final
-software.amazon.awssdk:bundle:2.25.53
+software.amazon.awssdk:bundle:jar:2.24.6
 
 
 

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 6416cc8b7dc..bcba56eced6 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -187,7 +187,7 @@
 1.0-beta-1
 900
 1.12.720
-2.25.53
+2.24.6
 1.0.1
 2.7.1
 1.11.2





(hadoop) branch branch-3.4.1 updated: HADOOP-19201. S3A. Support external-id in assume role (#6876)

2024-09-13 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4.1 by this push:
 new 87d9bb60229 HADOOP-19201. S3A. Support external-id in assume role (#6876)
87d9bb60229 is described below

commit 87d9bb602290063f324ca09702bb030bd3fbbba6
Author: Smith Cruise 
AuthorDate: Tue Sep 10 22:38:32 2024 +0800

HADOOP-19201. S3A. Support external-id in assume role (#6876)

The option fs.s3a.assumed.role.external.id sets the external id for calls of AssumeRole to the STS service.

Contributed by Smith Cruise
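
A minimal sketch of enabling the new option programmatically; the property names are taken from the diff below, while the role ARN and external id values are illustrative placeholders:

import org.apache.hadoop.conf.Configuration;

public class ExternalIdExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Use the assumed-role credential provider with an external id.
    conf.set("fs.s3a.aws.credentials.provider",
        "org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider");
    conf.set("fs.s3a.assumed.role.arn",
        "arn:aws:iam::123456789012:role/example-role");
    conf.set("fs.s3a.assumed.role.external.id", "example-external-id");
    System.out.println(conf.get("fs.s3a.assumed.role.external.id"));
  }
}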
---
 .../src/main/java/org/apache/hadoop/fs/s3a/Constants.java | 5 +
 .../apache/hadoop/fs/s3a/auth/AssumedRoleCredentialProvider.java  | 5 +
 .../src/site/markdown/tools/hadoop-aws/assumed_roles.md   | 8 
 3 files changed, 18 insertions(+)

diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
index 5ce1b49864a..7e614bc11d6 100644
--- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
+++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
@@ -94,6 +94,11 @@ public final class Constants {
   public static final String ASSUMED_ROLE_ARN =
   "fs.s3a.assumed.role.arn";
 
+  /**
+   * external id for assume role request: {@value}.
+   */
+  public static final String ASSUMED_ROLE_EXTERNAL_ID = "fs.s3a.assumed.role.external.id";
+
   /**
* Session name for the assumed role, must be valid characters according
* to the AWS APIs: {@value}.
diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AssumedRoleCredentialProvider.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AssumedRoleCredentialProvider.java
index c2ac8fe4c81..ce20684feca 100644
--- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AssumedRoleCredentialProvider.java
+++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AssumedRoleCredentialProvider.java
@@ -125,6 +125,7 @@ public final class AssumedRoleCredentialProvider implements AwsCredentialsProvid
 duration = conf.getTimeDuration(ASSUMED_ROLE_SESSION_DURATION,
 ASSUMED_ROLE_SESSION_DURATION_DEFAULT, TimeUnit.SECONDS);
 String policy = conf.getTrimmed(ASSUMED_ROLE_POLICY, "");
+String externalId = conf.getTrimmed(ASSUMED_ROLE_EXTERNAL_ID, "");
 
 LOG.debug("{}", this);
 
@@ -132,6 +133,10 @@ public final class AssumedRoleCredentialProvider implements AwsCredentialsProvid
 AssumeRoleRequest.builder().roleArn(arn).roleSessionName(sessionName)
 .durationSeconds((int) duration);
 
+if (StringUtils.isNotEmpty(externalId)) {
+  requestBuilder.externalId(externalId);
+}
+
 if (StringUtils.isNotEmpty(policy)) {
   LOG.debug("Scope down policy {}", policy);
   requestBuilder.policy(policy);
diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/assumed_roles.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/assumed_roles.md
index 065a757f217..ba1bc4b362c 100644
--- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/assumed_roles.md
+++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/assumed_roles.md
@@ -153,6 +153,14 @@ Here are the full set of configuration options.
   
 
 
+<property>
+  <name>fs.s3a.assumed.role.external.id</name>
+  <value>arbitrary value, specific by user in AWS console</value>
+  <description>
+    External id for assumed role, it's an optional configuration.
+    "https://aws.amazon.com/cn/blogs/security/how-to-use-external-id-when-granting-access-to-your-aws-resources/"
+  </description>
+</property>
+
 
   fs.s3a.assumed.role.policy
   





(hadoop) branch trunk updated (8c41fbcaf54 -> 6881d12da4b)

2024-09-09 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 8c41fbcaf54 Revert "YARN-11709. NodeManager should be shut down or blacklisted when it ca…" (#7028)
 add 6881d12da4b HADOOP-19262: Upgrade wildfly-openssl:1.1.3.Final to 2.1.4.Final to support Java17+ (#7026)

No new revisions were added by this update.

Summary of changes:
 LICENSE-binary | 2 +-
 hadoop-project/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)





(hadoop) branch branch-3.4.1 updated: HADOOP-19252. Upgrade hadoop-thirdparty from 1.2.0 to 1.3.0 (#7007) (#7014)

2024-09-05 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4.1 by this push:
 new 5db50fb995b HADOOP-19252. Upgrade hadoop-thirdparty from 1.2.0 to 1.3.0 (#7007) (#7014)
5db50fb995b is described below

commit 5db50fb995b29bb301546b98866beb7b48cc8405
Author: Steve Loughran 
AuthorDate: Thu Sep 5 20:53:13 2024 +0100

HADOOP-19252. Upgrade hadoop-thirdparty from 1.2.0 to 1.3.0 (#7007) (#7014)

Update the version of hadoop-thirdparty to 1.3.0
across all shaded artifacts used.

This synchronizes the shaded protobuf library with those of
all other shaded artifacts (guava, avro)

Note: this patch moves from 1.2.0; the trunk PR moves from 1.3.0-SNAPSHOT and is slightly different.

Contributed by Steve Loughran
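
For context, the shaded artifacts relocate their packages under the org.apache.hadoop.thirdparty prefix. A minimal sketch of consuming the shaded guava, assuming the standard relocation prefix; the class choice is illustrative:

// Shaded guava: same API as upstream Guava, relocated package.
import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;

public class ShadedGuavaExample {
  public static void main(String[] args) {
    Preconditions.checkNotNull(args, "args must not be null");
    System.out.println("using the shaded guava relocation");
  }
}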
---
 LICENSE-binary  | 15 ---
 hadoop-common-project/hadoop-common/pom.xml |  2 +-
 hadoop-project/pom.xml  |  6 +++---
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml |  6 +++---
 4 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 887e7070967..3e5698dea96 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -232,19 +232,19 @@ com.google:guice:4.0
 com.google:guice-servlet:4.0
 com.google.api.grpc:proto-google-common-protos:1.0.0
 com.google.code.gson:2.9.0
-com.google.errorprone:error_prone_annotations:2.2.0
-com.google.j2objc:j2objc-annotations:1.1
+com.google.errorprone:error_prone_annotations:2.5.1
+com.google.j2objc:j2objc-annotations:1.3
 com.google.json-simple:json-simple:1.1.1
 com.google.guava:failureaccess:1.0
 com.google.guava:guava:20.0
-com.google.guava:guava:27.0-jre
+com.google.guava:guava:32.0.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
 com.microsoft.azure:azure-storage:7.0.0
 com.nimbusds:nimbus-jose-jwt:9.37.2
 com.zaxxer:HikariCP:4.0.3
 commons-beanutils:commons-beanutils:1.9.4
 commons-cli:commons-cli:1.5.0
-commons-codec:commons-codec:1.11
+commons-codec:commons-codec:1.15
 commons-collections:commons-collections:3.2.2
 commons-daemon:commons-daemon:1.0.13
 commons-io:commons-io:2.16.1
@@ -297,6 +297,7 @@ javax.inject:javax.inject:1
 net.java.dev.jna:jna:5.2.0
 net.minidev:accessors-smart:1.2
 org.apache.avro:avro:1.9.2
+org.apache.avro:avro:1.11.3
 org.apache.commons:commons-collections4:4.2
 org.apache.commons:commons-compress:1.26.1
 org.apache.commons:commons-configuration2:2.10.1
@@ -361,7 +362,7 @@ org.objenesis:objenesis:2.6
 org.xerial.snappy:snappy-java:1.1.10.4
 org.yaml:snakeyaml:2.0
 org.wildfly.openssl:wildfly-openssl:1.1.3.Final
-software.amazon.awssdk:bundle:jar:2.25.53
+software.amazon.awssdk:bundle:2.25.53
 
 
 

@@ -394,7 +395,7 @@ hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/d3-3.5.17.min.js
 leveldb v1.13
 
 com.google.protobuf:protobuf-java:2.5.0
-com.google.protobuf:protobuf-java:3.6.1
+com.google.protobuf:protobuf-java:3.25.3
 com.google.re2j:re2j:1.1
 com.jcraft:jsch:0.1.55
 com.thoughtworks.paranamer:paranamer:2.3
@@ -484,7 +485,7 @@ com.microsoft.sqlserver:mssql-jdbc:6.2.1.jre7
 org.bouncycastle:bcpkix-jdk18on:1.78.1
 org.bouncycastle:bcprov-jdk18on:1.78.1
 org.bouncycastle:bcutil-jdk18on:1.78.1
-org.checkerframework:checker-qual:2.5.2
+org.checkerframework:checker-qual:3.8.0
 org.codehaus.mojo:animal-sniffer-annotations:1.21
 org.jruby.jcodings:jcodings:1.0.13
 org.jruby.joni:joni:2.1.2
diff --git a/hadoop-common-project/hadoop-common/pom.xml b/hadoop-common-project/hadoop-common/pom.xml
index 31fd2923e99..bd0e6a3edc1 100644
--- a/hadoop-common-project/hadoop-common/pom.xml
+++ b/hadoop-common-project/hadoop-common/pom.xml
@@ -40,7 +40,7 @@
   <dependencies>
     <dependency>
       <groupId>org.apache.hadoop.thirdparty</groupId>
-      <artifactId>hadoop-shaded-protobuf_3_21</artifactId>
+      <artifactId>hadoop-shaded-protobuf_3_25</artifactId>
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop</groupId>
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 780b1461857..6416cc8b7dc 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -93,10 +93,10 @@
 
 
${common.protobuf2.scope}
 
-3.21.12
+3.23.4
 ${env.HADOOP_PROTOC_PATH}
 
-1.2.0
+1.3.0
 
${hadoop-thirdparty.version}
 
${hadoop-thirdparty.version}
 
org.apache.hadoop.thirdparty
@@ -250,7 +250,7 @@
      </dependency>
      <dependency>
        <groupId>org.apache.hadoop.thirdparty</groupId>
-        <artifactId>hadoop-shaded-protobuf_3_21</artifactId>
+        <artifactId>hadoop-shaded-protobuf_3_25</artifactId>
        <version>${hadoop-thirdparty-protobuf.version}</version>
      </dependency>
      <dependency>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
index 473078225e0..2b49a0ada12 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
+++ b/hadoop-yarn-project/hadoop

(hadoop) branch branch-3.4.1 updated: HADOOP-18938. S3A: Fix endpoint region parsing for vpc endpoints. (#6466)

2024-09-05 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4.1 by this push:
 new 6abd818c1c9 HADOOP-18938. S3A: Fix endpoint region parsing for vpc endpoints. (#6466)
6abd818c1c9 is described below

commit 6abd818c1c9c38e3452bbe2dd2e1b23bb7cd573b
Author: Shintaro Onuma <31045635+shintaroon...@users.noreply.github.com>
AuthorDate: Thu Sep 5 14:14:04 2024 +0100

HADOOP-18938. S3A: Fix endpoint region parsing for vpc endpoints. (#6466)

Contributed by Shintaro Onuma
---
 .../hadoop/fs/s3a/DefaultS3ClientFactory.java  | 16 +++-
 .../hadoop/fs/s3a/ITestS3AEndpointRegion.java  | 13 ++-
 .../hadoop/fs/s3a/TestS3AEndpointParsing.java  | 43 ++
 3 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
index 7f6978e8e92..4b3db999247 100644
--- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
+++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
@@ -21,6 +21,8 @@ package org.apache.hadoop.fs.s3a;
 import java.io.IOException;
 import java.net.URI;
 import java.net.URISyntaxException;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
 
 import org.apache.hadoop.classification.VisibleForTesting;
 import org.apache.hadoop.fs.s3a.impl.AWSClientConfig;
@@ -82,6 +84,9 @@ public class DefaultS3ClientFactory extends Configured
 
   private static final String S3_SERVICE_NAME = "s3";
 
+  private static final Pattern VPC_ENDPOINT_PATTERN =
+      Pattern.compile("^(?:.+\\.)?([a-z0-9-]+)\\.vpce\\.amazonaws\\.(?:com|com\\.cn)$");
+
   /**
* Subclasses refer to this.
*/
@@ -380,10 +385,19 @@ public class DefaultS3ClientFactory extends Configured
* @param endpointEndsWithCentral true if the endpoint is configured as central.
* @return the S3 region, null if unable to resolve from endpoint.
*/
-  private static Region getS3RegionFromEndpoint(final String endpoint,
+  @VisibleForTesting
+  static Region getS3RegionFromEndpoint(final String endpoint,
   final boolean endpointEndsWithCentral) {
 
 if (!endpointEndsWithCentral) {
+  // S3 VPC endpoint parsing
+  Matcher matcher = VPC_ENDPOINT_PATTERN.matcher(endpoint);
+  if (matcher.find()) {
+LOG.debug("Mapping to VPCE");
+LOG.debug("Endpoint {} is vpc endpoint; parsing region as {}", 
endpoint, matcher.group(1));
+return Region.of(matcher.group(1));
+  }
+
   LOG.debug("Endpoint {} is not the default; parsing", endpoint);
   return AwsHostNameUtils.parseSigningRegion(endpoint, S3_SERVICE_NAME).orElse(null);
 }
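
A small standalone sketch of what this pattern extracts; the endpoints are the test constants from this commit, everything else is illustrative:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VpceRegionParsing {
  // Same pattern as VPC_ENDPOINT_PATTERN above.
  private static final Pattern VPCE = Pattern.compile(
      "^(?:.+\\.)?([a-z0-9-]+)\\.vpce\\.amazonaws\\.(?:com|com\\.cn)$");

  public static void main(String[] args) {
    for (String endpoint : new String[] {
        "vpce-1a2b3c4d-5e6f.s3.us-west-2.vpce.amazonaws.com",
        "vpce-1a2b3c4d-5e6f.s3.cn-northwest-1.vpce.amazonaws.com.cn"}) {
      Matcher m = VPCE.matcher(endpoint);
      // group(1) is the region segment immediately before ".vpce."
      System.out.println(m.find() ? m.group(1) : "no match");
    }
  }
}

Run as-is, this prints us-west-2 and then cn-northwest-1, the regions getS3RegionFromEndpoint resolves for these endpoints.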
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java
index 8403b6bd6cb..d06224df5b3 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java
@@ -97,6 +97,8 @@ public class ITestS3AEndpointRegion extends AbstractS3ATestBase {
 
  private static final String VPC_ENDPOINT = "vpce-1a2b3c4d-5e6f.s3.us-west-2.vpce.amazonaws.com";

+  private static final String CN_VPC_ENDPOINT = "vpce-1a2b3c4d-5e6f.s3.cn-northwest-1.vpce.amazonaws.com.cn";
+
  public static final String EXCEPTION_THROWN_BY_INTERCEPTOR = "Exception thrown by interceptor";
 
   /**
@@ -294,7 +296,6 @@ public class ITestS3AEndpointRegion extends AbstractS3ATestBase {
   }
 
   @Test
-  @Ignore("Pending HADOOP-18938. S3A region logic to handle vpce and non 
standard endpoints")
   public void testWithVPCE() throws Throwable {
 describe("Test with vpc endpoint");
 Configuration conf = getConfiguration();
@@ -304,6 +305,16 @@ public class ITestS3AEndpointRegion extends AbstractS3ATestBase {
 expectInterceptorException(client);
   }
 
+  @Test
+  public void testWithChinaVPCE() throws Throwable {
+describe("Test with china vpc endpoint");
+Configuration conf = getConfiguration();
+
+S3Client client = createS3Client(conf, CN_VPC_ENDPOINT, null, CN_NORTHWEST_1, false);
+
+expectInterceptorException(client);
+  }
+
   @Test
  public void testCentralEndpointAndDifferentRegionThanBucket() throws Throwable {
 describe("Access public bucket using central endpoint and region "
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AEnd

(hadoop) branch branch-3.4 updated: HADOOP-18938. S3A: Fix endpoint region parsing for vpc endpoints. (#6466)

2024-09-05 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new 343bf5ffdb8 HADOOP-18938. S3A: Fix endpoint region parsing for vpc endpoints. (#6466)
343bf5ffdb8 is described below

commit 343bf5ffdb827f070ab9e4535571a83f0a540e83
Author: Shintaro Onuma <31045635+shintaroon...@users.noreply.github.com>
AuthorDate: Thu Sep 5 14:14:04 2024 +0100

HADOOP-18938. S3A: Fix endpoint region parsing for vpc endpoints. (#6466)

Contributed by Shintaro Onuma
---
 .../hadoop/fs/s3a/DefaultS3ClientFactory.java  | 16 +++-
 .../hadoop/fs/s3a/ITestS3AEndpointRegion.java  | 13 ++-
 .../hadoop/fs/s3a/TestS3AEndpointParsing.java  | 43 ++
 3 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
index 7f6978e8e92..4b3db999247 100644
--- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
+++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
@@ -21,6 +21,8 @@ package org.apache.hadoop.fs.s3a;
 import java.io.IOException;
 import java.net.URI;
 import java.net.URISyntaxException;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
 
 import org.apache.hadoop.classification.VisibleForTesting;
 import org.apache.hadoop.fs.s3a.impl.AWSClientConfig;
@@ -82,6 +84,9 @@ public class DefaultS3ClientFactory extends Configured
 
   private static final String S3_SERVICE_NAME = "s3";
 
+  private static final Pattern VPC_ENDPOINT_PATTERN =
+      Pattern.compile("^(?:.+\\.)?([a-z0-9-]+)\\.vpce\\.amazonaws\\.(?:com|com\\.cn)$");
+
   /**
* Subclasses refer to this.
*/
@@ -380,10 +385,19 @@ public class DefaultS3ClientFactory extends Configured
* @param endpointEndsWithCentral true if the endpoint is configured as central.
* @return the S3 region, null if unable to resolve from endpoint.
*/
-  private static Region getS3RegionFromEndpoint(final String endpoint,
+  @VisibleForTesting
+  static Region getS3RegionFromEndpoint(final String endpoint,
   final boolean endpointEndsWithCentral) {
 
 if (!endpointEndsWithCentral) {
+  // S3 VPC endpoint parsing
+  Matcher matcher = VPC_ENDPOINT_PATTERN.matcher(endpoint);
+  if (matcher.find()) {
+LOG.debug("Mapping to VPCE");
+LOG.debug("Endpoint {} is vpc endpoint; parsing region as {}", 
endpoint, matcher.group(1));
+return Region.of(matcher.group(1));
+  }
+
   LOG.debug("Endpoint {} is not the default; parsing", endpoint);
   return AwsHostNameUtils.parseSigningRegion(endpoint, S3_SERVICE_NAME).orElse(null);
 }
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java
index 8403b6bd6cb..d06224df5b3 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java
@@ -97,6 +97,8 @@ public class ITestS3AEndpointRegion extends AbstractS3ATestBase {
 
  private static final String VPC_ENDPOINT = "vpce-1a2b3c4d-5e6f.s3.us-west-2.vpce.amazonaws.com";

+  private static final String CN_VPC_ENDPOINT = "vpce-1a2b3c4d-5e6f.s3.cn-northwest-1.vpce.amazonaws.com.cn";
+
  public static final String EXCEPTION_THROWN_BY_INTERCEPTOR = "Exception thrown by interceptor";
 
   /**
@@ -294,7 +296,6 @@ public class ITestS3AEndpointRegion extends AbstractS3ATestBase {
   }
 
   @Test
-  @Ignore("Pending HADOOP-18938. S3A region logic to handle vpce and non 
standard endpoints")
   public void testWithVPCE() throws Throwable {
 describe("Test with vpc endpoint");
 Configuration conf = getConfiguration();
@@ -304,6 +305,16 @@ public class ITestS3AEndpointRegion extends AbstractS3ATestBase {
 expectInterceptorException(client);
   }
 
+  @Test
+  public void testWithChinaVPCE() throws Throwable {
+describe("Test with china vpc endpoint");
+Configuration conf = getConfiguration();
+
+S3Client client = createS3Client(conf, CN_VPC_ENDPOINT, null, CN_NORTHWEST_1, false);
+
+expectInterceptorException(client);
+  }
+
   @Test
  public void testCentralEndpointAndDifferentRegionThanBucket() throws Throwable {
 describe("Access public bucket using central endpoint and region "
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AEndpointPa

(hadoop) branch trunk updated: Revert "YARN-11664. Remove HDFS Binaries/Jars Dependency From Yarn (#6631)"

2024-09-05 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 57e62ae07f1 Revert "YARN-11664. Remove HDFS Binaries/Jars Dependency From Yarn (#6631)"
57e62ae07f1 is described below

commit 57e62ae07f1c4eb8adccd9c61fc909080ca76c53
Author: Steve Loughran 
AuthorDate: Thu Sep 5 14:35:50 2024 +0100

Revert "YARN-11664. Remove HDFS Binaries/Jars Dependency From Yarn (#6631)"

This reverts commit 6c01490f14b65f43196e1f235c51749a712e7338.
---
 .../org/apache/hadoop/fs/HdfsCommonConstants.java  | 47 --
 .../hdfs/protocol/datatransfer/package-info.java   | 25 
 .../hdfs/protocol/datatransfer/IOStreamPair.java   |  6 +--
 .../delegation/DelegationTokenIdentifier.java  |  9 +
 .../java/org/apache/hadoop/hdfs/DFSConfigKeys.java |  7 +---
 .../apache/hadoop/yarn/service/ServiceMaster.java  |  4 +-
 .../hadoop/yarn/service/client/ServiceClient.java  |  6 +--
 .../hadoop/yarn/service/client/TestServiceCLI.java |  4 +-
 .../yarn/logaggregation/AggregatedLogFormat.java   |  6 +--
 .../tfile/LogAggregationTFileController.java   |  4 +-
 10 files changed, 18 insertions(+), 100 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HdfsCommonConstants.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HdfsCommonConstants.java
deleted file mode 100644
index f6c3ca4517d..000
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HdfsCommonConstants.java
+++ /dev/null
@@ -1,47 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs;
-
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.hadoop.io.Text;
-
-/**
- * This class contains constants for configuration keys and default values.
- */
-@InterfaceAudience.LimitedPrivate({"YARN", "HDFS"})
-@InterfaceStability.Evolving
-public final class HdfsCommonConstants {
-
-  /**
-   * Represents the kind of delegation token used for HDFS.
-   * This is a constant string value "HDFS_DELEGATION_TOKEN".
-   */
-  public static final Text HDFS_DELEGATION_KIND =
-  new Text("HDFS_DELEGATION_TOKEN");
-
-  /**
-   * DFS_ADMIN configuration: {@value}.
-   */
-  public static final String DFS_ADMIN = "dfs.cluster.administrators";
-
-  private HdfsCommonConstants() {
-  }
-
-}
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/package-info.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/package-info.java
deleted file mode 100644
index d2b8638b96e..000
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/package-info.java
+++ /dev/null
@@ -1,25 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *  http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/**
- * This package provides access to following class.
- * {@link org.apache.hadoop.hdfs.protocol.datatransfer.IOStreamPair} class.
- */
-@InterfaceAudience.Private
-package org.apache.hadoop.hdfs.protocol.datatransfer;
-
-import org.apache.h

(hadoop) branch trunk updated: HADOOP-18938. S3A: Fix endpoint region parsing for vpc endpoints. (#6466)

2024-09-05 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 1f302e83fd9 HADOOP-18938. S3A: Fix endpoint region parsing for vpc endpoints. (#6466)
1f302e83fd9 is described below

commit 1f302e83fd93366544ccbe2bc5ee2de305e65cb6
Author: Shintaro Onuma <31045635+shintaroon...@users.noreply.github.com>
AuthorDate: Thu Sep 5 14:14:04 2024 +0100

HADOOP-18938. S3A: Fix endpoint region parsing for vpc endpoints. (#6466)


Contributed by Shintaro Onuma
---
 .../hadoop/fs/s3a/DefaultS3ClientFactory.java  | 16 +++-
 .../hadoop/fs/s3a/ITestS3AEndpointRegion.java  | 13 ++-
 .../hadoop/fs/s3a/TestS3AEndpointParsing.java  | 43 ++
 3 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
index ba9fc080c2c..c52454ac15c 100644
--- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
+++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
@@ -21,6 +21,8 @@ package org.apache.hadoop.fs.s3a;
 import java.io.IOException;
 import java.net.URI;
 import java.net.URISyntaxException;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
 
 import org.apache.hadoop.classification.VisibleForTesting;
 import org.apache.hadoop.fs.s3a.impl.AWSClientConfig;
@@ -85,6 +87,9 @@ public class DefaultS3ClientFactory extends Configured
 
   private static final String S3_SERVICE_NAME = "s3";
 
+  private static final Pattern VPC_ENDPOINT_PATTERN =
+      Pattern.compile("^(?:.+\\.)?([a-z0-9-]+)\\.vpce\\.amazonaws\\.(?:com|com\\.cn)$");
+
   /**
* Subclasses refer to this.
*/
@@ -390,10 +395,19 @@ public class DefaultS3ClientFactory extends Configured
* @param endpointEndsWithCentral true if the endpoint is configured as central.
* @return the S3 region, null if unable to resolve from endpoint.
*/
-  private static Region getS3RegionFromEndpoint(final String endpoint,
+  @VisibleForTesting
+  static Region getS3RegionFromEndpoint(final String endpoint,
   final boolean endpointEndsWithCentral) {
 
 if (!endpointEndsWithCentral) {
+  // S3 VPC endpoint parsing
+  Matcher matcher = VPC_ENDPOINT_PATTERN.matcher(endpoint);
+  if (matcher.find()) {
+LOG.debug("Mapping to VPCE");
+LOG.debug("Endpoint {} is vpc endpoint; parsing region as {}", 
endpoint, matcher.group(1));
+return Region.of(matcher.group(1));
+  }
+
   LOG.debug("Endpoint {} is not the default; parsing", endpoint);
   return AwsHostNameUtils.parseSigningRegion(endpoint, S3_SERVICE_NAME).orElse(null);
 }
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java
index 8403b6bd6cb..d06224df5b3 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java
@@ -97,6 +97,8 @@ public class ITestS3AEndpointRegion extends AbstractS3ATestBase {
 
  private static final String VPC_ENDPOINT = "vpce-1a2b3c4d-5e6f.s3.us-west-2.vpce.amazonaws.com";

+  private static final String CN_VPC_ENDPOINT = "vpce-1a2b3c4d-5e6f.s3.cn-northwest-1.vpce.amazonaws.com.cn";
+
  public static final String EXCEPTION_THROWN_BY_INTERCEPTOR = "Exception thrown by interceptor";
 
   /**
@@ -294,7 +296,6 @@ public class ITestS3AEndpointRegion extends AbstractS3ATestBase {
   }
 
   @Test
-  @Ignore("Pending HADOOP-18938. S3A region logic to handle vpce and non 
standard endpoints")
   public void testWithVPCE() throws Throwable {
 describe("Test with vpc endpoint");
 Configuration conf = getConfiguration();
@@ -304,6 +305,16 @@ public class ITestS3AEndpointRegion extends AbstractS3ATestBase {
 expectInterceptorException(client);
   }
 
+  @Test
+  public void testWithChinaVPCE() throws Throwable {
+describe("Test with china vpc endpoint");
+Configuration conf = getConfiguration();
+
+S3Client client = createS3Client(conf, CN_VPC_ENDPOINT, null, CN_NORTHWEST_1, false);
+
+expectInterceptorException(client);
+  }
+
   @Test
  public void testCentralEndpointAndDifferentRegionThanBucket() throws Throwable {
 describe("Access public bucket using central endpoint and region "
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AEndpointParsing.

(hadoop) branch trunk updated (3bbfb2be089 -> 94868446104)

2024-09-04 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 3bbfb2be089 HADOOP-19257. S3A: ITestAssumeRole.testAssumeRoleBadInnerAuth failure (#7021)
 add 94868446104 HADOOP-16928. Make javadoc work on Java 17 (#6976)

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/conf/Configuration.java | 46 ++--
 .../org/apache/hadoop/fs/AbstractFileSystem.java   |  2 +-
 .../org/apache/hadoop/fs/ChecksumFileSystem.java   |  2 +-
 .../main/java/org/apache/hadoop/fs/ChecksumFs.java |  2 +-
 .../java/org/apache/hadoop/fs/FileContext.java | 32 -
 .../main/java/org/apache/hadoop/fs/FileSystem.java | 28 
 .../java/org/apache/hadoop/fs/RemoteIterator.java  |  4 +-
 .../java/org/apache/hadoop/io/EnumSetWritable.java | 18 ++---
 .../java/org/apache/hadoop/io/ObjectWritable.java  |  4 +-
 .../java/org/apache/hadoop/io/SequenceFile.java| 10 +--
 .../io/compress/bzip2/CBZip2InputStream.java   |  8 +--
 .../io/compress/bzip2/CBZip2OutputStream.java  | 82 +++---
 .../io/compress/zlib/BuiltInZlibDeflater.java  |  2 +-
 .../org/apache/hadoop/io/file/tfile/Chunk.java |  4 +-
 .../org/apache/hadoop/ipc/RpcClientException.java  |  2 +-
 .../java/org/apache/hadoop/ipc/RpcException.java   |  2 +-
 .../org/apache/hadoop/ipc/RpcServerException.java  |  2 +-
 .../hadoop/ipc/UnexpectedServerException.java  |  2 +-
 .../org/apache/hadoop/metrics2/package-info.java   | 26 +++
 .../main/java/org/apache/hadoop/net/NetUtils.java  |  6 +-
 .../hadoop/security/AccessControlException.java|  6 +-
 .../security/authorize/AuthorizationException.java |  6 +-
 .../apache/hadoop/util/GenericOptionsParser.java   |  2 +-
 .../apache/hadoop/util/InstrumentedReadLock.java   |  2 +-
 .../hadoop/util/InstrumentedReadWriteLock.java |  2 +-
 .../apache/hadoop/util/InstrumentedWriteLock.java  |  2 +-
 .../apache/hadoop/util/ShutdownThreadsHelper.java  | 16 ++---
 .../java/org/apache/hadoop/util/StringUtils.java   |  2 +-
 .../org/apache/hadoop/ipc/MiniRPCBenchmark.java| 12 ++--
 .../hdfs/client/impl/BlockReaderLocalLegacy.java   |  2 +-
 .../server/blockmanagement/DatanodeDescriptor.java |  2 +-
 .../server/namenode/EncryptionZoneManager.java |  2 +-
 .../hadoop/hdfs/server/namenode/NameNode.java  |  2 +-
 .../hdfs/server/namenode/snapshot/DiffList.java|  2 +-
 .../FileDistributionCalculator.java| 18 ++---
 .../FileDistributionVisitor.java   | 16 ++---
 .../java/org/apache/hadoop/hdfs/TestSafeMode.java  |  2 +-
 .../server/datanode/TestReadOnlySharedStorage.java |  6 +-
 .../v2/app/rm/preemption/AMPreemptionPolicy.java   |  2 +-
 .../org/apache/hadoop/mapred/FileOutputFormat.java | 16 ++---
 .../java/org/apache/hadoop/mapred/JobConf.java |  4 +-
 .../java/org/apache/hadoop/mapred/MapRunnable.java |  2 +-
 .../org/apache/hadoop/mapred/jobcontrol/Job.java   |  2 +-
 .../hadoop/mapred/join/CompositeInputFormat.java   | 12 ++--
 .../hadoop/mapred/join/CompositeRecordReader.java  |  4 +-
 .../hadoop/mapred/join/OverrideRecordReader.java   |  2 +-
 .../java/org/apache/hadoop/mapred/join/Parser.java |  2 +-
 .../hadoop/mapred/lib/TotalOrderPartitioner.java   |  2 +-
 .../mapreduce/lib/jobcontrol/ControlledJob.java|  2 +-
 .../mapreduce/lib/join/CompositeInputFormat.java   | 12 ++--
 .../mapreduce/lib/join/CompositeRecordReader.java  |  4 +-
 .../mapreduce/lib/join/OverrideRecordReader.java   |  2 +-
 .../apache/hadoop/mapreduce/lib/join/Parser.java   |  2 +-
 .../hadoop/mapreduce/lib/join/TupleWritable.java   |  2 +-
 .../mapreduce/lib/output/FileOutputFormat.java |  8 +--
 .../lib/partition/TotalOrderPartitioner.java   | 10 +--
 .../org/apache/hadoop/fs/AccumulatingReducer.java  |  8 +--
 .../java/org/apache/hadoop/fs/IOMapperBase.java|  4 +-
 .../java/org/apache/hadoop/fs/JHLogAnalyzer.java   | 42 +--
 .../org/apache/hadoop/examples/pi/package.html | 71 ++-
 hadoop-project/pom.xml | 23 --
 hadoop-tools/hadoop-aws/pom.xml|  1 -
 .../org/apache/hadoop/mapred/gridmix/FilePool.java |  2 +-
 .../hadoop/streaming/io/IdentifierResolver.java|  2 +-
 .../java/org/apache/hadoop/streaming/package.html  |  2 +-
 .../java/org/apache/hadoop/typedbytes/package.html |  8 ++-
 .../protocolrecords/SignalContainerRequest.java|  2 +-
 .../timelineservice/ServiceMetricsSink.java|  2 +-
 .../hadoop/yarn/security/AdminACLsManager.java |  4 +-
 .../apache/hadoop/yarn/util/BoundedAppender.java   |  2 +-
 .../hadoop/yarn/server/utils/LeveldbIterator.java  |  2 +-
 .../timelineservice/storage/common/BaseTable.java  |  2 +-
 72 files changed, 317 insertions(+), 336 deletions(-)



(hadoop) branch branch-3.4.1 updated: HADOOP-19257. S3A: ITestAssumeRole.testAssumeRoleBadInnerAuth failure (#7021)

2024-09-03 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4.1 by this push:
 new 98ad16f1c16 HADOOP-19257. S3A: ITestAssumeRole.testAssumeRoleBadInnerAuth failure (#7021)
98ad16f1c16 is described below

commit 98ad16f1c16ea7cdd8b750bdfae83c3bac3da1e3
Author: Steve Loughran 
AuthorDate: Tue Sep 3 21:20:47 2024 +0100

HADOOP-19257. S3A: ITestAssumeRole.testAssumeRoleBadInnerAuth failure (#7021)

Remove the error string matched on so that no future message change from AWS will trigger a regression.

Contributed by Steve Loughran
---
 .../src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java   | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
index 5aa72e69490..592529b553d 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
@@ -283,8 +283,7 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
 conf.set(SECRET_KEY, "not secret");
 expectFileSystemCreateFailure(conf,
 AWSBadRequestException.class,
-"not a valid " +
-"key=value pair (missing equal-sign) in Authorization header");
+"");
   }
 
   @Test





(hadoop) branch branch-3.4 updated: HADOOP-19257. S3A: ITestAssumeRole.testAssumeRoleBadInnerAuth failure (#7021)

2024-09-03 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new edfd10e0be7 HADOOP-19257. S3A: ITestAssumeRole.testAssumeRoleBadInnerAuth failure (#7021)
edfd10e0be7 is described below

commit edfd10e0be76f9daa4b8966bae64a37bf7e49cec
Author: Steve Loughran 
AuthorDate: Tue Sep 3 21:20:47 2024 +0100

HADOOP-19257. S3A: ITestAssumeRole.testAssumeRoleBadInnerAuth failure (#7021)

Remove the error string matched on so that no future message change from AWS will trigger a regression.

Contributed by Steve Loughran
---
 .../src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java   | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
index 5aa72e69490..592529b553d 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
@@ -283,8 +283,7 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
 conf.set(SECRET_KEY, "not secret");
 expectFileSystemCreateFailure(conf,
 AWSBadRequestException.class,
-"not a valid " +
-"key=value pair (missing equal-sign) in Authorization header");
+"");
   }
 
   @Test





(hadoop) branch trunk updated (1655acc5e2d -> 3bbfb2be089)

2024-09-03 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 1655acc5e2d HADOOP-19250. [Addendum] Fix test TestServiceInterruptHandling.testRegisterAndRaise. (#7008)
 add 3bbfb2be089 HADOOP-19257. S3A: ITestAssumeRole.testAssumeRoleBadInnerAuth failure (#7021)

No new revisions were added by this update.

Summary of changes:
 .../src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java   | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)





(hadoop-thirdparty) branch trunk updated: HADOOP-19252. Release hadoop-thirdparty 1.3.0: version update on trunk

2024-09-02 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop-thirdparty.git


The following commit(s) were added to refs/heads/trunk by this push:
 new faa8afc  HADOOP-19252. Release hadoop-thirdparty 1.3.0: version update on trunk
faa8afc is described below

commit faa8afc43ac467335ff111ec16ce76f6d30eea25
Author: Steve Loughran 
AuthorDate: Fri Aug 16 17:02:02 2024 +0100

HADOOP-19252. Release hadoop-thirdparty 1.3.0: version update on trunk
---
 hadoop-shaded-avro_1_11/pom.xml | 2 +-
 hadoop-shaded-guava/pom.xml | 2 +-
 hadoop-shaded-protobuf_3_25/pom.xml | 2 +-
 pom.xml | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/hadoop-shaded-avro_1_11/pom.xml b/hadoop-shaded-avro_1_11/pom.xml
index 12e4dab..5a07faf 100644
--- a/hadoop-shaded-avro_1_11/pom.xml
+++ b/hadoop-shaded-avro_1_11/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hadoop-thirdparty</artifactId>
     <groupId>org.apache.hadoop.thirdparty</groupId>
-    <version>1.3.0-SNAPSHOT</version>
+    <version>1.4.0-SNAPSHOT</version>
     <relativePath>..</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
diff --git a/hadoop-shaded-guava/pom.xml b/hadoop-shaded-guava/pom.xml
index fcfbb60..b03646f 100644
--- a/hadoop-shaded-guava/pom.xml
+++ b/hadoop-shaded-guava/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hadoop-thirdparty</artifactId>
     <groupId>org.apache.hadoop.thirdparty</groupId>
-    <version>1.3.0-SNAPSHOT</version>
+    <version>1.4.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
diff --git a/hadoop-shaded-protobuf_3_25/pom.xml b/hadoop-shaded-protobuf_3_25/pom.xml
index 3e9a98d..9a90d5a 100644
--- a/hadoop-shaded-protobuf_3_25/pom.xml
+++ b/hadoop-shaded-protobuf_3_25/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hadoop-thirdparty</artifactId>
     <groupId>org.apache.hadoop.thirdparty</groupId>
-    <version>1.3.0-SNAPSHOT</version>
+    <version>1.4.0-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
diff --git a/pom.xml b/pom.xml
index e98cf77..98879d8 100644
--- a/pom.xml
+++ b/pom.xml
@@ -23,7 +23,7 @@
   <modelVersion>4.0.0</modelVersion>
   <groupId>org.apache.hadoop.thirdparty</groupId>
   <artifactId>hadoop-thirdparty</artifactId>
-  <version>1.3.0-SNAPSHOT</version>
+  <version>1.4.0-SNAPSHOT</version>
   <parent>
     <groupId>org.apache</groupId>
     <artifactId>apache</artifactId>





(hadoop) branch trunk updated: HADOOP-19250. [Addendum] Fix test TestServiceInterruptHandling.testRegisterAndRaise. (#7008)

2024-08-30 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 1655acc5e2d HADOOP-19250. [Addendum] Fix test TestServiceInterruptHandling.testRegisterAndRaise. (#7008)
1655acc5e2d is described below

commit 1655acc5e2d5fe27e01f46ea02bd5a7dea44fe12
Author: zhengchenyu 
AuthorDate: Fri Aug 30 19:05:13 2024 +0800

HADOOP-19250. [Addendum] Fix test TestServiceInterruptHandling.testRegisterAndRaise. (#7008)


Contributed by Chenyu Zheng
---
 .../apache/hadoop/service/launcher/TestServiceInterruptHandling.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/service/launcher/TestServiceInterruptHandling.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/service/launcher/TestServiceInterruptHandling.java
index c21fa8b7307..8181e07fae0 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/service/launcher/TestServiceInterruptHandling.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/service/launcher/TestServiceInterruptHandling.java
@@ -38,7 +38,7 @@ public class TestServiceInterruptHandling
   @Test
   public void testRegisterAndRaise() throws Throwable {
 InterruptCatcher catcher = new InterruptCatcher();
-String name = IrqHandler.CONTROL_C;
+String name = "USR2";
 IrqHandler irqHandler = new IrqHandler(name, catcher);
 irqHandler.bind();
 assertEquals(0, irqHandler.getSignalCount());
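
For background, the signal name here is the JVM's name without the "SIG" prefix. A minimal sketch of registering a handler and raising a signal in-process, as the test does; it assumes a JVM where sun.misc.Signal is available and is not the Hadoop IrqHandler code itself:

import sun.misc.Signal;

public class SignalExample {
  public static void main(String[] args) throws Exception {
    // Register a handler for SIGUSR2, then raise it in this process.
    Signal usr2 = new Signal("USR2");
    Signal.handle(usr2, sig -> System.out.println("caught " + sig.getName()));
    Signal.raise(usr2);
    Thread.sleep(100); // signal handlers run asynchronously
  }
}

The switch from CONTROL_C (SIGINT) to USR2 presumably avoids clashing with signals the test environment itself handles.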





(hadoop) branch trunk updated: HADOOP-19252. Upgrade hadoop-thirdparty to 1.3.0 (#7007)

2024-08-30 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new b404c8c8f80 HADOOP-19252. Upgrade hadoop-thirdparty to 1.3.0 (#7007)
b404c8c8f80 is described below

commit b404c8c8f80d015edf48c674463ed57a9af6c55c
Author: Steve Loughran 
AuthorDate: Fri Aug 30 11:50:51 2024 +0100

HADOOP-19252. Upgrade hadoop-thirdparty to 1.3.0 (#7007)


Update the version of hadoop-thirdparty to 1.3.0
across all shaded artifacts used.

This synchronizes the shaded protobuf library with those of
all other shaded artifacts (guava, avro)

Contributed by Steve Loughran
---
 LICENSE-binary | 15 ---
 hadoop-project/pom.xml |  4 ++--
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index cc018ed265b..a716db70f72 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -233,19 +233,19 @@ com.google:guice:5.1.0
 com.google:guice-servlet:5.1.0
 com.google.api.grpc:proto-google-common-protos:1.0.0
 com.google.code.gson:2.9.0
-com.google.errorprone:error_prone_annotations:2.2.0
-com.google.j2objc:j2objc-annotations:1.1
+com.google.errorprone:error_prone_annotations:2.5.1
+com.google.j2objc:j2objc-annotations:1.3
 com.google.json-simple:json-simple:1.1.1
 com.google.guava:failureaccess:1.0
 com.google.guava:guava:20.0
-com.google.guava:guava:27.0-jre
+com.google.guava:guava:32.0.1-jre
com.google.guava:listenablefuture:9999.0-empty-to-avoid-conflict-with-guava
 com.microsoft.azure:azure-storage:7.0.0
 com.nimbusds:nimbus-jose-jwt:9.37.2
 com.zaxxer:HikariCP:4.0.3
 commons-beanutils:commons-beanutils:1.9.4
 commons-cli:commons-cli:1.5.0
-commons-codec:commons-codec:1.11
+commons-codec:commons-codec:1.15
 commons-collections:commons-collections:3.2.2
 commons-daemon:commons-daemon:1.0.13
 commons-io:commons-io:2.16.1
@@ -298,6 +298,7 @@ javax.inject:javax.inject:1
 net.java.dev.jna:jna:5.2.0
 net.minidev:accessors-smart:1.2
 org.apache.avro:avro:1.9.2
+org.apache.avro:avro:1.11.3
 org.apache.commons:commons-collections4:4.2
 org.apache.commons:commons-compress:1.26.1
 org.apache.commons:commons-configuration2:2.10.1
@@ -362,7 +363,7 @@ org.objenesis:objenesis:2.6
 org.xerial.snappy:snappy-java:1.1.10.4
 org.yaml:snakeyaml:2.0
 org.wildfly.openssl:wildfly-openssl:1.1.3.Final
-software.amazon.awssdk:bundle:jar:2.25.53
+software.amazon.awssdk:bundle:2.25.53
 
 
 

@@ -395,7 +396,7 @@ 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/d3-3.5.17.min.js
 leveldb v1.13
 
 com.google.protobuf:protobuf-java:2.5.0
-com.google.protobuf:protobuf-java:3.21.12
+com.google.protobuf:protobuf-java:3.25.3
 com.google.re2j:re2j:1.1
 com.jcraft:jsch:0.1.55
 com.thoughtworks.paranamer:paranamer:2.3
@@ -485,7 +486,7 @@ com.microsoft.sqlserver:mssql-jdbc:6.2.1.jre7
 org.bouncycastle:bcpkix-jdk18on:1.78.1
 org.bouncycastle:bcprov-jdk18on:1.78.1
 org.bouncycastle:bcutil-jdk18on:1.78.1
-org.checkerframework:checker-qual:2.5.2
+org.checkerframework:checker-qual:3.8.0
 org.codehaus.mojo:animal-sniffer-annotations:1.21
 org.jruby.jcodings:jcodings:1.0.13
 org.jruby.joni:joni:2.1.2
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 05dccb62985..33533dbbaed 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -96,8 +96,8 @@
     <hadoop.protobuf.version>3.23.4</hadoop.protobuf.version>
     <protoc.path>${env.HADOOP_PROTOC_PATH}</protoc.path>
 
-    <hadoop-thirdparty.version>1.2.0</hadoop-thirdparty.version>
-    <hadoop-thirdparty-protobuf.version>1.3.0-SNAPSHOT</hadoop-thirdparty-protobuf.version>
+    <hadoop-thirdparty.version>1.3.0</hadoop-thirdparty.version>
+    <hadoop-thirdparty-protobuf.version>${hadoop-thirdparty.version}</hadoop-thirdparty-protobuf.version>
     <hadoop-thirdparty-guava.version>${hadoop-thirdparty.version}</hadoop-thirdparty-guava.version>
     <hadoop-thirdparty-shaded-prefix>org.apache.hadoop.thirdparty</hadoop-thirdparty-shaded-prefix>
     <hadoop-thirdparty-shaded-protobuf-prefix>${hadoop-thirdparty-shaded-prefix}.protobuf</hadoop-thirdparty-shaded-protobuf-prefix>
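
Pinning every shaded artifact to the same hadoop-thirdparty release matters because consumers import the relocated packages, not com.google.protobuf. A minimal sketch of code compiled against the shaded protobuf artifact, assuming org.apache.hadoop.thirdparty:hadoop-shaded-protobuf_3_25 is on the classpath; ShadedProtobufDemo is a hypothetical class name:

```java
// The relocated package is provided by hadoop-shaded-protobuf_3_25, so it
// cannot clash with an application's own com.google.protobuf dependency.
import org.apache.hadoop.thirdparty.protobuf.ByteString;

public final class ShadedProtobufDemo {
  public static void main(String[] args) {
    ByteString bytes = ByteString.copyFromUtf8("hello");
    System.out.println(bytes.size());  // prints 5
  }
}
```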


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch branch-3.4 updated: HADOOP-19248. Protobuf code generate and replace should happen together (#6975)

2024-08-30 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new 6bee9db424a HADOOP-19248. Protobuf code generate and replace should 
happen together (#6975)
6bee9db424a is described below

commit 6bee9db424a804ac08847dd53001ab39f568229e
Author: Cheng Pan 
AuthorDate: Fri Aug 30 18:30:00 2024 +0800

HADOOP-19248. Protobuf code generate and replace should happen together 
(#6975)


Contributed by Cheng Pan
---
 hadoop-project/pom.xml | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index c4aa1a39018..e92e41ac99e 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -2287,7 +2287,7 @@
   
 
   replace-generated-sources
-  process-sources
+  generate-sources
   
 replace
   
@@ -2307,7 +2307,7 @@
 
 
   replace-generated-test-sources
-  process-test-resources
+  generate-test-resources
   
 replace
   
@@ -2327,7 +2327,7 @@
 
 
   replace-sources
-  process-sources
+  generate-sources
   
 replace
   
@@ -2347,7 +2347,7 @@
 
 
   replace-test-sources
-  process-test-sources
+  generate-test-sources
   
 replace
   


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch trunk updated: HADOOP-19248. Protobuf code generate and replace should happen together (#6975)

2024-08-28 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 0aab1a29764 HADOOP-19248. Protobuf code generate and replace should 
happen together (#6975)
0aab1a29764 is described below

commit 0aab1a297647688173a024b003e88e98d9ae92ad
Author: Cheng Pan 
AuthorDate: Thu Aug 29 03:18:46 2024 +0800

HADOOP-19248. Protobuf code generate and replace should happen together 
(#6975)


 Contributed by Cheng Pan
---
 hadoop-project/pom.xml | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 4c69012f08d..05dccb62985 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -2309,7 +2309,7 @@
   
 
   replace-generated-sources
-  process-sources
+  generate-sources
   
 replace
   
@@ -2329,7 +2329,7 @@
 
 
   replace-generated-test-sources
-  process-test-resources
+  generate-test-resources
   
 replace
   
@@ -2349,7 +2349,7 @@
 
 
   replace-sources
-  process-sources
+  generate-sources
   
 replace
   
@@ -2369,7 +2369,7 @@
 
 
   replace-test-sources
-  process-test-sources
+  generate-test-sources
   
 replace
   


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop-thirdparty) annotated tag rel/release-1.3.0 updated (0fd6290 -> 1c615e7)

2024-08-28 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to annotated tag rel/release-1.3.0
in repository https://gitbox.apache.org/repos/asf/hadoop-thirdparty.git


*** WARNING: tag rel/release-1.3.0 was modified! ***

from 0fd6290  (commit)
  to 1c615e7  (tag)
 tagging 0fd62903b071b5186f31b7030ce42e1c00f6bb6a (commit)
  by Steve Loughran
  on Wed Aug 28 18:57:18 2024 +0100

- Log -
HADOOP-19252. Hadoop Thirdparty 1.3.0 release
-BEGIN PGP SIGNATURE-

iQIzBAABCgAdFiEEOCN+5CUFAoUHfbV60iz4RtuxYqAFAmbPZP4ACgkQ0iz4Rtux
YqCX+xAAqTXl1QM6se/CUFgfjko4yW8DIDfpnUMCqMaU9nl8esfn0QL0gXtgaFFK
rjTDfQf1wRXj+A3bb30Fv8ByXL0pIPJcxA/YiLdFQrn7JwxwkKfbq1tMNOXoroiQ
/6gLsxQk9xhYvIWwugPkG+Y4hsNt+oa/6NZo7+gphXdni+WZU8huKGXvaw+QsvGz
DOPVfehRYH9YLFWRQdlH0k66n5h0BYLjv5zxj5UkX279wHbMvmj59TgjqrBv2VdR
+sKnI0ohpFZHa9MxmEVVBbGaQvUiNRx3vNJSlS90KvTDB6tr6Aj2y/ruUT/KWZiy
nNzi0PVY4NBzqBPAv55FIslw9Vk+Vz3hCuOYftKH20n4tEuRahZRuGUUw/S4nAY9
Pvhl8JclVXtS1QTWgdBKSBOSSSBm5OLQuTNoKfsxdh+Wb7GW3JonEUdFkGNX7lU3
Dlds3+kmjHOFdhu0K7zB6sOWPClfn/yum8rWof1BkwRrYSXXIxNHhPN29ul25oIb
5i73F6ReKvb0aj70n0FL5N6pq4usdxrhi/iaEM44YtRZ83qXQ5Pr6QpRi0p21BiB
S9Zlc44WPbBYtSwLumqxWi0Ru529bgGHQpa8mvXBZO3TRI8kEt/8Zn+Uq71T6kHZ
9gsoUHwg/WVNrnKYz7lJBYVmCdpj9OmKpjes1r3KU2qkfjTzrtw=
=9Pxi
-END PGP SIGNATURE-
---


No new revisions were added by this update.

Summary of changes:


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



svn commit: r71135 - /dev/hadoop/thirdparty-1.3.0-RC1/ /release/hadoop/thirdparty/thirdparty-1.3.0/

2024-08-28 Thread stevel
Author: stevel
Date: Wed Aug 28 17:38:59 2024
New Revision: 71135

Log:
HADOOP-19252. Releasing Hadoop Thirdparty 1.3.0

Added:
release/hadoop/thirdparty/thirdparty-1.3.0/
  - copied from r71134, dev/hadoop/thirdparty-1.3.0-RC1/
Removed:
dev/hadoop/thirdparty-1.3.0-RC1/


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



svn commit: r71122 - in /dev/hadoop: hadoop-thirdparty-1.3.0-RC1/ thirdparty-1.3.0-RC1/

2024-08-27 Thread stevel
Author: stevel
Date: Tue Aug 27 17:43:01 2024
New Revision: 71122

Log:
HADOOP-19252. preparing thirdparty release

Added:
dev/hadoop/thirdparty-1.3.0-RC1/
  - copied from r71121, dev/hadoop/hadoop-thirdparty-1.3.0-RC1/
Removed:
dev/hadoop/hadoop-thirdparty-1.3.0-RC1/


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



svn commit: r71121 - in /dev/hadoop: hadoop-3.3.6/ hadoop-3.3.7-aws/ hadoop-3.4.0-RC3/ hadoop-thirdparty-1.2.0-RC0/ hadoop-thirdparty-1.2.0-RC1/

2024-08-27 Thread stevel
Author: stevel
Date: Tue Aug 27 17:28:39 2024
New Revision: 71121

Log:
remove all obsolete RCs

Removed:
dev/hadoop/hadoop-3.3.6/
dev/hadoop/hadoop-3.3.7-aws/
dev/hadoop/hadoop-3.4.0-RC3/
dev/hadoop/hadoop-thirdparty-1.2.0-RC0/
dev/hadoop/hadoop-thirdparty-1.2.0-RC1/


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop-release-support) branch main updated: HADOOP-19252. hadoop-thirdparty 1.3.0-RC1

2024-08-22 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/hadoop-release-support.git


The following commit(s) were added to refs/heads/main by this push:
 new ff7669e  HADOOP-19252. hadoop-thirdparty 1.3.0-RC1
ff7669e is described below

commit ff7669e03bb6e0b0bfdfc193a69784dfc2e8c2cd
Author: Steve Loughran 
AuthorDate: Tue Aug 20 19:03:36 2024 +0100

HADOOP-19252. hadoop-thirdparty 1.3.0-RC1

added initial targets here
---
 README.md  | 51 
 build.xml  | 90 --
 pom.xml|  9 ++-
 release.properties |  2 +-
 .../3p-release-1.3.0.properties}   | 26 +++
 src/releases/release-info-3.4.1.properties | 12 +--
 src/text/3p.vote.txt   | 19 +
 7 files changed, 165 insertions(+), 44 deletions(-)

diff --git a/README.md b/README.md
index 967bb7c..374b564 100644
--- a/README.md
+++ b/README.md
@@ -29,7 +29,7 @@ the classpath.
 
 Installed applications/platforms
 
-* Java 8+. Later releases are valid for validation too.
+* Java 8+. Later releases are valid for validation too (and required for some 
projects)
 * Apache Ant.
 * Apache maven
 * gpg
@@ -82,7 +82,7 @@ the classpath.
 This is an optional property file which contains all user-specific 
customizations
 and options to assist in the release process.
 
-This file is *not* SCM-managed.
+This file is *not* SCM-managed (it is explicitly ignored).
 
 It is read before all other property files are read/ant properties
 set, so can override any subsequent declarations.
@@ -97,7 +97,7 @@ path to the latest release being worked on in this branch.
 2. It is read after `build.properties`
 
 ```properties
-release.version=3.4.0
+release.version=3.4.1
 ```
 
 Ant uses this to set the property `release.info.file` to the path
@@ -233,8 +233,9 @@ Update as appropriate.
 
 ### Update `/release.properties`
 
-Update the value of `release.info.file` in `/release.properties` to
-point to the newly created file.
+Update the value of `release.version` in `/release.properties` to
+declare the release version. This is used to determine the specific release 
properties
+file for that version.
 
 ```properties
 release.version=X.Y.Z
@@ -262,14 +263,13 @@ scp.hadoop.dir=hadoop
 staging.dir=/Users/stevel/hadoop/release/staging
 
 # where various modules live for build and test
-spark.dir=/Users/stevel/Projects/sparkwork/spark
-cloud-examples.dir=/Users/stevel/Projects/sparkwork/cloud-integration/cloud-examples
-cloud.test.configuration.file=/Users/stevel/Projects/config/cloud-test-configs/s3a.xml
-bigdata-interop.dir=/Users/stevel/Projects/gcs/bigdata-interop
-hboss.dir=/Users/stevel/Projects/hbasework/hbase-filesystem
-cloudstore.dir=/Users/stevel/Projects/cloudstore
-fs-api-shim.dir=/Users/stevel/Projects/Formats/fs-api-shim/
-
+spark.dir=/Users/stevel/dev/spark
+cloud-examples.dir=/Users/stevel/dev/sparkwork/cloud-integration/cloud-examples
+cloud.test.configuration.file=/Users/stevel/dev/config/test-configs/s3a.xml
+bigdata-interop.dir=/Users/stevel/dev/gcs/bigdata-interop
+hboss.dir=/Users/stevel/dev/hbasework/hbase-filesystem
+cloudstore.dir=/Users/stevel/dev/cloudstore
+fs-api-shim.dir=/Users/stevel/dev/Formats/fs-api-shim/
 ```
 
 ### Clean up first
@@ -561,7 +561,7 @@ pending release version
 ant mvn-purge
 ```
 
-## Execute the maven test.
+## Execute a maven test run
 
 Download the artifacts from maven staging repositories and compile/test a 
minimal application
 
@@ -923,6 +923,29 @@ ant stage-svn-rollback
 ant stage-svn-log
 ```
 
+# Releasing Hadoop Third party
+
+See wiki page [How To Release 
Hadoop-Thirdparty](https://cwiki.apache.org/confluence/display/HADOOP2/How+To+Release+Hadoop-Thirdparty)
+
+
+Support for this release workflow is pretty minimal, but releasing it is 
simpler
+
+* Update the branches and maven artifact versions
+* build/test. This can be done with the help of a PR to upgrade hadoop.
+* create the vote message.
+
+## Configuration options
+
+All options are prefixed `3p.`
+
+## Targets:
+
+All targets are prefixed `3p.`
+
+```
+3p.mvn-purge : remove all third party artifacts from the repo 
+```
+
 # Contributing to this module
 
 There are lots of opportunities to contribute to the module
diff --git a/build.xml b/build.xml
index 1c97a33..9b8780e 100644
--- a/build.xml
+++ b/build.xml
@@ -173,6 +173,12 @@
 
 
+
+
+
+
 
 
 
@@ -297,10 +303,6 @@
   
   
-
-
 
 
   deleting ${hadoop.artifacts}/**/${hadoop.version}/*
@@ -361,9 +363,14 @@
 
 
 
+
+
 
 
+
 
   
 
@@ -397,7 +404,7 @@
 
 
   
+description="copy the downloaded artifacts from incoming to release dir">
 
 
   
@@ -405,6 +412,7 @@
  

(hadoop) branch branch-3.4.1 updated: HADOOP-18542. Keep MSI tenant ID and client ID optional (#4262)

2024-08-21 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4.1 by this push:
 new 6e4cc8be043 HADOOP-18542. Keep MSI tenant ID and client ID optional 
(#4262)
6e4cc8be043 is described below

commit 6e4cc8be04348a25e92ce2be8c61519fda6bf764
Author: Carl Levasseur 
AuthorDate: Wed Aug 21 15:15:28 2024 +0200

HADOOP-18542. Keep MSI tenant ID and client ID optional (#4262)

Contributed by Carl Levasseur
---
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  4 +--
 .../fs/azurebfs/TestAccountConfiguration.java  | 33 +-
 2 files changed, 28 insertions(+), 9 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index 3f5e7b0e69a..43923f758f9 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -962,9 +962,9 @@ public class AbfsConfiguration{
   FS_AZURE_ACCOUNT_OAUTH_MSI_ENDPOINT,
   AuthConfigurations.DEFAULT_FS_AZURE_ACCOUNT_OAUTH_MSI_ENDPOINT);
   String tenantGuid =
-  getMandatoryPasswordString(FS_AZURE_ACCOUNT_OAUTH_MSI_TENANT);
+  getPasswordString(FS_AZURE_ACCOUNT_OAUTH_MSI_TENANT);
   String clientId =
-  getMandatoryPasswordString(FS_AZURE_ACCOUNT_OAUTH_CLIENT_ID);
+  getPasswordString(FS_AZURE_ACCOUNT_OAUTH_CLIENT_ID);
   String authority = getTrimmedPasswordString(
   FS_AZURE_ACCOUNT_OAUTH_MSI_AUTHORITY,
   AuthConfigurations.DEFAULT_FS_AZURE_ACCOUNT_OAUTH_MSI_AUTHORITY);
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
index 17da772d081..483a7e3d5d5 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
@@ -27,6 +27,7 @@ import org.apache.hadoop.conf.Configuration;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.ConfigurationPropertyNotFoundException;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidConfigurationValueException;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException;
+import org.apache.hadoop.fs.azurebfs.oauth2.AccessTokenProvider;
 import org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider;
 import org.apache.hadoop.fs.azurebfs.oauth2.CustomTokenProviderAdapter;
 import org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider;
@@ -66,6 +67,7 @@ import static org.junit.Assert.assertNull;
  */
 public class TestAccountConfiguration {
   private static final String TEST_OAUTH_PROVIDER_CLASS_CONFIG = 
"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider";
+  private static final String TEST_OAUTH_MSI_TOKEN_PROVIDER_CLASS_CONFIG = 
"org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider";
   private static final String TEST_CUSTOM_PROVIDER_CLASS_CONFIG = 
"org.apache.hadoop.fs.azurebfs.oauth2.RetryTestTokenProvider";
   private static final String TEST_SAS_PROVIDER_CLASS_CONFIG_1 = 
"org.apache.hadoop.fs.azurebfs.extensions.MockErrorSASTokenProvider";
   private static final String TEST_SAS_PROVIDER_CLASS_CONFIG_2 = 
"org.apache.hadoop.fs.azurebfs.extensions.MockSASTokenProvider";
@@ -90,11 +92,6 @@ public class TestAccountConfiguration {
   FS_AZURE_ACCOUNT_OAUTH_USER_NAME,
   FS_AZURE_ACCOUNT_OAUTH_USER_PASSWORD));
 
-  private static final List<String> MSI_TOKEN_OAUTH_CONFIG_KEYS =
-  Collections.unmodifiableList(Arrays.asList(
-  FS_AZURE_ACCOUNT_OAUTH_MSI_TENANT,
-  FS_AZURE_ACCOUNT_OAUTH_CLIENT_ID));
-
   private static final List<String> REFRESH_TOKEN_OAUTH_CONFIG_KEYS =
   Collections.unmodifiableList(Arrays.asList(
   FS_AZURE_ACCOUNT_OAUTH_REFRESH_TOKEN,
@@ -410,10 +407,8 @@ public class TestAccountConfiguration {
   public void testOAuthConfigPropNotFound() throws Throwable {
 testConfigPropNotFound(CLIENT_CREDENTIAL_OAUTH_CONFIG_KEYS, 
ClientCredsTokenProvider.class.getName());
 testConfigPropNotFound(USER_PASSWORD_OAUTH_CONFIG_KEYS, 
UserPasswordTokenProvider.class.getName());
-testConfigPropNotFound(MSI_TOKEN_OAUTH_CONFIG_KEYS, 
MsiTokenProvider.class.getName());
 testConfigPropNotFound(REFRESH_TOKEN_OAUTH_CONFIG_KEYS, 
RefreshTokenBasedTokenProvider.class.getName());
 testConf

(hadoop) 02/02: HADOOP-18542. Keep MSI tenant ID and client ID optional (#4262)

2024-08-21 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit e4d46d89e751f84bf09820cdeb5201af9967d3cf
Author: Carl Levasseur 
AuthorDate: Wed Aug 21 15:15:28 2024 +0200

HADOOP-18542. Keep MSI tenant ID and client ID optional (#4262)

Contributed by Carl Levasseur
---
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  4 +--
 .../fs/azurebfs/TestAccountConfiguration.java  | 33 +-
 2 files changed, 28 insertions(+), 9 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index 3f5e7b0e69a..43923f758f9 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -962,9 +962,9 @@ public class AbfsConfiguration{
   FS_AZURE_ACCOUNT_OAUTH_MSI_ENDPOINT,
   AuthConfigurations.DEFAULT_FS_AZURE_ACCOUNT_OAUTH_MSI_ENDPOINT);
   String tenantGuid =
-  getMandatoryPasswordString(FS_AZURE_ACCOUNT_OAUTH_MSI_TENANT);
+  getPasswordString(FS_AZURE_ACCOUNT_OAUTH_MSI_TENANT);
   String clientId =
-  getMandatoryPasswordString(FS_AZURE_ACCOUNT_OAUTH_CLIENT_ID);
+  getPasswordString(FS_AZURE_ACCOUNT_OAUTH_CLIENT_ID);
   String authority = getTrimmedPasswordString(
   FS_AZURE_ACCOUNT_OAUTH_MSI_AUTHORITY,
   AuthConfigurations.DEFAULT_FS_AZURE_ACCOUNT_OAUTH_MSI_AUTHORITY);
diff --git 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
index 17da772d081..483a7e3d5d5 100644
--- 
a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java
@@ -27,6 +27,7 @@ import org.apache.hadoop.conf.Configuration;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.ConfigurationPropertyNotFoundException;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidConfigurationValueException;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException;
+import org.apache.hadoop.fs.azurebfs.oauth2.AccessTokenProvider;
 import org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider;
 import org.apache.hadoop.fs.azurebfs.oauth2.CustomTokenProviderAdapter;
 import org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider;
@@ -66,6 +67,7 @@ import static org.junit.Assert.assertNull;
  */
 public class TestAccountConfiguration {
   private static final String TEST_OAUTH_PROVIDER_CLASS_CONFIG = 
"org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider";
+  private static final String TEST_OAUTH_MSI_TOKEN_PROVIDER_CLASS_CONFIG = 
"org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider";
   private static final String TEST_CUSTOM_PROVIDER_CLASS_CONFIG = 
"org.apache.hadoop.fs.azurebfs.oauth2.RetryTestTokenProvider";
   private static final String TEST_SAS_PROVIDER_CLASS_CONFIG_1 = 
"org.apache.hadoop.fs.azurebfs.extensions.MockErrorSASTokenProvider";
   private static final String TEST_SAS_PROVIDER_CLASS_CONFIG_2 = 
"org.apache.hadoop.fs.azurebfs.extensions.MockSASTokenProvider";
@@ -90,11 +92,6 @@ public class TestAccountConfiguration {
   FS_AZURE_ACCOUNT_OAUTH_USER_NAME,
   FS_AZURE_ACCOUNT_OAUTH_USER_PASSWORD));
 
-  private static final List<String> MSI_TOKEN_OAUTH_CONFIG_KEYS =
-  Collections.unmodifiableList(Arrays.asList(
-  FS_AZURE_ACCOUNT_OAUTH_MSI_TENANT,
-  FS_AZURE_ACCOUNT_OAUTH_CLIENT_ID));
-
   private static final List<String> REFRESH_TOKEN_OAUTH_CONFIG_KEYS =
   Collections.unmodifiableList(Arrays.asList(
   FS_AZURE_ACCOUNT_OAUTH_REFRESH_TOKEN,
@@ -410,10 +407,8 @@ public class TestAccountConfiguration {
   public void testOAuthConfigPropNotFound() throws Throwable {
 testConfigPropNotFound(CLIENT_CREDENTIAL_OAUTH_CONFIG_KEYS, 
ClientCredsTokenProvider.class.getName());
 testConfigPropNotFound(USER_PASSWORD_OAUTH_CONFIG_KEYS, 
UserPasswordTokenProvider.class.getName());
-testConfigPropNotFound(MSI_TOKEN_OAUTH_CONFIG_KEYS, 
MsiTokenProvider.class.getName());
 testConfigPropNotFound(REFRESH_TOKEN_OAUTH_CONFIG_KEYS, 
RefreshTokenBasedTokenProvider.class.getName());
 testConfigPropNotFound(WORKLOAD_IDENTITY_OAUTH_CONFIG_KEYS, 
WorkloadIdentityTokenProvider.class.getName());
-
   }
 
   private void testConfigPropNotFound(List<String> configKeys,
@@ -444,6 +439,30 @@ public class TestAccountConfiguration {
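
The behavioural change is easy to miss in the diff above: the MSI tenant and client ID are now read with getPasswordString(), which returns null for an unset key, instead of getMandatoryPasswordString(), which throws. A minimal sketch of that contrast under simplified assumptions (a plain map stands in for Hadoop's Configuration, and ConfigDemo with its two methods is a hypothetical stand-in for AbfsConfiguration):

```java
import java.util.HashMap;
import java.util.Map;

public final class ConfigDemo {
  private final Map<String, String> conf = new HashMap<>();

  // Optional lookup: an absent key yields null, letting MSI auth fall back
  // to the VM's ambient identity (the behaviour this change restores).
  String getPasswordString(String key) {
    return conf.get(key);
  }

  // Mandatory lookup: an absent key fails fast, which is what previously
  // broke MSI setups that configure neither tenant nor client ID.
  String getMandatoryPasswordString(String key) {
    String v = conf.get(key);
    if (v == null) {
      throw new IllegalStateException("Missing configuration: " + key);
    }
    return v;
  }

  public static void main(String[] args) {
    ConfigDemo demo = new ConfigDemo();
    System.out.println(demo.getPasswordString("fs.azure.account.oauth2.msi.tenant"));  // null
    demo.getMandatoryPasswordString("fs.azure.account.oauth2.msi.tenant");  // throws
  }
}
```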
 

(hadoop) 01/02: HADOOP-19249. KMSClientProvider raises NPE with unauthed user (#6984)

2024-08-21 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit a455823e879804d839a6ee0e72a2779834729617
Author: dhavalshah9131 <35031652+dhavalshah9...@users.noreply.github.com>
AuthorDate: Tue Aug 20 18:33:05 2024 +0530

HADOOP-19249. KMSClientProvider raises NPE with unauthed user (#6984)

KMSClientProvider raises a NullPointerException when an unauthorised user
tries to perform the key operation

Contributed by Dhaval Shah
---
 .../org/apache/hadoop/crypto/key/kms/KMSClientProvider.java  | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
index f0c912224f9..10f7b428ad1 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.crypto.key.kms;
 
 import org.apache.commons.codec.binary.Base64;
+import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.crypto.key.KeyProvider;
@@ -561,17 +562,19 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
   }
   throw ex;
 }
+
 if ((conn.getResponseCode() == HttpURLConnection.HTTP_FORBIDDEN
-&& (conn.getResponseMessage().equals(ANONYMOUS_REQUESTS_DISALLOWED) ||
-conn.getResponseMessage().contains(INVALID_SIGNATURE)))
+&& (!StringUtils.isEmpty(conn.getResponseMessage())
+&& (conn.getResponseMessage().equals(ANONYMOUS_REQUESTS_DISALLOWED)
+|| conn.getResponseMessage().contains(INVALID_SIGNATURE))))
 || conn.getResponseCode() == HttpURLConnection.HTTP_UNAUTHORIZED) {
   // Ideally, this should happen only when there is an Authentication
   // failure. Unfortunately, the AuthenticationFilter returns 403 when it
   // cannot authenticate (Since a 401 requires Server to send
   // WWW-Authenticate header as well)..
   if (LOG.isDebugEnabled()) {
-LOG.debug("Response={}({}), resetting authToken",
-conn.getResponseCode(), conn.getResponseMessage());
+LOG.debug("Response={}, resetting authToken",
+conn.getResponseCode());
   }
   KMSClientProvider.this.authToken =
   new DelegationTokenAuthenticatedURL.Token();
@@ -798,6 +801,7 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
   @SuppressWarnings("rawtypes")
   @Override
   public KeyVersion decryptEncryptedKey(
+
   EncryptedKeyVersion encryptedKeyVersion) throws IOException,
   GeneralSecurityException 
{
 checkNotNull(encryptedKeyVersion.getEncryptionKeyVersionName(),
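
The guard added above exists because HttpURLConnection.getResponseMessage() is nullable: a server can send a status line without a reason phrase, and calling equals() or contains() on the missing message is exactly the NPE being fixed. A minimal sketch of the null-safe condition, with literal strings standing in for the ANONYMOUS_REQUESTS_DISALLOWED and INVALID_SIGNATURE constants (their exact values are an assumption here); NullSafeResponseCheck is a hypothetical class name:

```java
import org.apache.commons.lang3.StringUtils;

public final class NullSafeResponseCheck {
  // Mirrors the fixed condition: only inspect the message once it is known
  // to be non-empty, so a null reason phrase can no longer raise an NPE.
  static boolean isAuthFailure(int code, String message) {
    return (code == 403
        && !StringUtils.isEmpty(message)
        && (message.equals("Anonymous requests are disallowed")
            || message.contains("Invalid signature")))
        || code == 401;
  }

  public static void main(String[] args) {
    System.out.println(isAuthFailure(403, null));  // false (previously an NPE)
    System.out.println(isAuthFailure(403, "Anonymous requests are disallowed"));  // true
    System.out.println(isAuthFailure(401, null));  // true
  }
}
```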


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch branch-3.4 updated (337f2fc5660 -> e4d46d89e75)

2024-08-21 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 337f2fc5660 HADOOP-18962. Upgrade kafka to 3.4.0 (#6247)
 new a455823e879 HADOOP-19249. KMSClientProvider raises NPE with unauthed 
user (#6984)
 new e4d46d89e75 HADOOP-18542. Keep MSI tenant ID and client ID optional 
(#4262)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../hadoop/crypto/key/kms/KMSClientProvider.java   | 12 +---
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  4 +--
 .../fs/azurebfs/TestAccountConfiguration.java  | 33 +-
 3 files changed, 36 insertions(+), 13 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch trunk updated (012ae9d1aa0 -> 68fcd7234ca)

2024-08-21 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 012ae9d1aa0 HDFS-17606. Do not require implementing 
CustomizedCallbackHandler. (#7005)
 add 68fcd7234ca HADOOP-18542. Keep MSI tenant ID and client ID optional 
(#4262)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  4 +--
 .../fs/azurebfs/TestAccountConfiguration.java  | 33 +-
 2 files changed, 28 insertions(+), 9 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch trunk updated (33c9ecb6521 -> b15ed27cfbf)

2024-08-20 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 33c9ecb6521 HADOOP-19249. KMSClientProvider raises NPE with unauthed 
user (#6984)
 add b15ed27cfbf HADOOP-19187: [ABFS][FNSOverBlob] AbfsClient Refactoring 
to Support Multiple Implementation of Clients. (#6879)

No new revisions were added by this update.

Summary of changes:
 .../src/config/checkstyle-suppressions.xml |2 +
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |   83 +-
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java|   56 +-
 .../fs/azurebfs/AzureBlobFileSystemStore.java  |  102 +-
 .../fs/azurebfs/constants/AbfsHttpConstants.java   |2 +
 ...HttpOperationType.java => AbfsServiceType.java} |   19 +-
 .../fs/azurebfs/constants/ConfigurationKeys.java   |   28 +-
 .../fs/azurebfs/constants/FSOperationType.java |3 +-
 .../constants/FileSystemConfigurations.java|1 +
 .../azurebfs/constants/FileSystemUriSchemes.java   |5 +-
 .../InvalidConfigurationValueException.java|4 +
 .../hadoop/fs/azurebfs/services/AbfsClient.java| 1249 +++
 .../fs/azurebfs/services/AbfsClientHandler.java|  127 ++
 .../hadoop/fs/azurebfs/services/AbfsDfsClient.java | 1302 
 .../apache/hadoop/fs/azurebfs/utils/UriUtils.java  |   36 +
 .../hadoop-azure/src/site/markdown/fns_blob.md |   82 ++
 .../hadoop-azure/src/site/markdown/index.md|1 +
 .../fs/azurebfs/ITestAbfsCustomEncryption.java |3 +-
 .../ITestAzureBlobFileSystemCheckAccess.java   |9 +-
 .../ITestAzureBlobFileSystemInitAndCreate.java |   44 +-
 .../fs/azurebfs/ITestGetNameSpaceEnabled.java  |   14 +-
 .../fs/azurebfs/services/ITestAbfsClient.java  |9 +-
 22 files changed, 2266 insertions(+), 915 deletions(-)
 copy 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/{HttpOperationType.java
 => AbfsServiceType.java} (59%)
 create mode 100644 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientHandler.java
 create mode 100644 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsDfsClient.java
 create mode 100644 hadoop-tools/hadoop-azure/src/site/markdown/fns_blob.md


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch branch-3.4.1 updated (75845e66851 -> eb0732e0792)

2024-08-20 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch branch-3.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 75845e66851 HADOOP-19253. Google GCS compilation fails due to VectorIO 
changes (#7002)
 new 46336b3803c HADOOP-18962. Upgrade kafka to 3.4.0 (#6247)
 new eb0732e0792 HADOOP-19249. KMSClientProvider raises NPE with unauthed 
user (#6984)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 LICENSE-binary   |  4 ++--
 .../org/apache/hadoop/crypto/key/kms/KMSClientProvider.java  | 12 
 hadoop-project/pom.xml   |  2 +-
 3 files changed, 11 insertions(+), 7 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) 01/02: HADOOP-18962. Upgrade kafka to 3.4.0 (#6247)

2024-08-20 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 46336b3803cc7022f5a7bc081045cbedce7bf909
Author: Steve Loughran 
AuthorDate: Tue Aug 20 13:54:42 2024 +0100

HADOOP-18962. Upgrade kafka to 3.4.0 (#6247)

Upgrade Kafka Client due to CVEs

* CVE-2023-25194
* CVE-2021-38153
* CVE-2018-17196

Contributed by Murali Krishna
---
 LICENSE-binary | 4 ++--
 hadoop-project/pom.xml | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 252f934eac0..887e7070967 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -317,7 +317,7 @@ org.apache.htrace:htrace-core:3.1.0-incubating
 org.apache.htrace:htrace-core4:4.1.0-incubating
 org.apache.httpcomponents:httpclient:4.5.13
 org.apache.httpcomponents:httpcore:4.4.13
-org.apache.kafka:kafka-clients:2.8.2
+org.apache.kafka:kafka-clients:3.4.0
 org.apache.kerby:kerb-admin:2.0.3
 org.apache.kerby:kerb-client:2.0.3
 org.apache.kerby:kerb-common:2.0.3
@@ -377,7 +377,7 @@ 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/com
 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/tree.h
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/compat/{fstatat|openat|unlinkat}.h
 
-com.github.luben:zstd-jni:1.4.9-1
+com.github.luben:zstd-jni:1.5.2-1
 dnsjava:dnsjava:3.6.1
 org.codehaus.woodstox:stax2-api:4.2.1
 
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 9e08106bf0a..780b1461857 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -50,7 +50,7 @@
     <xerces.jdiff.version>2.12.2</xerces.jdiff.version>
 
-    <kafka.version>2.8.2</kafka.version>
+    <kafka.version>3.4.0</kafka.version>
 
     <commons-daemon.version>1.0.13</commons-daemon.version>
 
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) 02/02: HADOOP-19249. KMSClientProvider raises NPE with unauthed user (#6984)

2024-08-20 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit eb0732e07926bb706f2ecdc40a85c31fa814d22e
Author: dhavalshah9131 <35031652+dhavalshah9...@users.noreply.github.com>
AuthorDate: Tue Aug 20 18:33:05 2024 +0530

HADOOP-19249. KMSClientProvider raises NPE with unauthed user (#6984)

KMSClientProvider raises a NullPointerException when an unauthorised user
tries to perform the key operation

Contributed by Dhaval Shah
---
 .../org/apache/hadoop/crypto/key/kms/KMSClientProvider.java  | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
index f0c912224f9..10f7b428ad1 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.crypto.key.kms;
 
 import org.apache.commons.codec.binary.Base64;
+import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.crypto.key.KeyProvider;
@@ -561,17 +562,19 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
   }
   throw ex;
 }
+
 if ((conn.getResponseCode() == HttpURLConnection.HTTP_FORBIDDEN
-&& (conn.getResponseMessage().equals(ANONYMOUS_REQUESTS_DISALLOWED) ||
-conn.getResponseMessage().contains(INVALID_SIGNATURE)))
+&& (!StringUtils.isEmpty(conn.getResponseMessage())
+&& (conn.getResponseMessage().equals(ANONYMOUS_REQUESTS_DISALLOWED)
+|| conn.getResponseMessage().contains(INVALID_SIGNATURE))))
 || conn.getResponseCode() == HttpURLConnection.HTTP_UNAUTHORIZED) {
   // Ideally, this should happen only when there is an Authentication
   // failure. Unfortunately, the AuthenticationFilter returns 403 when it
   // cannot authenticate (Since a 401 requires Server to send
   // WWW-Authenticate header as well)..
   if (LOG.isDebugEnabled()) {
-LOG.debug("Response={}({}), resetting authToken",
-conn.getResponseCode(), conn.getResponseMessage());
+LOG.debug("Response={}, resetting authToken",
+conn.getResponseCode());
   }
   KMSClientProvider.this.authToken =
   new DelegationTokenAuthenticatedURL.Token();
@@ -798,6 +801,7 @@ public class KMSClientProvider extends KeyProvider 
implements CryptoExtension,
   @SuppressWarnings("rawtypes")
   @Override
   public KeyVersion decryptEncryptedKey(
+
   EncryptedKeyVersion encryptedKeyVersion) throws IOException,
   GeneralSecurityException 
{
 checkNotNull(encryptedKeyVersion.getEncryptionKeyVersionName(),


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch trunk updated (2fd7cf53fac -> 33c9ecb6521)

2024-08-20 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 2fd7cf53fac HADOOP-19253. Google GCS compilation fails due to VectorIO 
changes (#7002)
 add 33c9ecb6521 HADOOP-19249. KMSClientProvider raises NPE with unauthed 
user (#6984)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/hadoop/crypto/key/kms/KMSClientProvider.java  | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch branch-3.4 updated: HADOOP-18962. Upgrade kafka to 3.4.0 (#6247)

2024-08-20 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new 337f2fc5660 HADOOP-18962. Upgrade kafka to 3.4.0 (#6247)
337f2fc5660 is described below

commit 337f2fc5660b9c0b3b8708baa871175b36341b9b
Author: Steve Loughran 
AuthorDate: Tue Aug 20 13:54:42 2024 +0100

HADOOP-18962. Upgrade kafka to 3.4.0 (#6247)


Upgrade Kafka Client due to CVEs

* CVE-2023-25194
* CVE-2021-38153
* CVE-2018-17196

Contributed by Murali Krishna
---
 LICENSE-binary | 4 ++--
 hadoop-project/pom.xml | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 252f934eac0..887e7070967 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -317,7 +317,7 @@ org.apache.htrace:htrace-core:3.1.0-incubating
 org.apache.htrace:htrace-core4:4.1.0-incubating
 org.apache.httpcomponents:httpclient:4.5.13
 org.apache.httpcomponents:httpcore:4.4.13
-org.apache.kafka:kafka-clients:2.8.2
+org.apache.kafka:kafka-clients:3.4.0
 org.apache.kerby:kerb-admin:2.0.3
 org.apache.kerby:kerb-client:2.0.3
 org.apache.kerby:kerb-common:2.0.3
@@ -377,7 +377,7 @@ 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/com
 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/tree.h
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/compat/{fstatat|openat|unlinkat}.h
 
-com.github.luben:zstd-jni:1.4.9-1
+com.github.luben:zstd-jni:1.5.2-1
 dnsjava:dnsjava:3.6.1
 org.codehaus.woodstox:stax2-api:4.2.1
 
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 5b1e55afd91..c4aa1a39018 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -50,7 +50,7 @@
 
     <xerces.jdiff.version>2.12.2</xerces.jdiff.version>
 
-    <kafka.version>2.8.2</kafka.version>
+    <kafka.version>3.4.0</kafka.version>
 
     <commons-daemon.version>1.0.13</commons-daemon.version>
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch trunk updated: HADOOP-19253. Google GCS compilation fails due to VectorIO changes (#7002)

2024-08-19 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2fd7cf53fac HADOOP-19253. Google GCS compilation fails due to VectorIO 
changes (#7002)
2fd7cf53fac is described below

commit 2fd7cf53facec3aa649f1f2cc53f8e21c209e178
Author: Steve Loughran 
AuthorDate: Mon Aug 19 19:54:47 2024 +0100

HADOOP-19253. Google GCS compilation fails due to VectorIO changes (#7002)

Fixes a compilation failure caused by HADOOP-19098

Restore original sortRanges() method signature,
  FileRange[] sortRanges(List<? extends FileRange>)

This ensures that google GCS connector will compile again.
It has also been marked as Stable so it is left alone

The version returning List<? extends FileRange>
has been renamed sortRangeList()

Contributed by Steve Loughran
---
 .../org/apache/hadoop/fs/VectoredReadUtils.java| 17 +--
 .../hadoop/fs/impl/TestVectoredReadUtils.java  | 35 ++
 2 files changed, 44 insertions(+), 8 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
index fa0440620a4..2f99edc910c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
@@ -308,7 +308,7 @@ public final class VectoredReadUtils {
   validateRangeRequest(input.get(0));
   sortedRanges = input;
 } else {
-  sortedRanges = sortRanges(input);
+  sortedRanges = sortRangeList(input);
   FileRange prev = null;
   for (final FileRange current : sortedRanges) {
 validateRangeRequest(current);
@@ -341,12 +341,25 @@ public final class VectoredReadUtils {
* @param input input ranges.
* @return a new list of the ranges, sorted by offset.
*/
-  public static List<? extends FileRange> sortRanges(List<? extends FileRange> input) {
+  public static List<? extends FileRange> sortRangeList(List<? extends FileRange> input) {
 final List<? extends FileRange> l = new ArrayList<>(input);
 l.sort(Comparator.comparingLong(FileRange::getOffset));
 return l;
   }
 
+  /**
+   * Sort the input ranges by offset; no validation is done.
+   * <p>
+   * This method is used externally and must be retained with
+   * the signature unchanged.
+   * @param input input ranges.
+   * @return a new list of the ranges, sorted by offset.
+   */
+  @InterfaceStability.Stable
+  public static FileRange[] sortRanges(List<? extends FileRange> input) {
+return sortRangeList(input).toArray(new FileRange[0]);
+  }
+
   /**
* Merge sorted ranges to optimize the access from the underlying file
* system.
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
index 3fd3fe4d1f4..b08fc95279a 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
@@ -23,6 +23,7 @@ import java.io.IOException;
 import java.nio.ByteBuffer;
 import java.nio.IntBuffer;
 import java.util.Collections;
+import java.util.Comparator;
 import java.util.List;
 import java.util.Optional;
 import java.util.concurrent.CompletableFuture;
@@ -47,6 +48,7 @@ import static 
org.apache.hadoop.fs.VectoredReadUtils.isOrderedDisjoint;
 import static org.apache.hadoop.fs.VectoredReadUtils.mergeSortedRanges;
 import static org.apache.hadoop.fs.VectoredReadUtils.readRangeFrom;
 import static org.apache.hadoop.fs.VectoredReadUtils.readVectored;
+import static org.apache.hadoop.fs.VectoredReadUtils.sortRangeList;
 import static org.apache.hadoop.fs.VectoredReadUtils.sortRanges;
 import static org.apache.hadoop.fs.VectoredReadUtils.validateAndSortRanges;
 import static org.apache.hadoop.test.LambdaTestUtils.intercept;
@@ -196,7 +198,7 @@ public class TestVectoredReadUtils extends HadoopTestBase {
 );
 assertIsNotOrderedDisjoint(input, 100, 800);
 final List<CombinedFileRange> outputList = mergeSortedRanges(
-sortRanges(input), 100, 1001, 2500);
+sortRangeList(input), 100, 1001, 2500);
 
 assertRangeListSize(outputList, 1);
 CombinedFileRange output = outputList.get(0);
@@ -208,7 +210,7 @@ public class TestVectoredReadUtils extends HadoopTestBase {
 // the minSeek doesn't allow the first two to merge
 assertIsNotOrderedDisjoint(input, 100, 100);
 final List<CombinedFileRange> list2 = mergeSortedRanges(
-sortRanges(input),
+sortRangeList(input),
 100, 1000, 2100);
 assertRangeListSize(list2, 2);
 assertRangeElement(list2, 0, 1000, 100);
@@ -219,7 +221,
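
For code outside Hadoop the practical effect is that the array-returning entry point compiles again. A minimal sketch of the external calling pattern the restored signature protects, assuming hadoop-common is on the classpath (offsets and lengths are arbitrary; SortRangesDemo is a hypothetical class name):

```java
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.fs.FileRange;
import org.apache.hadoop.fs.VectoredReadUtils;

public final class SortRangesDemo {
  public static void main(String[] args) {
    List<FileRange> ranges = Arrays.asList(
        FileRange.createFileRange(4096, 100),  // deliberately out of order
        FileRange.createFileRange(0, 100));
    // The restored, array-returning signature relied on by the GCS connector.
    FileRange[] sorted = VectoredReadUtils.sortRanges(ranges);
    System.out.println(sorted[0].getOffset());  // prints 0
  }
}
```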

(hadoop) branch branch-3.4.1 updated: HADOOP-19253. Google GCS compilation fails due to VectorIO changes (#7002)

2024-08-19 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4.1 by this push:
 new 75845e66851 HADOOP-19253. Google GCS compilation fails due to VectorIO 
changes (#7002)
75845e66851 is described below

commit 75845e6685126713a01a692b47eaf77f26f62849
Author: Steve Loughran 
AuthorDate: Mon Aug 19 19:54:47 2024 +0100

HADOOP-19253. Google GCS compilation fails due to VectorIO changes (#7002)

Fixes a compilation failure caused by HADOOP-19098

Restore original sortRanges() method signature,
  FileRange[] sortRanges(List<? extends FileRange>)

This ensures that google GCS connector will compile again.
It has also been marked as Stable so it is left alone

The version returning List<? extends FileRange>
has been renamed sortRangeList()

Contributed by Steve Loughran
---
 .../org/apache/hadoop/fs/VectoredReadUtils.java| 17 +--
 .../hadoop/fs/impl/TestVectoredReadUtils.java  | 35 ++
 2 files changed, 44 insertions(+), 8 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
index fa0440620a4..2f99edc910c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
@@ -308,7 +308,7 @@ public final class VectoredReadUtils {
   validateRangeRequest(input.get(0));
   sortedRanges = input;
 } else {
-  sortedRanges = sortRanges(input);
+  sortedRanges = sortRangeList(input);
   FileRange prev = null;
   for (final FileRange current : sortedRanges) {
 validateRangeRequest(current);
@@ -341,12 +341,25 @@ public final class VectoredReadUtils {
* @param input input ranges.
* @return a new list of the ranges, sorted by offset.
*/
-  public static List<? extends FileRange> sortRanges(List<? extends FileRange> input) {
+  public static List<? extends FileRange> sortRangeList(List<? extends FileRange> input) {
 final List<? extends FileRange> l = new ArrayList<>(input);
 l.sort(Comparator.comparingLong(FileRange::getOffset));
 return l;
   }
 
+  /**
+   * Sort the input ranges by offset; no validation is done.
+   * <p>
+   * This method is used externally and must be retained with
+   * the signature unchanged.
+   * @param input input ranges.
+   * @return a new list of the ranges, sorted by offset.
+   */
+  @InterfaceStability.Stable
+  public static FileRange[] sortRanges(List<? extends FileRange> input) {
+return sortRangeList(input).toArray(new FileRange[0]);
+  }
+
   /**
* Merge sorted ranges to optimize the access from the underlying file
* system.
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
index 3fd3fe4d1f4..b08fc95279a 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
@@ -23,6 +23,7 @@ import java.io.IOException;
 import java.nio.ByteBuffer;
 import java.nio.IntBuffer;
 import java.util.Collections;
+import java.util.Comparator;
 import java.util.List;
 import java.util.Optional;
 import java.util.concurrent.CompletableFuture;
@@ -47,6 +48,7 @@ import static 
org.apache.hadoop.fs.VectoredReadUtils.isOrderedDisjoint;
 import static org.apache.hadoop.fs.VectoredReadUtils.mergeSortedRanges;
 import static org.apache.hadoop.fs.VectoredReadUtils.readRangeFrom;
 import static org.apache.hadoop.fs.VectoredReadUtils.readVectored;
+import static org.apache.hadoop.fs.VectoredReadUtils.sortRangeList;
 import static org.apache.hadoop.fs.VectoredReadUtils.sortRanges;
 import static org.apache.hadoop.fs.VectoredReadUtils.validateAndSortRanges;
 import static org.apache.hadoop.test.LambdaTestUtils.intercept;
@@ -196,7 +198,7 @@ public class TestVectoredReadUtils extends HadoopTestBase {
 );
 assertIsNotOrderedDisjoint(input, 100, 800);
 final List<CombinedFileRange> outputList = mergeSortedRanges(
-sortRanges(input), 100, 1001, 2500);
+sortRangeList(input), 100, 1001, 2500);
 
 assertRangeListSize(outputList, 1);
 CombinedFileRange output = outputList.get(0);
@@ -208,7 +210,7 @@ public class TestVectoredReadUtils extends HadoopTestBase {
 // the minSeek doesn't allow the first two to merge
 assertIsNotOrderedDisjoint(input, 100, 100);
 final List<CombinedFileRange> list2 = mergeSortedRanges(
-sortRanges(input),
+sortRangeList(input),
 100, 1000, 2100);
 assertRangeListSize(list2, 2);
 assertRangeElement(list2, 0, 1000, 100)

(hadoop) branch branch-3.4 updated: HADOOP-19253. Google GCS compilation fails due to VectorIO changes (#7002)

2024-08-19 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new ed23a83b3cc HADOOP-19253. Google GCS compilation fails due to VectorIO 
changes (#7002)
ed23a83b3cc is described below

commit ed23a83b3cc537f47789a5c3402425bf77e730dd
Author: Steve Loughran 
AuthorDate: Mon Aug 19 19:54:47 2024 +0100

HADOOP-19253. Google GCS compilation fails due to VectorIO changes (#7002)


Fixes a compilation failure caused by HADOOP-19098

Restore original sortRanges() method signature,
  FileRange[] sortRanges(List<? extends FileRange>)

This ensures that google GCS connector will compile again.
It has also been marked as Stable so it is left alone

The version returning List<? extends FileRange>
has been renamed sortRangeList()

Contributed by Steve Loughran
---
 .../org/apache/hadoop/fs/VectoredReadUtils.java| 17 +--
 .../hadoop/fs/impl/TestVectoredReadUtils.java  | 35 ++
 2 files changed, 44 insertions(+), 8 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
index fa0440620a4..2f99edc910c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
@@ -308,7 +308,7 @@ public final class VectoredReadUtils {
   validateRangeRequest(input.get(0));
   sortedRanges = input;
 } else {
-  sortedRanges = sortRanges(input);
+  sortedRanges = sortRangeList(input);
   FileRange prev = null;
   for (final FileRange current : sortedRanges) {
 validateRangeRequest(current);
@@ -341,12 +341,25 @@ public final class VectoredReadUtils {
* @param input input ranges.
* @return a new list of the ranges, sorted by offset.
*/
-  public static List<? extends FileRange> sortRanges(List<? extends FileRange> input) {
+  public static List<? extends FileRange> sortRangeList(List<? extends FileRange> input) {
 final List<? extends FileRange> l = new ArrayList<>(input);
 l.sort(Comparator.comparingLong(FileRange::getOffset));
 return l;
   }
 
+  /**
+   * Sort the input ranges by offset; no validation is done.
+   * <p>
+   * This method is used externally and must be retained with
+   * the signature unchanged.
+   * @param input input ranges.
+   * @return a new list of the ranges, sorted by offset.
+   */
+  @InterfaceStability.Stable
+  public static FileRange[] sortRanges(List<? extends FileRange> input) {
+return sortRangeList(input).toArray(new FileRange[0]);
+  }
+
   /**
* Merge sorted ranges to optimize the access from the underlying file
* system.
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
index 3fd3fe4d1f4..b08fc95279a 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
@@ -23,6 +23,7 @@ import java.io.IOException;
 import java.nio.ByteBuffer;
 import java.nio.IntBuffer;
 import java.util.Collections;
+import java.util.Comparator;
 import java.util.List;
 import java.util.Optional;
 import java.util.concurrent.CompletableFuture;
@@ -47,6 +48,7 @@ import static 
org.apache.hadoop.fs.VectoredReadUtils.isOrderedDisjoint;
 import static org.apache.hadoop.fs.VectoredReadUtils.mergeSortedRanges;
 import static org.apache.hadoop.fs.VectoredReadUtils.readRangeFrom;
 import static org.apache.hadoop.fs.VectoredReadUtils.readVectored;
+import static org.apache.hadoop.fs.VectoredReadUtils.sortRangeList;
 import static org.apache.hadoop.fs.VectoredReadUtils.sortRanges;
 import static org.apache.hadoop.fs.VectoredReadUtils.validateAndSortRanges;
 import static org.apache.hadoop.test.LambdaTestUtils.intercept;
@@ -196,7 +198,7 @@ public class TestVectoredReadUtils extends HadoopTestBase {
 );
 assertIsNotOrderedDisjoint(input, 100, 800);
 final List<CombinedFileRange> outputList = mergeSortedRanges(
-sortRanges(input), 100, 1001, 2500);
+sortRangeList(input), 100, 1001, 2500);
 
 assertRangeListSize(outputList, 1);
 CombinedFileRange output = outputList.get(0);
@@ -208,7 +210,7 @@ public class TestVectoredReadUtils extends HadoopTestBase {
 // the minSeek doesn't allow the first two to merge
 assertIsNotOrderedDisjoint(input, 100, 100);
 final List<CombinedFileRange> list2 = mergeSortedRanges(
-sortRanges(input),
+sortRangeList(input),
 100, 1000, 2100);
 assertRangeListSize(list2, 2);
 assertRangeElement(list2, 0, 1000, 100)

(hadoop) branch branch-3.4.1 updated: HADOOP-19136. Upgrade commons-io to 2.16.1. (#6704)

2024-08-16 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4.1 by this push:
 new 2daff9320b1 HADOOP-19136. Upgrade commons-io to 2.16.1. (#6704)
2daff9320b1 is described below

commit 2daff9320b14f69a912d8d41e0cdb58fe2852cf3
Author: slfan1989 <55643692+slfan1...@users.noreply.github.com>
AuthorDate: Sat Aug 17 02:42:26 2024 +0800

HADOOP-19136. Upgrade commons-io to 2.16.1. (#6704)

Contributed by Shilun Fan.
---
 LICENSE-binary | 2 +-
 hadoop-project/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 3533218abcd..252f934eac0 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -247,7 +247,7 @@ commons-cli:commons-cli:1.5.0
 commons-codec:commons-codec:1.11
 commons-collections:commons-collections:3.2.2
 commons-daemon:commons-daemon:1.0.13
-commons-io:commons-io:2.14.0
+commons-io:commons-io:2.16.1
 commons-net:commons-net:3.9.0
 de.ruedigermoeller:fst:2.50
 io.grpc:grpc-api:1.53.0
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index a0e4186b801..9e08106bf0a 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -124,7 +124,7 @@
     <commons-collections.version>3.2.2</commons-collections.version>
     <commons-compress.version>1.26.1</commons-compress.version>
     <commons-csv.version>1.9.0</commons-csv.version>
-    <commons-io.version>2.14.0</commons-io.version>
+    <commons-io.version>2.16.1</commons-io.version>
     <commons-lang3.version>3.12.0</commons-lang3.version>
     <commons-logging.version>1.2</commons-logging.version>
     <commons-math3.version>3.6.1</commons-math3.version>


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch branch-3.4 updated: HADOOP-19136. Upgrade commons-io to 2.16.1. (#6704)

2024-08-16 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new 8a3a9509765 HADOOP-19136. Upgrade commons-io to 2.16.1. (#6704)
8a3a9509765 is described below

commit 8a3a9509765fa2ed8a641700fca2c715f5c5aafb
Author: slfan1989 <55643692+slfan1...@users.noreply.github.com>
AuthorDate: Sat Aug 17 02:42:26 2024 +0800

HADOOP-19136. Upgrade commons-io to 2.16.1. (#6704)

Contributed by Shilun Fan.
---
 LICENSE-binary | 2 +-
 hadoop-project/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 3533218abcd..252f934eac0 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -247,7 +247,7 @@ commons-cli:commons-cli:1.5.0
 commons-codec:commons-codec:1.11
 commons-collections:commons-collections:3.2.2
 commons-daemon:commons-daemon:1.0.13
-commons-io:commons-io:2.14.0
+commons-io:commons-io:2.16.1
 commons-net:commons-net:3.9.0
 de.ruedigermoeller:fst:2.50
 io.grpc:grpc-api:1.53.0
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 83fe729ef8f..5b1e55afd91 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -124,7 +124,7 @@
 3.2.2
 1.26.1
 1.9.0
-2.14.0
+2.16.1
 3.12.0
 1.2
 3.6.1


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch trunk updated (bf804cb64be -> b5f88990b72)

2024-08-16 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from bf804cb64be HADOOP-19250. Fix test 
TestServiceInterruptHandling.testRegisterAndRaise (#6987)
 add b5f88990b72 HADOOP-19136. Upgrade commons-io to 2.16.1. (#6704)

No new revisions were added by this update.

Summary of changes:
 LICENSE-binary | 2 +-
 hadoop-project/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop-thirdparty) branch trunk updated: Revert "HADOOP-19252. Release hadoop-thirdparty 1.3.0: version update on trunk"

2024-08-16 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop-thirdparty.git


The following commit(s) were added to refs/heads/trunk by this push:
 new cd3730b  Revert "HADOOP-19252. Release hadoop-thirdparty 1.3.0: 
version update on trunk"
cd3730b is described below

commit cd3730b9a85657a628aeffd15bd84bd183a063dd
Author: Steve Loughran 
AuthorDate: Fri Aug 16 18:49:18 2024 +0100

Revert "HADOOP-19252. Release hadoop-thirdparty 1.3.0: version update on 
trunk"

Roll back so hadoop trunk builds can continue unaffected until
the new artifacts are released

This reverts commit bedef49c7f37ff2c7ce1ab37fa5725a925d4624a.
---
 hadoop-shaded-avro_1_11/pom.xml | 2 +-
 hadoop-shaded-guava/pom.xml | 2 +-
 hadoop-shaded-protobuf_3_25/pom.xml | 2 +-
 pom.xml | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/hadoop-shaded-avro_1_11/pom.xml b/hadoop-shaded-avro_1_11/pom.xml
index 5a07faf..12e4dab 100644
--- a/hadoop-shaded-avro_1_11/pom.xml
+++ b/hadoop-shaded-avro_1_11/pom.xml
@@ -23,7 +23,7 @@
 
 hadoop-thirdparty
 org.apache.hadoop.thirdparty
-1.4.0-SNAPSHOT
+1.3.0-SNAPSHOT
 ..
 
 4.0.0
diff --git a/hadoop-shaded-guava/pom.xml b/hadoop-shaded-guava/pom.xml
index b03646f..fcfbb60 100644
--- a/hadoop-shaded-guava/pom.xml
+++ b/hadoop-shaded-guava/pom.xml
@@ -23,7 +23,7 @@
 
 hadoop-thirdparty
 org.apache.hadoop.thirdparty
-1.4.0-SNAPSHOT
+1.3.0-SNAPSHOT
 ../pom.xml
 
 4.0.0
diff --git a/hadoop-shaded-protobuf_3_25/pom.xml 
b/hadoop-shaded-protobuf_3_25/pom.xml
index 9a90d5a..3e9a98d 100644
--- a/hadoop-shaded-protobuf_3_25/pom.xml
+++ b/hadoop-shaded-protobuf_3_25/pom.xml
@@ -23,7 +23,7 @@
   
 hadoop-thirdparty
 org.apache.hadoop.thirdparty
-1.4.0-SNAPSHOT
+1.3.0-SNAPSHOT
 ../pom.xml
   
   4.0.0
diff --git a/pom.xml b/pom.xml
index 98879d8..e98cf77 100644
--- a/pom.xml
+++ b/pom.xml
@@ -23,7 +23,7 @@
   4.0.0
   org.apache.hadoop.thirdparty
   hadoop-thirdparty
-  1.4.0-SNAPSHOT
+  1.3.0-SNAPSHOT
   
 org.apache
 apache


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch trunk updated: HADOOP-19153. hadoop-common exports logback as a transitive dependency (#6999)

2024-08-16 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 5f93edfd707 HADOOP-19153. hadoop-common exports logback as a 
transitive dependency (#6999)
5f93edfd707 is described below

commit 5f93edfd70784aa4f3ff392ef065c78a6fc532ea
Author: Steve Loughran 
AuthorDate: Fri Aug 16 13:41:35 2024 +0100

HADOOP-19153. hadoop-common exports logback as a transitive dependency 
(#6999)

- Critical: remove the obsolete exclusion list from hadoop-common.
- Diligence: expand the hadoop-project exclusion list to exclude
  all ch.qos.logback artifacts

Contributed by Steve Loughran
---
 hadoop-common-project/hadoop-common/pom.xml | 19 ---
 hadoop-project/pom.xml  |  6 +-
 2 files changed, 1 insertion(+), 24 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/pom.xml 
b/hadoop-common-project/hadoop-common/pom.xml
index 90d66779734..06c6b06ec6a 100644
--- a/hadoop-common-project/hadoop-common/pom.xml
+++ b/hadoop-common-project/hadoop-common/pom.xml
@@ -330,25 +330,6 @@
 
   org.apache.zookeeper
   zookeeper
-  
-
-  org.jboss.netty
-  netty
-
-
-  
-  junit
-  junit
-
-
-  com.sun.jdmk
-  jmxtools
-
-
-  com.sun.jmx
-  jmxri
-
-  
 
 
   io.netty
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 3f0a8b3a85f..8c8f675f98b 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1484,11 +1484,7 @@
   
   
 ch.qos.logback
-logback-core
-  
-  
-ch.qos.logback
-logback-classic
+*
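
A quick downstream check that the wildcard exclusion took effect: with the
patched hadoop-common on the classpath, logback's entry-point class should no
longer resolve. A minimal sketch (the class name is logback's standard logger
class, not something this patch touches):

    // Prints "logback excluded" when no ch.qos.logback artifact leaks through.
    try {
      Class.forName("ch.qos.logback.classic.Logger");
      System.out.println("logback still present on the classpath");
    } catch (ClassNotFoundException e) {
      System.out.println("logback excluded");
    }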
   
 
   


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch branch-3.4.1 updated: HADOOP-19153. hadoop-common exports logback as a transitive dependency (#6999)

2024-08-16 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4.1 by this push:
 new 96a73b819bb HADOOP-19153. hadoop-common exports logback as a 
transitive dependency (#6999)
96a73b819bb is described below

commit 96a73b819bbbaad2cd2f0a1d9e0e6356b67dfeec
Author: Steve Loughran 
AuthorDate: Fri Aug 16 13:41:35 2024 +0100

HADOOP-19153. hadoop-common exports logback as a transitive dependency 
(#6999)

- Critical: remove the obsolete exclusion list from hadoop-common.
- Diligence: expand the hadoop-project exclusion list to exclude
  all ch.qos.logback artifacts

Contributed by Steve Loughran
---
 hadoop-common-project/hadoop-common/pom.xml | 19 ---
 hadoop-project/pom.xml  |  6 +-
 2 files changed, 1 insertion(+), 24 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/pom.xml 
b/hadoop-common-project/hadoop-common/pom.xml
index 66f89210d3a..31fd2923e99 100644
--- a/hadoop-common-project/hadoop-common/pom.xml
+++ b/hadoop-common-project/hadoop-common/pom.xml
@@ -330,25 +330,6 @@
 
   org.apache.zookeeper
   zookeeper
-  
-
-  org.jboss.netty
-  netty
-
-
-  
-  junit
-  junit
-
-
-  com.sun.jdmk
-  jmxtools
-
-
-  com.sun.jmx
-  jmxri
-
-  
 
 
   io.netty
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 3e432963f4f..a0e4186b801 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1479,11 +1479,7 @@
   
   
 ch.qos.logback
-logback-core
-  
-  
-ch.qos.logback
-logback-classic
+*
   
 
   


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch branch-3.4.1 updated (61f157cfc7a -> 89efe8cfc59)

2024-08-16 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch branch-3.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 61f157cfc7a HADOOP-19237. Upgrade to dnsjava 3.6.1 due to CVEs (#6961) 
(#6971)
 add 91f6840c780 HDFS-17591. RBF: Router should follow X-FRAME-OPTIONS 
protection setting (#6963)
 add ac6b2c2b657 HADOOP-19245. S3ABlockOutputStream no longer sends 
progress events in close() (#6974)
 add e2966036563 HADOOP-17609. Make SM4 support optional for OpenSSL native 
code. (#3019)
 add 5ea68e9548a HADOOP-19072. S3A: expand optimisations on stores with 
"fs.s3a.performance.flags" for mkdir (#6543)
 add dd483856f40 HADOOP-19072 S3A: Override fs.s3a.performance.flags for 
tests (ADDENDUM 2) (#6993)
 add 89efe8cfc59 HADOOP-19131. Assist reflection IO with WrappedOperations 
class (#6686)

No new revisions were added by this update.

Summary of changes:
 .../dev-support/findbugsExcludeFile.xml|   6 +
 .../org/apache/hadoop/crypto/OpensslCipher.java|  16 +
 .../hadoop/crypto/OpensslSm4CtrCryptoCodec.java|   4 +
 .../apache/hadoop/fs/CommonPathCapabilities.java   |  16 +
 .../org/apache/hadoop/fs/FSDataInputStream.java|   8 +
 .../main/java/org/apache/hadoop/fs/Options.java|  65 +-
 .../org/apache/hadoop/fs/RawLocalFileSystem.java   |   2 +
 .../org/apache/hadoop/io/wrappedio/WrappedIO.java  | 149 -
 .../hadoop/io/wrappedio/WrappedStatistics.java | 357 +++
 .../hadoop/io/wrappedio/impl/DynamicWrappedIO.java | 500 +++
 .../wrappedio/impl/DynamicWrappedStatistics.java   | 678 +
 .../wrappedio/impl/package-info.java}  |  23 +-
 .../wrappedio/package-info.java}   |  33 +-
 .../apache/hadoop/util/dynamic/BindingUtils.java   | 214 +++
 .../hadoop/util/dynamic/DynConstructors.java   | 273 +
 .../org/apache/hadoop/util/dynamic/DynMethods.java | 544 +
 .../package-info.java} |  25 +-
 .../util/functional/BiFunctionRaisingIOE.java  |  16 +
 .../hadoop/util/functional/CallableRaisingIOE.java |  19 +
 .../hadoop/util/functional/FunctionRaisingIOE.java |  15 +
 .../hadoop/util/functional/FunctionalIO.java   |  23 +-
 .../org/apache/hadoop/util/functional/Tuples.java  |  17 +
 .../src/org/apache/hadoop/crypto/OpensslCipher.c   |  26 +-
 .../filesystem/fsdatainputstreambuilder.md |  95 ++-
 .../filesystem/fsdataoutputstreambuilder.md|   4 +-
 .../org/apache/hadoop/crypto/TestCryptoCodec.java  |  13 +-
 ...tCryptoStreamsWithOpensslSm4CtrCryptoCodec.java |   2 +
 .../apache/hadoop/crypto/TestOpensslCipher.java|  10 +
 .../hadoop/fs/FileContextCreateMkdirBaseTest.java  |  21 +-
 .../contract/AbstractContractBulkDeleteTest.java   |  28 +-
 .../fs/contract/AbstractContractMkdirTest.java |   7 +-
 .../hadoop/fs/contract/ContractTestUtils.java  |  18 +
 .../hadoop/io/wrappedio/impl/TestWrappedIO.java| 484 +++
 .../io/wrappedio/impl/TestWrappedStatistics.java   | 496 +++
 .../apache/hadoop/util/dynamic/Concatenator.java   |  85 +++
 .../hadoop/util/dynamic/TestDynConstructors.java   | 170 ++
 .../apache/hadoop/util/dynamic/TestDynMethods.java | 320 ++
 .../hadoop/util/functional/TestFunctionalIO.java   |  14 +
 .../src/test/resources/log4j.properties|   4 +-
 .../server/federation/router/RouterHttpServer.java |  11 +
 .../router/TestRouterHttpServerXFrame.java |  65 ++
 .../hadoop/fs/contract/hdfs/TestDFSWrappedIO.java  |  41 +-
 .../hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java  |  17 +
 .../apache/hadoop/fs/s3a/S3ABlockOutputStream.java |   3 +-
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java|  11 +-
 .../org/apache/hadoop/fs/s3a/S3AInputPolicy.java   |  24 +-
 .../apache/hadoop/fs/s3a/impl/MkdirOperation.java  |  77 ++-
 .../site/markdown/tools/hadoop-aws/performance.md  |  21 +-
 .../fs/contract/s3a/ITestS3AContractCreate.java|  12 +-
 .../fs/contract/s3a/ITestS3AContractMkdir.java |   9 +
 .../s3a/ITestS3AContractMkdirWithCreatePerf.java   |  68 +++
 .../contract/s3a/ITestS3AContractVectoredRead.java |   4 +-
 .../hadoop/fs/contract/s3a/ITestS3AWrappedIO.java  |  23 +-
 .../hadoop/fs/s3a/ITestS3AFSMainOperations.java|   6 +-
 .../hadoop/fs/s3a/ITestS3AFileOperationCost.java   |  13 +-
 .../hadoop/fs/s3a/ITestS3AFileSystemContract.java  |   5 +-
 .../org/apache/hadoop/fs/s3a/S3ATestUtils.java |  19 +
 .../ITestS3AFileContextCreateMkdir.java|   9 +-
 ... ITestS3AFileContextCreateMkdirCreatePerf.java} |  32 +-
 .../ITestS3AFileContextMainOperations.java |   7 +-
 .../fs/s3a/fileContext/ITestS3AFileContextURI.java |   6 +-
 .../hadoop/fs/s3a/impl/TestOpenFileSupport.java|  43 +-
 .../fs/s3a/performance/ITestCreateFileCost.java|  19 +-
 .../fs/s3a/performance/ITestS3ADeleteCost.java |  13 +-
 .../fs/s3a/p

(hadoop) branch branch-3.4 updated: HADOOP-19153. hadoop-common exports logback as a transitive dependency (#6999)

2024-08-16 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new d93e0b0f02a HADOOP-19153. hadoop-common exports logback as a 
transitive dependency (#6999)
d93e0b0f02a is described below

commit d93e0b0f02a2f8949752bf3cee00e699e87ffeaa
Author: Steve Loughran 
AuthorDate: Fri Aug 16 13:41:35 2024 +0100

HADOOP-19153. hadoop-common exports logback as a transitive dependency 
(#6999)


- Critical: remove the obsolete exclusion list from hadoop-common.
- Diligence: expand the hadoop-project exclusion list to exclude
  all ch.qos.logback artifacts

Contributed by Steve Loughran
---
 hadoop-common-project/hadoop-common/pom.xml | 19 ---
 hadoop-project/pom.xml  |  6 +-
 2 files changed, 1 insertion(+), 24 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/pom.xml 
b/hadoop-common-project/hadoop-common/pom.xml
index 5cd8a5e71da..4dc92419be0 100644
--- a/hadoop-common-project/hadoop-common/pom.xml
+++ b/hadoop-common-project/hadoop-common/pom.xml
@@ -330,25 +330,6 @@
 
   org.apache.zookeeper
   zookeeper
-  
-
-  org.jboss.netty
-  netty
-
-
-  
-  junit
-  junit
-
-
-  com.sun.jdmk
-  jmxtools
-
-
-  com.sun.jmx
-  jmxri
-
-  
 
 
   io.netty
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index d328e1b650e..83fe729ef8f 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1479,11 +1479,7 @@
   
   
 ch.qos.logback
-logback-core
-  
-  
-ch.qos.logback
-logback-classic
+*
   
 
   


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch branch-3.4 updated: HADOOP-19072 S3A: Override fs.s3a.performance.flags for tests (ADDENDUM 2) (#6993)

2024-08-14 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new 54c90d70942 HADOOP-19072 S3A: Override fs.s3a.performance.flags for 
tests (ADDENDUM 2) (#6993)
54c90d70942 is described below

commit 54c90d70942f27b1feeaf2a1746137c4eba6a6b3
Author: Viraj Jasani 
AuthorDate: Wed Aug 14 02:57:44 2024 -0700

HADOOP-19072 S3A: Override fs.s3a.performance.flags for tests (ADDENDUM 2) 
(#6993)

Second follow-up to #6543; all hadoop-aws integration tests complete 
correctly even when

fs.s3a.performance.flags = *

Contributed by Viraj Jasani
---
 .../fs/contract/s3a/ITestS3AContractCreate.java   | 12 
 .../hadoop/fs/contract/s3a/ITestS3AContractMkdir.java | 14 --
 .../s3a/ITestS3AContractMkdirWithCreatePerf.java  | 13 +++--
 .../hadoop/fs/s3a/ITestS3AFSMainOperations.java   |  6 +-
 .../hadoop/fs/s3a/ITestS3AFileOperationCost.java  | 13 -
 .../hadoop/fs/s3a/ITestS3AFileSystemContract.java |  5 -
 .../java/org/apache/hadoop/fs/s3a/S3ATestUtils.java   | 19 +++
 .../fileContext/ITestS3AFileContextCreateMkdir.java   | 12 
 .../ITestS3AFileContextCreateMkdirCreatePerf.java | 12 +++-
 .../ITestS3AFileContextMainOperations.java|  7 ++-
 .../fs/s3a/fileContext/ITestS3AFileContextURI.java|  6 +-
 .../fs/s3a/performance/ITestCreateFileCost.java   | 11 ---
 .../hadoop/fs/s3a/performance/ITestS3ADeleteCost.java | 13 -
 13 files changed, 69 insertions(+), 74 deletions(-)

diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractCreate.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractCreate.java
index a1067ddc0ec..a6590e99e6c 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractCreate.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractCreate.java
@@ -29,9 +29,7 @@ import 
org.apache.hadoop.fs.contract.AbstractContractCreateTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
 import org.apache.hadoop.fs.s3a.S3ATestUtils;
 
-import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_CREATE_PERFORMANCE;
-import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_PERFORMANCE_FLAGS;
-import static 
org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides;
+import static org.apache.hadoop.fs.s3a.S3ATestUtils.setPerformanceFlags;
 
 /**
  * S3A contract tests creating files.
@@ -70,11 +68,9 @@ public class ITestS3AContractCreate extends 
AbstractContractCreateTest {
 
   @Override
   protected Configuration createConfiguration() {
-final Configuration conf = super.createConfiguration();
-removeBaseAndBucketOverrides(conf,
-FS_S3A_CREATE_PERFORMANCE,
-FS_S3A_PERFORMANCE_FLAGS);
-conf.setBoolean(FS_S3A_CREATE_PERFORMANCE, createPerformance);
+final Configuration conf = setPerformanceFlags(
+super.createConfiguration(),
+createPerformance ? "create" : "");
 S3ATestUtils.disableFilesystemCaching(conf);
 return conf;
   }
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractMkdir.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractMkdir.java
index bce67ed67f3..847f6980b56 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractMkdir.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractMkdir.java
@@ -22,9 +22,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.contract.AbstractContractMkdirTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
 
-import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_CREATE_PERFORMANCE;
-import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_PERFORMANCE_FLAGS;
-import static 
org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides;
+import static org.apache.hadoop.fs.s3a.S3ATestUtils.setPerformanceFlags;
 
 /**
  * Test dir operations on S3A.
@@ -33,13 +31,9 @@ public class ITestS3AContractMkdir extends 
AbstractContractMkdirTest {
 
   @Override
   protected Configuration createConfiguration() {
-Configuration conf = super.createConfiguration();
-removeBaseAndBucketOverrides(
-conf,
-FS_S3A_CREATE_PERFORMANCE,
-FS_S3A_PERFORMANCE_FLAGS);
-conf.set(FS_S3A_PERFORMANCE_FLAGS, "");
-return conf;
+return setPerformanceFlags(
+super.createConfiguration(),
+"");
   }
 
   @Override
diff --git 
a/hadoop-tools/hadoop-aws/src/tes
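
The remaining hunks repeat the same substitution. A hedged reconstruction of
what the new S3ATestUtils.setPerformanceFlags() helper evidently does,
inferred from the call sites above rather than copied from the patch:

    /**
     * Clear base/bucket overrides of the two performance options, then set
     * fs.s3a.performance.flags to the given value. Inferred from call sites;
     * not the verbatim implementation.
     */
    public static Configuration setPerformanceFlags(Configuration conf, String flags) {
      removeBaseAndBucketOverrides(conf,
          FS_S3A_CREATE_PERFORMANCE,
          FS_S3A_PERFORMANCE_FLAGS);
      if (flags != null) {
        conf.set(FS_S3A_PERFORMANCE_FLAGS, flags);
      }
      return conf;
    }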

(hadoop) branch trunk updated: HADOOP-19072 S3A: Override fs.s3a.performance.flags for tests (ADDENDUM 2) (#6993)

2024-08-14 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new fa83c9a8050 HADOOP-19072 S3A: Override fs.s3a.performance.flags for 
tests (ADDENDUM 2) (#6993)
fa83c9a8050 is described below

commit fa83c9a805041b94b3663b773e99e8074c534770
Author: Viraj Jasani 
AuthorDate: Wed Aug 14 02:57:44 2024 -0700

HADOOP-19072 S3A: Override fs.s3a.performance.flags for tests (ADDENDUM 2) 
(#6993)


Second follow-up to #6543; all hadoop-aws integration tests complete 
correctly even when

fs.s3a.performance.flags = *

Contributed by Viraj Jasani
---
 .../fs/contract/s3a/ITestS3AContractCreate.java   | 12 
 .../hadoop/fs/contract/s3a/ITestS3AContractMkdir.java | 14 --
 .../s3a/ITestS3AContractMkdirWithCreatePerf.java  | 13 +++--
 .../hadoop/fs/s3a/ITestS3AFSMainOperations.java   |  6 +-
 .../hadoop/fs/s3a/ITestS3AFileOperationCost.java  | 13 -
 .../hadoop/fs/s3a/ITestS3AFileSystemContract.java |  5 -
 .../java/org/apache/hadoop/fs/s3a/S3ATestUtils.java   | 19 +++
 .../fileContext/ITestS3AFileContextCreateMkdir.java   | 12 
 .../ITestS3AFileContextCreateMkdirCreatePerf.java | 12 +++-
 .../ITestS3AFileContextMainOperations.java|  7 ++-
 .../fs/s3a/fileContext/ITestS3AFileContextURI.java|  6 +-
 .../fs/s3a/performance/ITestCreateFileCost.java   | 11 ---
 .../hadoop/fs/s3a/performance/ITestS3ADeleteCost.java | 13 -
 13 files changed, 69 insertions(+), 74 deletions(-)

diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractCreate.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractCreate.java
index a1067ddc0ec..a6590e99e6c 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractCreate.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractCreate.java
@@ -29,9 +29,7 @@ import 
org.apache.hadoop.fs.contract.AbstractContractCreateTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
 import org.apache.hadoop.fs.s3a.S3ATestUtils;
 
-import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_CREATE_PERFORMANCE;
-import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_PERFORMANCE_FLAGS;
-import static 
org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides;
+import static org.apache.hadoop.fs.s3a.S3ATestUtils.setPerformanceFlags;
 
 /**
  * S3A contract tests creating files.
@@ -70,11 +68,9 @@ public class ITestS3AContractCreate extends 
AbstractContractCreateTest {
 
   @Override
   protected Configuration createConfiguration() {
-final Configuration conf = super.createConfiguration();
-removeBaseAndBucketOverrides(conf,
-FS_S3A_CREATE_PERFORMANCE,
-FS_S3A_PERFORMANCE_FLAGS);
-conf.setBoolean(FS_S3A_CREATE_PERFORMANCE, createPerformance);
+final Configuration conf = setPerformanceFlags(
+super.createConfiguration(),
+createPerformance ? "create" : "");
 S3ATestUtils.disableFilesystemCaching(conf);
 return conf;
   }
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractMkdir.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractMkdir.java
index bce67ed67f3..847f6980b56 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractMkdir.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractMkdir.java
@@ -22,9 +22,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.contract.AbstractContractMkdirTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
 
-import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_CREATE_PERFORMANCE;
-import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_PERFORMANCE_FLAGS;
-import static 
org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides;
+import static org.apache.hadoop.fs.s3a.S3ATestUtils.setPerformanceFlags;
 
 /**
  * Test dir operations on S3A.
@@ -33,13 +31,9 @@ public class ITestS3AContractMkdir extends 
AbstractContractMkdirTest {
 
   @Override
   protected Configuration createConfiguration() {
-Configuration conf = super.createConfiguration();
-removeBaseAndBucketOverrides(
-conf,
-FS_S3A_CREATE_PERFORMANCE,
-FS_S3A_PERFORMANCE_FLAGS);
-conf.set(FS_S3A_PERFORMANCE_FLAGS, "");
-return conf;
+return setPerformanceFlags(
+super.createConfiguration(),
+"");
   }
 
   @Override
diff --git 
a/hadoop-tools/hadoop-aws/src/tes

(hadoop) branch branch-3.4 updated: HADOOP-19072. S3A: expand optimisations on stores with "fs.s3a.performance.flags" for mkdir (#6543)

2024-08-12 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new 0755b93ce45 HADOOP-19072. S3A: expand optimisations on stores with 
"fs.s3a.performance.flags" for mkdir (#6543)
0755b93ce45 is described below

commit 0755b93ce450f269eb211fb985c8a9ff9048a328
Author: Viraj Jasani 
AuthorDate: Thu Aug 8 09:48:51 2024 -0700

HADOOP-19072. S3A: expand optimisations on stores with 
"fs.s3a.performance.flags" for mkdir (#6543)

If the flag list in fs.s3a.performance.flags includes "mkdir",
then the safety check (a walk up the tree to verify that the directory
is not being created under a file) is skipped.

This saves the cost of multiple list operations.

Includes:

HADOOP-19072. S3A: Override fs.s3a.performance.flags for tests (ADDENDUM) 
(#6985)

This is a follow-up to #6543 which ensures all tests pass in configurations
where fs.s3a.performance.flags is set to "*" or contains "mkdirs"

Contributed by VJ Jasani
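
In use, the optimisation is enabled through configuration alone; a minimal
sketch (imports elided), with a hypothetical bucket name:

    Configuration conf = new Configuration();
    conf.set("fs.s3a.performance.flags", "mkdir");  // skip the parent-path probes
    FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
    fs.mkdirs(new Path("/deep/tree/of/dirs"));      // saves the extra LIST calls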
---
 .../filesystem/fsdataoutputstreambuilder.md|  4 +-
 .../hadoop/fs/FileContextCreateMkdirBaseTest.java  | 21 +++---
 .../fs/contract/AbstractContractMkdirTest.java |  7 +-
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java|  7 +-
 .../apache/hadoop/fs/s3a/impl/MkdirOperation.java  | 77 +-
 .../site/markdown/tools/hadoop-aws/performance.md  | 21 +-
 .../fs/contract/s3a/ITestS3AContractMkdir.java | 15 +
 .../s3a/ITestS3AContractMkdirWithCreatePerf.java   | 75 +
 .../ITestS3AFileContextCreateMkdir.java| 11 +++-
 .../ITestS3AFileContextCreateMkdirCreatePerf.java  | 67 +++
 10 files changed, 271 insertions(+), 34 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdataoutputstreambuilder.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdataoutputstreambuilder.md
index 5f24e755697..7dd3170036c 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdataoutputstreambuilder.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdataoutputstreambuilder.md
@@ -200,8 +200,8 @@ Prioritize file creation performance over safety checks for 
filesystem consisten
 This:
 1. Skips the `LIST` call which makes sure a file is being created over a 
directory.
Risk: a file is created over a directory.
-1. Ignores the overwrite flag.
-1. Never issues a `DELETE` call to delete parent directory markers.
+2. Ignores the overwrite flag.
+3. Never issues a `DELETE` call to delete parent directory markers.
 
 It is possible to probe an S3A Filesystem instance for this capability through
 the `hasPathCapability(path, "fs.s3a.create.performance")` check.
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextCreateMkdirBaseTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextCreateMkdirBaseTest.java
index fbd598c9deb..fcb1b6925a4 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextCreateMkdirBaseTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextCreateMkdirBaseTest.java
@@ -27,6 +27,7 @@ import org.junit.Test;
 import static org.apache.hadoop.fs.FileContextTestHelper.*;
 import static 
org.apache.hadoop.fs.contract.ContractTestUtils.assertIsDirectory;
 import static org.apache.hadoop.fs.contract.ContractTestUtils.assertIsFile;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
 
 import org.apache.hadoop.test.GenericTestUtils;
 import org.slf4j.event.Level;
@@ -55,7 +56,10 @@ public abstract class FileContextCreateMkdirBaseTest {
 
   protected final FileContextTestHelper fileContextTestHelper;
   protected static FileContext fc;
-  
+
+  public static final String MKDIR_FILE_PRESENT_ERROR =
+  " should have failed as a file was present";
+
   static {
 GenericTestUtils.setLogLevel(FileSystem.LOG, Level.DEBUG);
   }
@@ -128,7 +132,7 @@ public abstract class FileContextCreateMkdirBaseTest {
   }
 
   @Test
-  public void testMkdirRecursiveWithExistingFile() throws IOException {
+  public void testMkdirRecursiveWithExistingFile() throws Exception {
 Path f = getTestRootPath(fc, "NonExistant3/aDir");
 fc.mkdir(f, FileContext.DEFAULT_PERM, true);
 assertIsDirectory(fc.getFileStatus(f));
@@ -141,13 +145,12 @@ public abstract class FileContextCreateMkdirBaseTest {
 
 // try creating another folder which conflicts with filePath
 Path dirPath = new Path(filePath, "bDir/cDir");
-try {
-  fc.mkdir(dirPath, FileContext.DEFAULT_PER
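
The performance.md hunk above also references a capability probe; a minimal
sketch of using it (path and conf are assumed to be in scope):

    // Ask the store whether create-performance optimisations are active.
    FileSystem fs = path.getFileSystem(conf);
    if (fs.hasPathCapability(path, "fs.s3a.create.performance")) {
      // safety checks have been traded for speed on this store
    }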

(hadoop) branch trunk updated (321a6cc55ed -> 74ff00705cf)

2024-08-12 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 321a6cc55ed HADOOP-19072. S3A: expand optimisations on stores with 
"fs.s3a.performance.flags" for mkdir (#6543)
 add 74ff00705cf HADOOP-19072. S3A: Override fs.s3a.performance.flags for 
tests (ADDENDUM) (#6985)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/hadoop/fs/contract/s3a/ITestS3AContractMkdir.java  | 8 ++--
 .../hadoop/fs/s3a/fileContext/ITestS3AFileContextCreateMkdir.java | 4 +++-
 2 files changed, 9 insertions(+), 3 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch trunk updated: HADOOP-19072. S3A: expand optimisations on stores with "fs.s3a.performance.flags" for mkdir (#6543)

2024-08-08 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 321a6cc55ed HADOOP-19072. S3A: expand optimisations on stores with 
"fs.s3a.performance.flags" for mkdir (#6543)
321a6cc55ed is described below

commit 321a6cc55ed2df5222bde7b5c801322e8cf68203
Author: Viraj Jasani 
AuthorDate: Thu Aug 8 09:48:51 2024 -0700

HADOOP-19072. S3A: expand optimisations on stores with 
"fs.s3a.performance.flags" for mkdir (#6543)


If the flag list in fs.s3a.performance.flags includes "mkdir",
then the safety check (a walk up the tree to verify that the directory
is not being created under a file) is skipped.

This saves the cost of multiple list operations.

Contributed by Viraj Jasani
---
 .../filesystem/fsdataoutputstreambuilder.md|  4 +-
 .../hadoop/fs/FileContextCreateMkdirBaseTest.java  | 21 +++---
 .../fs/contract/AbstractContractMkdirTest.java |  7 +-
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java|  7 +-
 .../apache/hadoop/fs/s3a/impl/MkdirOperation.java  | 77 +-
 .../site/markdown/tools/hadoop-aws/performance.md  | 21 +-
 .../fs/contract/s3a/ITestS3AContractMkdir.java | 11 
 .../s3a/ITestS3AContractMkdirWithCreatePerf.java   | 75 +
 .../ITestS3AFileContextCreateMkdir.java|  9 ++-
 .../ITestS3AFileContextCreateMkdirCreatePerf.java  | 67 +++
 10 files changed, 265 insertions(+), 34 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdataoutputstreambuilder.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdataoutputstreambuilder.md
index 5f24e755697..7dd3170036c 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdataoutputstreambuilder.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdataoutputstreambuilder.md
@@ -200,8 +200,8 @@ Prioritize file creation performance over safety checks for 
filesystem consisten
 This:
 1. Skips the `LIST` call which makes sure a file is being created over a 
directory.
Risk: a file is created over a directory.
-1. Ignores the overwrite flag.
-1. Never issues a `DELETE` call to delete parent directory markers.
+2. Ignores the overwrite flag.
+3. Never issues a `DELETE` call to delete parent directory markers.
 
 It is possible to probe an S3A Filesystem instance for this capability through
 the `hasPathCapability(path, "fs.s3a.create.performance")` check.
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextCreateMkdirBaseTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextCreateMkdirBaseTest.java
index fbd598c9deb..fcb1b6925a4 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextCreateMkdirBaseTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextCreateMkdirBaseTest.java
@@ -27,6 +27,7 @@ import org.junit.Test;
 import static org.apache.hadoop.fs.FileContextTestHelper.*;
 import static 
org.apache.hadoop.fs.contract.ContractTestUtils.assertIsDirectory;
 import static org.apache.hadoop.fs.contract.ContractTestUtils.assertIsFile;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
 
 import org.apache.hadoop.test.GenericTestUtils;
 import org.slf4j.event.Level;
@@ -55,7 +56,10 @@ public abstract class FileContextCreateMkdirBaseTest {
 
   protected final FileContextTestHelper fileContextTestHelper;
   protected static FileContext fc;
-  
+
+  public static final String MKDIR_FILE_PRESENT_ERROR =
+  " should have failed as a file was present";
+
   static {
 GenericTestUtils.setLogLevel(FileSystem.LOG, Level.DEBUG);
   }
@@ -128,7 +132,7 @@ public abstract class FileContextCreateMkdirBaseTest {
   }
 
   @Test
-  public void testMkdirRecursiveWithExistingFile() throws IOException {
+  public void testMkdirRecursiveWithExistingFile() throws Exception {
 Path f = getTestRootPath(fc, "NonExistant3/aDir");
 fc.mkdir(f, FileContext.DEFAULT_PERM, true);
 assertIsDirectory(fc.getFileStatus(f));
@@ -141,13 +145,12 @@ public abstract class FileContextCreateMkdirBaseTest {
 
 // try creating another folder which conflicts with filePath
 Path dirPath = new Path(filePath, "bDir/cDir");
-try {
-  fc.mkdir(dirPath, FileContext.DEFAULT_PERM, true);
-  Assert.fail("Mkdir for " + dirPath
-  + " should have failed as a file was present");
-} catch(IOException e) {
-  // failed as expected
-}
+intercept(
+IOException.class,
+null,
+"Mkdir for &qu

(hadoop) branch branch-3.4 updated: HADOOP-19245. S3ABlockOutputStream no longer sends progress events in close() (#6974)

2024-08-02 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new b9b650b8437 HADOOP-19245. S3ABlockOutputStream no longer sends 
progress events in close() (#6974)
b9b650b8437 is described below

commit b9b650b8437e697b201fc16ac4dddb4e41c4acc1
Author: Steve Loughran 
AuthorDate: Fri Aug 2 16:01:03 2024 +0100

HADOOP-19245. S3ABlockOutputStream no longer sends progress events in 
close() (#6974)

Contributed by Steve Loughran
---
 .../main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java  | 3 ++-
 .../org/apache/hadoop/fs/s3a/performance/ITestCreateFileCost.java | 8 
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java
index de0f59154e9..5fe39ac6ea3 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java
@@ -1100,7 +1100,8 @@ class S3ABlockOutputStream extends OutputStream implements
   this.progress = progress;
 }
 
-public void progressChanged(ProgressListenerEvent eventType, int 
bytesTransferred) {
+@Override
+public void progressChanged(ProgressListenerEvent eventType, long 
bytesTransferred) {
   if (progress != null) {
 progress.progress();
   }
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/performance/ITestCreateFileCost.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/performance/ITestCreateFileCost.java
index c9a7415c181..5bd4bf412ff 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/performance/ITestCreateFileCost.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/performance/ITestCreateFileCost.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.fs.s3a.performance;
 import java.io.IOException;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.concurrent.atomic.AtomicLong;
 
 import org.assertj.core.api.Assertions;
 import org.junit.Test;
@@ -213,8 +214,11 @@ public class ITestCreateFileCost extends 
AbstractS3ACostTest {
 S3AFileSystem fs = getFileSystem();
 
 Path path = methodPath();
+// increment progress events
+AtomicLong progressEvents = new AtomicLong(0);
 FSDataOutputStreamBuilder builder = fs.createFile(path)
 .overwrite(false)
+.progress(progressEvents::incrementAndGet)
 .recursive();
 
 // this has a broken return type; something to do with the return value of
@@ -225,6 +229,10 @@ public class ITestCreateFileCost extends 
AbstractS3ACostTest {
 always(NO_HEAD_OR_LIST),
 with(OBJECT_BULK_DELETE_REQUEST, 0),
 with(OBJECT_DELETE_REQUEST, 0));
+
+Assertions.assertThat(progressEvents.get())
+.describedAs("progress events")
+.isGreaterThanOrEqualTo(1);
   }
 
   @Test
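
The middle of the test is elided above; a hedged sketch of the step between
building the stream and the assertion shown in the hunk, with the
build/write details assumed rather than taken from the patch:

    FSDataOutputStream out = builder.build();
    out.write('x');   // any payload; progress also fires during the upload
    out.close();      // HADOOP-19245: close() once again emits progress events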


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch trunk updated: HADOOP-19245. S3ABlockOutputStream no longer sends progress events in close() (#6974)

2024-08-02 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2cf4d638af3 HADOOP-19245. S3ABlockOutputStream no longer sends 
progress events in close() (#6974)
2cf4d638af3 is described below

commit 2cf4d638af3520d60a892c94d39cf7a3a784f8f9
Author: Steve Loughran 
AuthorDate: Fri Aug 2 16:01:03 2024 +0100

HADOOP-19245. S3ABlockOutputStream no longer sends progress events in 
close() (#6974)


Contributed by Steve Loughran
---
 .../main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java  | 3 ++-
 .../org/apache/hadoop/fs/s3a/performance/ITestCreateFileCost.java | 8 
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java
index de0f59154e9..5fe39ac6ea3 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java
@@ -1100,7 +1100,8 @@ class S3ABlockOutputStream extends OutputStream implements
   this.progress = progress;
 }
 
-public void progressChanged(ProgressListenerEvent eventType, int 
bytesTransferred) {
+@Override
+public void progressChanged(ProgressListenerEvent eventType, long 
bytesTransferred) {
   if (progress != null) {
 progress.progress();
   }
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/performance/ITestCreateFileCost.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/performance/ITestCreateFileCost.java
index c9a7415c181..5bd4bf412ff 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/performance/ITestCreateFileCost.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/performance/ITestCreateFileCost.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.fs.s3a.performance;
 import java.io.IOException;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.concurrent.atomic.AtomicLong;
 
 import org.assertj.core.api.Assertions;
 import org.junit.Test;
@@ -213,8 +214,11 @@ public class ITestCreateFileCost extends 
AbstractS3ACostTest {
 S3AFileSystem fs = getFileSystem();
 
 Path path = methodPath();
+// increment progress events
+AtomicLong progressEvents = new AtomicLong(0);
 FSDataOutputStreamBuilder builder = fs.createFile(path)
 .overwrite(false)
+.progress(progressEvents::incrementAndGet)
 .recursive();
 
 // this has a broken return type; something to do with the return value of
@@ -225,6 +229,10 @@ public class ITestCreateFileCost extends 
AbstractS3ACostTest {
 always(NO_HEAD_OR_LIST),
 with(OBJECT_BULK_DELETE_REQUEST, 0),
 with(OBJECT_DELETE_REQUEST, 0));
+
+Assertions.assertThat(progressEvents.get())
+.describedAs("progress events")
+.isGreaterThanOrEqualTo(1);
   }
 
   @Test


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch trunk updated: HADOOP-19237. Upgrade to dnsjava 3.6.1 due to CVEs (#6961)

2024-08-01 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c593c17255c HADOOP-19237. Upgrade to dnsjava 3.6.1 due to CVEs (#6961)
c593c17255c is described below

commit c593c17255c06a32b01055e2f4bb2394009bd94a
Author: PJ Fanning 
AuthorDate: Thu Aug 1 20:07:36 2024 +0100

HADOOP-19237. Upgrade to dnsjava 3.6.1 due to CVEs (#6961)


Contributed by P J Fanning
---
 LICENSE-binary| 2 +-
 .../src/test/resources/ensure-jars-have-correct-contents.sh   | 2 ++
 hadoop-client-modules/hadoop-client-runtime/pom.xml   | 3 +++
 .../java/org/apache/hadoop/registry/server/dns/RegistryDNS.java   | 2 +-
 .../org/apache/hadoop/registry/server/dns/TestRegistryDNS.java| 8 
 hadoop-project/pom.xml| 2 +-
 6 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index ff8012096a4..c0eb82f3dab 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -379,7 +379,7 @@ 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/tree
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/compat/{fstatat|openat|unlinkat}.h
 
 com.github.luben:zstd-jni:1.5.2-1
-dnsjava:dnsjava:2.1.7
+dnsjava:dnsjava:3.6.1
 org.codehaus.woodstox:stax2-api:4.2.1
 
 
diff --git 
a/hadoop-client-modules/hadoop-client-check-invariants/src/test/resources/ensure-jars-have-correct-contents.sh
 
b/hadoop-client-modules/hadoop-client-check-invariants/src/test/resources/ensure-jars-have-correct-contents.sh
index 2e927402d25..3a7c5ce7860 100644
--- 
a/hadoop-client-modules/hadoop-client-check-invariants/src/test/resources/ensure-jars-have-correct-contents.sh
+++ 
b/hadoop-client-modules/hadoop-client-check-invariants/src/test/resources/ensure-jars-have-correct-contents.sh
@@ -51,6 +51,8 @@ allowed_expr+="|^[^-]*-default.xml$"
 allowed_expr+="|^[^-]*-version-info.properties$"
 #   * Hadoop's application classloader properties file.
 allowed_expr+="|^org.apache.hadoop.application-classloader.properties$"
+# Comes from dnsjava, not sure if relocatable.
+allowed_expr+="|^messages.properties$"
 # public suffix list used by httpcomponents
 allowed_expr+="|^mozilla/$"
 allowed_expr+="|^mozilla/public-suffix-list.txt$"
diff --git a/hadoop-client-modules/hadoop-client-runtime/pom.xml 
b/hadoop-client-modules/hadoop-client-runtime/pom.xml
index 22c8ae00a3a..8c72f53e918 100644
--- a/hadoop-client-modules/hadoop-client-runtime/pom.xml
+++ b/hadoop-client-modules/hadoop-client-runtime/pom.xml
@@ -229,6 +229,8 @@
 jnamed*
 lookup*
 update*
+META-INF/versions/21/*
+META-INF/versions/21/**/*
   
 
 
@@ -243,6 +245,7 @@
   
 
META-INF/versions/9/module-info.class
 
META-INF/versions/11/module-info.class
+
META-INF/versions/21/module-info.class
   
 
 
diff --git 
a/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/server/dns/RegistryDNS.java
 
b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/server/dns/RegistryDNS.java
index b6de757fc3c..e99c49f7dc6 100644
--- 
a/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/server/dns/RegistryDNS.java
+++ 
b/hadoop-common-project/hadoop-registry/src/main/java/org/apache/hadoop/registry/server/dns/RegistryDNS.java
@@ -1682,7 +1682,7 @@ public class RegistryDNS extends AbstractService 
implements DNSOperations,
   DNSSEC.sign(rRset, dnskeyRecord, privateKey,
   inception, expiration);
   LOG.info("Adding {}", rrsigRecord);
-  rRset.addRR(rrsigRecord);
+  zone.addRecord(rrsigRecord);
 
   //addDSRecord(zone, record.getName(), record.getDClass(),
   //  record.getTTL(), inception, expiration);
diff --git 
a/hadoop-common-project/hadoop-registry/src/test/java/org/apache/hadoop/registry/server/dns/TestRegistryDNS.java
 
b/hadoop-common-project/hadoop-registry/src/test/java/org/apache/hadoop/registry/server/dns/TestRegistryDNS.java
index 56e617144ad..386cb3a196c 100644
--- 
a/hadoop-common-project/hadoop-registry/src/test/java/org/apache/hadoop/registry/server/dns/TestRegistryDNS.java
+++ 
b/hadoop-common-project/hadoop-registry/src/test/java/org/apache/hadoop/registry/server/dns/TestRegistryDNS.java
@@ -350,7 +350,7 @@ public class TestRe

(hadoop) branch trunk updated (a5806a9e7bc -> 038636a1b52)

2024-07-29 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from a5806a9e7bc HADOOP-19161. S3A: option "fs.s3a.performance.flags" to 
take list of performance flags (#6789)
 add 038636a1b52 HADOOP-19238. Fix create-release script for arm64 based 
MacOS (#6962)

No new revisions were added by this update.

Summary of changes:
 dev-support/bin/create-release | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch branch-3.4 updated: HADOOP-19161. S3A: option "fs.s3a.performance.flags" to take list of performance flags (#6789) (#6966)

2024-07-29 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new 414c4a2529a HADOOP-19161. S3A: option "fs.s3a.performance.flags" to 
take list of performance flags (#6789) (#6966)
414c4a2529a is described below

commit 414c4a2529a6725888eea2b6740ca7bbbc8a6f1d
Author: Steve Loughran 
AuthorDate: Mon Jul 29 16:06:30 2024 +0100

HADOOP-19161. S3A: option "fs.s3a.performance.flags" to take list of 
performance flags (#6789) (#6966)


1. Configuration adds new method `getEnumSet()` to get a set of enum values 
from
   a configuration string.
   > <E extends Enum<E>> EnumSet<E> getEnumSet(String key, Class<E> enumClass, boolean ignoreUnknown)

   Whitespace is ignored, case is ignored and the value "*" is mapped to 
"all values of the enum".
   If "ignoreUnknown" is true then when parsing, unknown values are ignored.
   This is recommended for forward compatibility with later versions.

2. This support is implemented in org.apache.hadoop.util.ConfigurationHelper,
so it can be used elsewhere in the Hadoop codebase.

3. A new private FlagSet class in hadoop common manages a set of enum flags.

 It implements StreamCapabilities and can be probed for a specific 
option being set
(with a prefix)


S3A adds an option fs.s3a.performance.flags which builds a FlagSet with enum
type PerformanceFlagEnum

* which initially contains {Create, Delete, Mkdir, Open}
* the existing fs.s3a.create.performance option sets the flag "Create".
* tests which configure fs.s3a.create.performance MUST clear
  fs.s3a.performance.flags in test setup.

Future performance flags are planned, with different levels of safety
and/or backwards compatibility.

Contributed by Steve Loughran
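
A short sketch of the new API as described above; the flag names come from
the PerformanceFlagEnum values listed in this message, and the exact generic
signature is assumed:

    Configuration conf = new Configuration();
    conf.set("fs.s3a.performance.flags", "create, mkdir");  // whitespace/case ignored
    EnumSet<PerformanceFlagEnum> flags =
        conf.getEnumSet("fs.s3a.performance.flags", PerformanceFlagEnum.class, true);
    // "*" maps to all of {Create, Delete, Mkdir, Open}; ignoreUnknown=true
    // skips values added in later releases.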
---
 .../java/org/apache/hadoop/conf/Configuration.java |  22 ++
 .../java/org/apache/hadoop/fs/impl/FlagSet.java| 327 
 .../apache/hadoop/util/ConfigurationHelper.java| 126 ++
 .../org/apache/hadoop/fs/impl/TestFlagSet.java | 431 +
 .../hadoop/util/TestConfigurationHelper.java   | 174 +
 .../java/org/apache/hadoop/fs/s3a/Constants.java   |   5 +
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java|  82 +++-
 .../hadoop/fs/s3a/api/PerformanceFlagEnum.java |  51 +++
 .../apache/hadoop/fs/s3a/impl/StoreContext.java|  19 +-
 .../hadoop/fs/s3a/impl/StoreContextBuilder.java|  17 +-
 .../apache/hadoop/fs/s3a/s3guard/S3GuardTool.java  |   9 +-
 .../site/markdown/tools/hadoop-aws/performance.md  | 110 --
 .../fs/contract/s3a/ITestS3AContractCreate.java|   4 +-
 .../hadoop/fs/s3a/ITestS3AFileOperationCost.java   |   5 +-
 .../org/apache/hadoop/fs/s3a/S3ATestUtils.java |   5 +
 .../fs/s3a/impl/ITestConnectionTimeouts.java   |   4 +-
 .../fs/s3a/performance/AbstractS3ACostTest.java|   3 +-
 .../fs/s3a/performance/ITestCreateFileCost.java|   4 +-
 .../performance/ITestDirectoryMarkerListing.java   |   4 +-
 .../fs/s3a/performance/ITestS3ADeleteCost.java |   5 +-
 .../fs/s3a/tools/AbstractMarkerToolTest.java   |   3 +-
 21 files changed, 1350 insertions(+), 60 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index ea3d6dc74e4..44579b90337 100755
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -49,6 +49,7 @@ import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
 import java.util.Collections;
+import java.util.EnumSet;
 import java.util.Enumeration;
 import java.util.HashMap;
 import java.util.HashSet;
@@ -99,6 +100,7 @@ import org.apache.hadoop.security.alias.CredentialProvider;
 import org.apache.hadoop.security.alias.CredentialProvider.CredentialEntry;
 import org.apache.hadoop.security.alias.CredentialProviderFactory;
 import org.apache.hadoop.thirdparty.com.google.common.base.Strings;
+import org.apache.hadoop.util.ConfigurationHelper;
 import org.apache.hadoop.util.Preconditions;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.util.StringInterner;
@@ -1786,6 +1788,26 @@ public class Configuration implements 
Iterable<Map.Entry<String,String>>,
   : Enum.valueOf(defaultValue.getDeclaringClass(), val);
   }
 
+  /**
+   * Build an enumset from a comma separated list of values.
+   * Case independent.
+   * Special handling of "*" meaning: all values.
+   * @param key key to look for
+   * @param enumClass class of enum
+   * @param ign

(hadoop) branch trunk updated: HADOOP-19161. S3A: option "fs.s3a.performance.flags" to take list of performance flags (#6789)

2024-07-29 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a5806a9e7bc HADOOP-19161. S3A: option "fs.s3a.performance.flags" to 
take list of performance flags (#6789)
a5806a9e7bc is described below

commit a5806a9e7bc6d018de84e6511f10c359f110f78c
Author: Steve Loughran 
AuthorDate: Mon Jul 29 11:33:51 2024 +0100

HADOOP-19161. S3A: option "fs.s3a.performance.flags" to take list of 
performance flags (#6789)



1. Configuration adds new method `getEnumSet()` to get a set of enum values 
from
   a configuration string.
   > <E extends Enum<E>> EnumSet<E> getEnumSet(String key, Class<E> enumClass, boolean ignoreUnknown)

   Whitespace is ignored, case is ignored and the value "*" is mapped to 
"all values of the enum".
   If "ignoreUnknown" is true then when parsing, unknown values are ignored.
   This is recommended for forward compatibility with later versions.

2. This support is implemented in org.apache.hadoop.util.ConfigurationHelper,
so it can be used elsewhere in the Hadoop codebase.

3. A new private FlagSet class in hadoop common manages a set of enum flags.

 It implements StreamCapabilities and can be probed for a specific 
option being set
(with a prefix)


S3A adds an option fs.s3a.performance.flags which builds a FlagSet with enum
type PerformanceFlagEnum

* which initially contains {Create, Delete, Mkdir, Open}
* the existing fs.s3a.create.performance option sets the flag "Create".
* tests which configure fs.s3a.create.performance MUST clear
  fs.s3a.performance.flags in test setup.

Future performance flags are planned, with different levels of safety
and/or backwards compatibility.

Contributed by Steve Loughran
---
 .../java/org/apache/hadoop/conf/Configuration.java |  22 ++
 .../java/org/apache/hadoop/fs/impl/FlagSet.java| 327 
 .../apache/hadoop/util/ConfigurationHelper.java| 126 ++
 .../org/apache/hadoop/fs/impl/TestFlagSet.java | 431 +
 .../hadoop/util/TestConfigurationHelper.java   | 174 +
 .../java/org/apache/hadoop/fs/s3a/Constants.java   |   5 +
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java|  82 +++-
 .../hadoop/fs/s3a/api/PerformanceFlagEnum.java |  51 +++
 .../apache/hadoop/fs/s3a/impl/StoreContext.java|  19 +-
 .../hadoop/fs/s3a/impl/StoreContextBuilder.java|  17 +-
 .../apache/hadoop/fs/s3a/s3guard/S3GuardTool.java  |   9 +-
 .../site/markdown/tools/hadoop-aws/performance.md  | 110 --
 .../fs/contract/s3a/ITestS3AContractCreate.java|   4 +-
 .../hadoop/fs/s3a/ITestS3AFileOperationCost.java   |   5 +-
 .../org/apache/hadoop/fs/s3a/S3ATestUtils.java |   5 +
 .../fs/s3a/impl/ITestConnectionTimeouts.java   |   4 +-
 .../fs/s3a/performance/AbstractS3ACostTest.java|   3 +-
 .../fs/s3a/performance/ITestCreateFileCost.java|   4 +-
 .../performance/ITestDirectoryMarkerListing.java   |   4 +-
 .../fs/s3a/performance/ITestS3ADeleteCost.java |   5 +-
 .../fs/s3a/tools/AbstractMarkerToolTest.java   |   3 +-
 21 files changed, 1350 insertions(+), 60 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index 8fc3a696c4a..94285a4dfb7 100755
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -49,6 +49,7 @@ import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
 import java.util.Collections;
+import java.util.EnumSet;
 import java.util.Enumeration;
 import java.util.HashMap;
 import java.util.HashSet;
@@ -99,6 +100,7 @@ import org.apache.hadoop.security.alias.CredentialProvider;
 import org.apache.hadoop.security.alias.CredentialProvider.CredentialEntry;
 import org.apache.hadoop.security.alias.CredentialProviderFactory;
 import org.apache.hadoop.thirdparty.com.google.common.base.Strings;
+import org.apache.hadoop.util.ConfigurationHelper;
 import org.apache.hadoop.util.Preconditions;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.util.StringInterner;
@@ -1786,6 +1788,26 @@ public class Configuration implements 
Iterable<Map.Entry<String,String>>,
   : Enum.valueOf(defaultValue.getDeclaringClass(), val);
   }
 
+  /**
+   * Build an enumset from a comma separated list of values.
+   * Case independent.
+   * Special handling of "*" meaning: all values.
+   * @param key key to look for
+   * @param enumClass class of enum
+   * @param ignoreUnknown should unkn

(hadoop) branch trunk updated (e2a0dca43b5 -> 4525c7e35ea)

2024-07-23 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from e2a0dca43b5 HDFS-16690. Automatically format unformatted JNs with 
JournalNodeSyncer (#6925). Contributed by Aswin M Prabhu.
 add 4525c7e35ea HADOOP-19197. S3A: Support AWS KMS Encryption Context 
(#6874)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/fs/CommonConfigurationKeysPublic.java   |   1 +
 .../src/main/resources/core-default.xml|  10 ++
 .../java/org/apache/hadoop/fs/s3a/Constants.java   |  10 ++
 .../java/org/apache/hadoop/fs/s3a/S3AUtils.java|  22 -
 .../delegation/EncryptionSecretOperations.java |  16 
 .../fs/s3a/auth/delegation/EncryptionSecrets.java  |  35 ++-
 .../hadoop/fs/s3a/impl/RequestFactoryImpl.java |  14 +++
 .../apache/hadoop/fs/s3a/impl/S3AEncryption.java   | 106 +
 .../site/markdown/tools/hadoop-aws/encryption.md   |  30 ++
 .../src/site/markdown/tools/hadoop-aws/index.md|  14 +++
 .../hadoop/fs/s3a/AbstractTestS3AEncryption.java   |   2 +
 ...stS3AEncryptionSSEKMSWithEncryptionContext.java | 101 
 .../apache/hadoop/fs/s3a/TestSSEConfiguration.java |  69 +++---
 .../fs/s3a/auth/TestMarshalledCredentials.java |   3 +-
 .../delegation/ITestSessionDelegationTokens.java   |   6 +-
 .../delegation/TestS3ADelegationTokenSupport.java  |  24 -
 .../hadoop/fs/s3a/impl/TestRequestFactory.java |   2 +-
 .../hadoop/fs/s3a/impl/TestS3AEncryption.java  |  77 +++
 18 files changed, 513 insertions(+), 29 deletions(-)
 create mode 100644 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/S3AEncryption.java
 create mode 100644 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionSSEKMSWithEncryptionContext.java
 create mode 100644 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/TestS3AEncryption.java





(hadoop) branch branch-3.4 updated: HADOOP-19195. S3A: Upgrade aws sdk v2 to 2.25.53 (#6900)

2024-07-08 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new fc86a52c884 HADOOP-19195. S3A: Upgrade aws sdk v2 to 2.25.53 (#6900)
fc86a52c884 is described below

commit fc86a52c884f15b2f2fb401bbf0baaa36a057651
Author: HarshitGupta11 <50410275+harshitgupt...@users.noreply.github.com>
AuthorDate: Mon Jul 8 14:48:53 2024 +0530

HADOOP-19195. S3A: Upgrade aws sdk v2 to 2.25.53 (#6900)

Contributed by Harshit Gupta
---
 LICENSE-binary | 2 +-
 hadoop-project/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 3ab3ef5d5e2..45616936bb6 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -361,7 +361,7 @@ org.objenesis:objenesis:2.6
 org.xerial.snappy:snappy-java:1.1.10.4
 org.yaml:snakeyaml:2.0
 org.wildfly.openssl:wildfly-openssl:1.1.3.Final
-software.amazon.awssdk:bundle:jar:2.24.6
+software.amazon.awssdk:bundle:jar:2.25.53
 
 
 

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 4e42e3c895e..9b9176a029f 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -187,7 +187,7 @@
 1.0-beta-1
 900
 1.12.720
-2.24.6
+2.25.53
 1.0.1
 2.7.1
 1.11.2





(hadoop) branch trunk updated (8ca4627a0da -> b1d96f6101c)

2024-07-08 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 8ca4627a0da HDFS-17557. Fix bug for 
TestRedundancyMonitor#testChooseTargetWhenAllDataNodesStop (#6897). Contributed 
by Haiyang Hu.
 add b1d96f6101c HADOOP-19195. S3A: Upgrade aws sdk v2 to 2.25.53 (#6900)

No new revisions were added by this update.

Summary of changes:
 LICENSE-binary | 2 +-
 hadoop-project/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)





(hadoop) branch branch-3.4 updated: HADOOP-19205. S3A: initialization/close slower than with v1 SDK (#6892)

2024-07-05 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new b7630e2b36b HADOOP-19205. S3A: initialization/close slower than with 
v1 SDK (#6892)
b7630e2b36b is described below

commit b7630e2b36b95cc4ea118f0961ad61e95018a05c
Author: Steve Loughran 
AuthorDate: Fri Jul 5 16:38:37 2024 +0100

HADOOP-19205. S3A: initialization/close slower than with v1 SDK (#6892)

Adds new ClientManager interface/implementation which provides on-demand
creation of synchronous and asynchronous s3 clients, s3 transfer manager,
and in close() terminates these.

S3A FS is modified to
* Create a ClientManagerImpl instance and pass down to its S3Store.
* Use the same ClientManager interface against S3Store to demand-create
  the services.
* Only create the async client as part of the transfer manager creation,
  which will take place during the first rename() operation.
* Statistics on client creation count and duration are recorded.
+ Statistics on the time to initialize and shutdown the S3A FS are collected
  in IOStatistics for reporting.

Adds to hadoop common class
  LazyAtomicReference implements CallableRaisingIOE, Supplier
and subclass
  LazyAutoCloseableReference
extends LazyAtomicReference implements AutoCloseable

These evaluate the Supplier/CallableRaisingIOE they were
constructed with on the first (successful) read of the value.
Any exception raised during this operation will be rethrown, and on future
evaluations the same operation retried.

These classes implement the Supplier and CallableRaisingIOE
interfaces, so they can be used to implement lazy function evaluation
as Haskell and some other functional languages do.

LazyAutoCloseableReference is AutoCloseable; its close() method will
close the inner reference if it is set.

This class is used in ClientManagerImpl for the lazy S3 client creation
and closure.
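
To make the "retry on failure" semantics concrete, here is a hedged sketch of
a lazy reference behaving as described above. The name LazySketch and its
Supplier-only shape are illustrative; the real LazyAtomicReference and
LazyAutoCloseableReference also work with CallableRaisingIOE and checked
IOExceptions, which this sketch omits:

    import java.util.concurrent.atomic.AtomicReference;
    import java.util.function.Supplier;

    /**
     * The source supplier runs on the first successful get(); if it
     * throws, nothing is cached and the next get() retries it.
     */
    final class LazySketch<T> implements Supplier<T> {

      private final AtomicReference<T> ref = new AtomicReference<>();
      private final Supplier<? extends T> source;

      LazySketch(Supplier<? extends T> source) {
        this.source = source;
      }

      @Override
      public synchronized T get() {
        T value = ref.get();
        if (value == null) {
          value = source.get();  // may throw: the reference stays unset
          ref.set(value);
        }
        return value;
      }
    }

    // usage: the supplier is only invoked on first use
    // Supplier<S3Client> lazyClient = new LazySketch<>(() -> buildClient());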

Contributed by Steve Loughran.
---
 .../fs/statistics/FileSystemStatisticNames.java|  45 +++
 .../hadoop/fs/statistics/StoreStatisticNames.java  |   6 +
 .../hadoop/util/functional/FunctionalIO.java   |  23 +-
 .../apache/hadoop/util/functional/FutureIO.java|  50 +--
 .../util/functional/LazyAtomicReference.java   | 152 +
 .../functional/LazyAutoCloseableReference.java | 102 ++
 .../hadoop/util/functional/TestLazyReferences.java | 263 ++
 .../hadoop-aws/dev-support/findbugs-exclude.xml|   5 -
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java| 259 --
 .../java/org/apache/hadoop/fs/s3a/S3AStore.java|   9 +-
 .../org/apache/hadoop/fs/s3a/S3ClientFactory.java  |   2 -
 .../java/org/apache/hadoop/fs/s3a/Statistic.java   |  16 +
 .../apache/hadoop/fs/s3a/impl/ClientManager.java   |  50 +++
 .../hadoop/fs/s3a/impl/ClientManagerImpl.java  | 238 +
 .../apache/hadoop/fs/s3a/impl/S3AStoreBuilder.java |  21 +-
 .../apache/hadoop/fs/s3a/impl/S3AStoreImpl.java| 121 ---
 .../apache/hadoop/fs/s3a/MockS3AFileSystem.java|   7 +
 .../fs/s3a/commit/staging/StagingTestBase.java |  11 +-
 .../hadoop/fs/s3a/impl/TestClientManager.java  | 379 +
 .../hadoop/fs/s3a/test/StubS3ClientFactory.java| 122 +++
 20 files changed, 1664 insertions(+), 217 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/FileSystemStatisticNames.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/FileSystemStatisticNames.java
new file mode 100644
index 000..cd8df2f8536
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/FileSystemStatisticNames.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.statistics;
+
+import org.apache.hadoop.classification.Interf

(hadoop) branch trunk updated: HADOOP-19205. S3A: initialization/close slower than with v1 SDK (#6892)

2024-07-05 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 4c55adbb6bc HADOOP-19205. S3A: initialization/close slower than with 
v1 SDK (#6892)
4c55adbb6bc is described below

commit 4c55adbb6bc25fe76943535fd97cbd2b6d350e33
Author: Steve Loughran 
AuthorDate: Fri Jul 5 16:38:37 2024 +0100

HADOOP-19205. S3A: initialization/close slower than with v1 SDK (#6892)


Adds new ClientManager interface/implementation which provides on-demand
creation of synchronous and asynchronous s3 clients, s3 transfer manager,
and in close() terminates these.

S3A FS is modified to
* Create a ClientManagerImpl instance and pass down to its S3Store.
* Use the same ClientManager interface against S3Store to demand-create
  the services.
* Only create the async client as part of the transfer manager creation,
  which will take place during the first rename() operation.
* Statistics on client creation count and duration are recorded.
+ Statistics on the time to initialize and shutdown the S3A FS are collected
  in IOStatistics for reporting.

Adds to hadoop common class
  LazyAtomicReference implements CallableRaisingIOE, Supplier
and subclass
  LazyAutoCloseableReference
extends LazyAtomicReference implements AutoCloseable

These evaluate the Supplier/CallableRaisingIOE they were
constructed with on the first (successful) read of the value.
Any exception raised during this operation will be rethrown, and on future
evaluations the same operation retried.

These classes implement the Supplier and CallableRaisingIOE
interfaces, so they can be used to implement lazy function evaluation
as Haskell and some other functional languages do.

LazyAutoCloseableReference is AutoCloseable; its close() method will
close the inner reference if it is set.

This class is used in ClientManagerImpl for the lazy S3 client creation
and closure.

Contributed by Steve Loughran.
---
 .../fs/statistics/FileSystemStatisticNames.java|  45 +++
 .../hadoop/fs/statistics/StoreStatisticNames.java  |   6 +
 .../hadoop/util/functional/FunctionalIO.java   |  23 +-
 .../apache/hadoop/util/functional/FutureIO.java|  50 +--
 .../util/functional/LazyAtomicReference.java   | 152 +
 .../functional/LazyAutoCloseableReference.java | 102 ++
 .../hadoop/util/functional/TestLazyReferences.java | 263 ++
 .../hadoop-aws/dev-support/findbugs-exclude.xml|   5 -
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java| 259 --
 .../java/org/apache/hadoop/fs/s3a/S3AStore.java|   9 +-
 .../org/apache/hadoop/fs/s3a/S3ClientFactory.java  |   2 -
 .../java/org/apache/hadoop/fs/s3a/Statistic.java   |  16 +
 .../apache/hadoop/fs/s3a/impl/ClientManager.java   |  50 +++
 .../hadoop/fs/s3a/impl/ClientManagerImpl.java  | 238 +
 .../apache/hadoop/fs/s3a/impl/S3AStoreBuilder.java |  21 +-
 .../apache/hadoop/fs/s3a/impl/S3AStoreImpl.java| 121 ---
 .../apache/hadoop/fs/s3a/MockS3AFileSystem.java|   7 +
 .../fs/s3a/commit/staging/StagingTestBase.java |  11 +-
 .../hadoop/fs/s3a/impl/TestClientManager.java  | 379 +
 .../hadoop/fs/s3a/test/StubS3ClientFactory.java| 122 +++
 20 files changed, 1664 insertions(+), 217 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/FileSystemStatisticNames.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/FileSystemStatisticNames.java
new file mode 100644
index 000..cd8df2f8536
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/FileSystemStatisticNames.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.statistics;
+
+import org.apache.hadoop.classification.Interf

(hadoop) branch branch-3.4 updated: HADOOP-19210. S3A: Speed up some slow unit tests (#6907)

2024-07-02 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new b5e21f94950 HADOOP-19210. S3A: Speed up some slow unit tests (#6907)
b5e21f94950 is described below

commit b5e21f949508f6837502c6613d8a914b59cf6cde
Author: Steve Loughran 
AuthorDate: Tue Jul 2 11:34:45 2024 +0100

HADOOP-19210. S3A: Speed up some slow unit tests (#6907)

Speed up slow tests
* TestS3AAWSCredentialsProvider: decrease thread pool shutdown time
* TestS3AInputStreamRetry: reduce retry limit and intervals

Contributed by Steve Loughran
---
 .../test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java  | 9 +
 .../org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java  | 8 +---
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java
index f43710cf25e..e76b3046048 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java
@@ -80,6 +80,15 @@ public abstract class AbstractS3AMockTest {
 conf.setInt(ASYNC_DRAIN_THRESHOLD, Integer.MAX_VALUE);
 // set the region to avoid the getBucketLocation on FS init.
 conf.set(AWS_REGION, "eu-west-1");
+
+// tight retry logic as all failures are simulated
+final String interval = "1ms";
+final int limit = 3;
+conf.set(RETRY_THROTTLE_INTERVAL, interval);
+conf.setInt(RETRY_THROTTLE_LIMIT, limit);
+conf.set(RETRY_INTERVAL, interval);
+conf.setInt(RETRY_LIMIT, limit);
+
 return conf;
   }
 
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
index 0ffd7e75b18..d51bc954a63 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
@@ -86,6 +86,8 @@ public class TestS3AAWSCredentialsProvider extends 
AbstractS3ATestBase {
 
   private static final Logger LOG = 
LoggerFactory.getLogger(TestS3AAWSCredentialsProvider.class);
 
+  public static final int TERMINATION_TIMEOUT = 3;
+
   @Test
   public void testProviderWrongClass() throws Exception {
 expectProviderInstantiationFailure(this.getClass(),
@@ -579,7 +581,7 @@ public class TestS3AAWSCredentialsProvider extends 
AbstractS3ATestBase {
 }
   }
 
-  private static final int CONCURRENT_THREADS = 10;
+  private static final int CONCURRENT_THREADS = 4;
 
   @Test
   public void testConcurrentAuthentication() throws Throwable {
@@ -619,7 +621,7 @@ public class TestS3AAWSCredentialsProvider extends 
AbstractS3ATestBase {
 "expectedSecret", credentials.secretAccessKey());
   }
 } finally {
-  pool.awaitTermination(10, TimeUnit.SECONDS);
+  pool.awaitTermination(TERMINATION_TIMEOUT, TimeUnit.SECONDS);
   pool.shutdown();
 }
 
@@ -685,7 +687,7 @@ public class TestS3AAWSCredentialsProvider extends 
AbstractS3ATestBase {
 );
   }
 } finally {
-  pool.awaitTermination(10, TimeUnit.SECONDS);
+  pool.awaitTermination(TERMINATION_TIMEOUT, TimeUnit.SECONDS);
   pool.shutdown();
 }
 





(hadoop) branch trunk updated: HADOOP-19210. S3A: Speed up some slow unit tests (#6907)

2024-07-02 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c33d8686060 HADOOP-19210. S3A: Speed up some slow unit tests (#6907)
c33d8686060 is described below

commit c33d86860606f972f8b743b02f629b14f83d14f2
Author: Steve Loughran 
AuthorDate: Tue Jul 2 11:34:45 2024 +0100

HADOOP-19210. S3A: Speed up some slow unit tests (#6907)



Speed up slow tests
* TestS3AAWSCredentialsProvider: decrease thread pool shutdown time
* TestS3AInputStreamRetry: reduce retry limit and intervals

Contributed by Steve Loughran
---
 .../test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java  | 9 +
 .../org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java  | 8 +---
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java
index f43710cf25e..e76b3046048 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java
@@ -80,6 +80,15 @@ public abstract class AbstractS3AMockTest {
 conf.setInt(ASYNC_DRAIN_THRESHOLD, Integer.MAX_VALUE);
 // set the region to avoid the getBucketLocation on FS init.
 conf.set(AWS_REGION, "eu-west-1");
+
+// tight retry logic as all failures are simulated
+final String interval = "1ms";
+final int limit = 3;
+conf.set(RETRY_THROTTLE_INTERVAL, interval);
+conf.setInt(RETRY_THROTTLE_LIMIT, limit);
+conf.set(RETRY_INTERVAL, interval);
+conf.setInt(RETRY_LIMIT, limit);
+
 return conf;
   }
 
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
index 0ffd7e75b18..d51bc954a63 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AAWSCredentialsProvider.java
@@ -86,6 +86,8 @@ public class TestS3AAWSCredentialsProvider extends 
AbstractS3ATestBase {
 
   private static final Logger LOG = 
LoggerFactory.getLogger(TestS3AAWSCredentialsProvider.class);
 
+  public static final int TERMINATION_TIMEOUT = 3;
+
   @Test
   public void testProviderWrongClass() throws Exception {
 expectProviderInstantiationFailure(this.getClass(),
@@ -579,7 +581,7 @@ public class TestS3AAWSCredentialsProvider extends 
AbstractS3ATestBase {
 }
   }
 
-  private static final int CONCURRENT_THREADS = 10;
+  private static final int CONCURRENT_THREADS = 4;
 
   @Test
   public void testConcurrentAuthentication() throws Throwable {
@@ -619,7 +621,7 @@ public class TestS3AAWSCredentialsProvider extends 
AbstractS3ATestBase {
 "expectedSecret", credentials.secretAccessKey());
   }
 } finally {
-  pool.awaitTermination(10, TimeUnit.SECONDS);
+  pool.awaitTermination(TERMINATION_TIMEOUT, TimeUnit.SECONDS);
   pool.shutdown();
 }
 
@@ -685,7 +687,7 @@ public class TestS3AAWSCredentialsProvider extends 
AbstractS3ATestBase {
 );
   }
 } finally {
-  pool.awaitTermination(10, TimeUnit.SECONDS);
+  pool.awaitTermination(TERMINATION_TIMEOUT, TimeUnit.SECONDS);
   pool.shutdown();
 }
 





(hadoop) branch branch-3.4 updated: HADOOP-19194:Add test to find unshaded dependencies in the aws sdk (#6865)

2024-06-24 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new 2b46ca4ba59 HADOOP-19194:Add test to find unshaded dependencies in the 
aws sdk (#6865)
2b46ca4ba59 is described below

commit 2b46ca4ba59e4a8d62a43ca23b104c30b615141f
Author: HarshitGupta11 <50410275+harshitgupt...@users.noreply.github.com>
AuthorDate: Mon Jun 24 15:11:11 2024 +0530

HADOOP-19194:Add test to find unshaded dependencies in the aws sdk (#6865)

The new test TestAWSV2SDK scans the aws sdk bundle.jar and prints out all
classes which are unshaded, and so at risk of creating classpath problems.

The test does not fail when unshaded classes are found, because the current
SDKs do ship with them; if it did, it would always fail.

The SDK upgrade process should include inspecting the output
of this test to see if it has got worse (do a before/after check).

Once the AWS SDK does shade everything, we can have this
test fail on any regression.

Contributed by Harshit Gupta
---
 .../src/site/markdown/tools/hadoop-aws/testing.md  |  1 +
 .../org/apache/hadoop/fs/sdk/TestAWSV2SDK.java | 94 ++
 2 files changed, 95 insertions(+)

diff --git 
a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md 
b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
index 45d1c847657..7222eee98ba 100644
--- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
+++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
@@ -1184,6 +1184,7 @@ your IDE or via maven.
 1. Run a full AWS-test suite with S3 client-side encryption enabled by
  setting `fs.s3a.encryption.algorithm` to 'CSE-KMS' and setting up AWS-KMS
   Key ID in `fs.s3a.encryption.key`.
+2. Verify that the output of test `TestAWSV2SDK` doesn't contain any unshaded 
classes.
 
 The dependency chain of the `hadoop-aws` module should be similar to this, 
albeit
 with different version numbers:
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/sdk/TestAWSV2SDK.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/sdk/TestAWSV2SDK.java
new file mode 100644
index 000..fca9fcc300c
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/sdk/TestAWSV2SDK.java
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.sdk;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Enumeration;
+import java.util.List;
+import java.util.jar.JarEntry;
+import java.util.jar.JarFile;
+
+import org.junit.Test;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Tests to verify AWS SDK based issues like duplicated shaded classes and 
others.
+ */
+public class TestAWSV2SDK extends AbstractHadoopTestBase {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(TestAWSV2SDK.class.getName());
+
+  @Test
+  public void testShadedClasses() throws IOException {
+String allClassPath = System.getProperty("java.class.path");
+LOG.debug("Current classpath:{}", allClassPath);
+String[] classPaths = allClassPath.split(File.pathSeparator);
+String v2ClassPath = null;
+for (String classPath : classPaths) {
+  //Checking for only version 2.x sdk here
+  if (classPath.contains("awssdk/bundle/2")) {
+v2ClassPath = classPath;
+break;
+  }
+}
+LOG.debug("AWS SDK V2 Classpath:{}", v2ClassPath);
+assertThat(v2ClassPath)
+.as("AWS V2 SDK should be present on the classpath").isNotNull();
+List listOfV2SdkClasses = getClassNamesFromJarFile(v2ClassPath);
+String awsSdkPrefix = "software/amazon/awssdk";
+List unshadedClasses = new ArrayList<&

(hadoop) branch trunk updated: HADOOP-19194:Add test to find unshaded dependencies in the aws sdk (#6865)

2024-06-24 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d3b98cb1b23 HADOOP-19194:Add test to find unshaded dependencies in the 
aws sdk (#6865)
d3b98cb1b23 is described below

commit d3b98cb1b23841a57b966c5cedab312687f098cb
Author: HarshitGupta11 <50410275+harshitgupt...@users.noreply.github.com>
AuthorDate: Mon Jun 24 15:11:11 2024 +0530

HADOOP-19194:Add test to find unshaded dependencies in the aws sdk (#6865)


The new test TestAWSV2SDK scans the aws sdk bundle.jar and prints out all
classes which are unshaded, and so at risk of creating classpath problems.

The test does not fail when unshaded classes are found, because the current
SDKs do ship with them; if it did, it would always fail.

The SDK upgrade process should include inspecting the output
of this test to see if it has got worse (do a before/after check).

Once the AWS SDK does shade everything, we can have this
test fail on any regression.

Contributed by Harshit Gupta
---
 .../src/site/markdown/tools/hadoop-aws/testing.md  |  1 +
 .../org/apache/hadoop/fs/sdk/TestAWSV2SDK.java | 94 ++
 2 files changed, 95 insertions(+)

diff --git 
a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md 
b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
index 45d1c847657..7222eee98ba 100644
--- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
+++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
@@ -1184,6 +1184,7 @@ your IDE or via maven.
 1. Run a full AWS-test suite with S3 client-side encryption enabled by
  setting `fs.s3a.encryption.algorithm` to 'CSE-KMS' and setting up AWS-KMS
   Key ID in `fs.s3a.encryption.key`.
+2. Verify that the output of test `TestAWSV2SDK` doesn't contain any unshaded 
classes.
 
 The dependency chain of the `hadoop-aws` module should be similar to this, 
albeit
 with different version numbers:
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/sdk/TestAWSV2SDK.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/sdk/TestAWSV2SDK.java
new file mode 100644
index 000..fca9fcc300c
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/sdk/TestAWSV2SDK.java
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.sdk;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Enumeration;
+import java.util.List;
+import java.util.jar.JarEntry;
+import java.util.jar.JarFile;
+
+import org.junit.Test;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/**
+ * Tests to verify AWS SDK based issues like duplicated shaded classes and 
others.
+ */
+public class TestAWSV2SDK extends AbstractHadoopTestBase {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(TestAWSV2SDK.class.getName());
+
+  @Test
+  public void testShadedClasses() throws IOException {
+String allClassPath = System.getProperty("java.class.path");
+LOG.debug("Current classpath:{}", allClassPath);
+String[] classPaths = allClassPath.split(File.pathSeparator);
+String v2ClassPath = null;
+for (String classPath : classPaths) {
+  //Checking for only version 2.x sdk here
+  if (classPath.contains("awssdk/bundle/2")) {
+v2ClassPath = classPath;
+break;
+  }
+}
+LOG.debug("AWS SDK V2 Classpath:{}", v2ClassPath);
+assertThat(v2ClassPath)
+.as("AWS V2 SDK should be present on the classpath").isNotNull();
+List listOfV2SdkClasses = getClassNamesFromJarFile(v2ClassPath);
+String awsSdkPrefix = "software/amazon/awssdk";
+List unshadedClasses = new ArrayList<>();
+for (String awsSdkCla

(hadoop) branch branch-3.4 updated: HADOOP-19203. WrappedIO BulkDelete API to raise IOEs as UncheckedIOExceptions (#6885)

2024-06-20 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new b8a390122fd HADOOP-19203. WrappedIO BulkDelete API to raise IOEs as 
UncheckedIOExceptions (#6885)
b8a390122fd is described below

commit b8a390122fd7a1c2d5baebeb60d7509bc22a1790
Author: Steve Loughran 
AuthorDate: Wed Jun 19 18:47:29 2024 +0100

HADOOP-19203. WrappedIO BulkDelete API to raise IOEs as 
UncheckedIOExceptions (#6885)

* WrappedIO methods raise UncheckedIOExceptions
* New class org.apache.hadoop.util.functional.FunctionalIO
 with wrap/unwrap and the ability to generate a
 java.util.function.Supplier around a CallableRaisingIOE.
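
For callers, the practical effect is that IOExceptions now surface wrapped in
UncheckedIOException, with the original exception recoverable via getCause().
Below is a hedged sketch of that wrap/unwrap round trip; the helper name
uncheck() is an illustrative stand-in for FunctionalIO.uncheckIOExceptions(),
whose exact signature is not shown in this mail:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.util.concurrent.Callable;

    final class UncheckedIOSketch {

      /** Illustrative stand-in for FunctionalIO.uncheckIOExceptions(). */
      static <T> T uncheck(Callable<T> operation) {
        try {
          return operation.call();
        } catch (IOException e) {
          throw new UncheckedIOException(e);  // wrap for lambda-friendly callers
        } catch (Exception e) {
          throw new RuntimeException(e);
        }
      }

      public static void main(String[] args) {
        try {
          int pageSize = uncheck(() -> { throw new IOException("simulated"); });
        } catch (UncheckedIOException e) {
          IOException cause = e.getCause();  // callers can still recover the IOE
          System.out.println("unwrapped: " + cause.getMessage());
        }
      }
    }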

Contributed by Steve Loughran
---
 .../org/apache/hadoop/io/wrappedio/WrappedIO.java  | 37 
 .../util/functional/CommonCallableSupplier.java|  5 +-
 .../hadoop/util/functional/FunctionalIO.java   | 99 ++
 .../hadoop/util/functional/TestFunctionalIO.java   | 97 +
 4 files changed, 221 insertions(+), 17 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java
index 286557c2c37..d6fe311fba8 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java
@@ -18,7 +18,7 @@
 
 package org.apache.hadoop.io.wrappedio;
 
-import java.io.IOException;
+import java.io.UncheckedIOException;
 import java.util.Collection;
 import java.util.List;
 import java.util.Map;
@@ -29,17 +29,19 @@ import org.apache.hadoop.fs.BulkDelete;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 
+import static 
org.apache.hadoop.util.functional.FunctionalIO.uncheckIOExceptions;
+
 /**
  * Reflection-friendly access to APIs which are not available in
  * some of the older Hadoop versions which libraries still
  * compile against.
  * 
  * The intent is to avoid the need for complex reflection operations
- * including wrapping of parameter classes, direct instatiation of
+ * including wrapping of parameter classes, direct instantiation of
  * new classes etc.
  */
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
+@InterfaceStability.Unstable
 public final class WrappedIO {
 
   private WrappedIO() {
@@ -52,12 +54,15 @@ public final class WrappedIO {
* @return a number greater than or equal to zero.
* @throws UnsupportedOperationException bulk delete under that path is not 
supported.
* @throws IllegalArgumentException path not valid.
-   * @throws IOException problems resolving paths
+   * @throws UncheckedIOException if an IOE was raised.
*/
-  public static int bulkDelete_pageSize(FileSystem fs, Path path) throws 
IOException {
-try (BulkDelete bulk = fs.createBulkDelete(path)) {
-  return bulk.pageSize();
-}
+  public static int bulkDelete_pageSize(FileSystem fs, Path path) {
+
+return uncheckIOExceptions(() -> {
+  try (BulkDelete bulk = fs.createBulkDelete(path)) {
+return bulk.pageSize();
+  }
+});
   }
 
   /**
@@ -79,15 +84,17 @@ public final class WrappedIO {
* @param paths list of paths which must be absolute and under the base path.
* @return a list of all the paths which couldn't be deleted for a reason 
other than "not found" and any associated error message.
* @throws UnsupportedOperationException bulk delete under that path is not 
supported.
-   * @throws IOException IO problems including networking, authentication and 
more.
+   * @throws UncheckedIOException if an IOE was raised.
* @throws IllegalArgumentException if a path argument is invalid.
*/
   public static List> bulkDelete_delete(FileSystem fs,
-Path base,
-
Collection paths)
-throws IOException {
-try (BulkDelete bulk = fs.createBulkDelete(base)) {
-  return bulk.bulkDelete(paths);
-}
+  Path base,
+  Collection paths) {
+
+return uncheckIOExceptions(() -> {
+  try (BulkDelete bulk = fs.createBulkDelete(base)) {
+return bulk.bulkDelete(paths);
+  }
+});
   }
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/functional/CommonCallableSupplier.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/functional/CommonCallableSupplier.java
index 67299ef96ae..7a3193efbf0 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/functional/CommonCallableSupplier.java
+++ 
b/hadoop-comm

(hadoop) branch branch-3.4 updated: HADOOP-19204. VectorIO regression: empty ranges are now rejected (#6887)

2024-06-19 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new 30aaad808ac HADOOP-19204. VectorIO regression: empty ranges are now 
rejected (#6887)
30aaad808ac is described below

commit 30aaad808acad193733293a6d39d1cb47e9a8e0b
Author: Steve Loughran 
AuthorDate: Wed Jun 19 12:05:24 2024 +0100

HADOOP-19204. VectorIO regression: empty ranges are now rejected (#6887)

- restore old outcome: no-op
- test this
- update spec

This is a critical fix for vector IO and MUST be cherrypicked to all 
branches with
that feature

Contributed by Steve Loughran
---
 .../src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java |  9 -
 .../src/site/markdown/filesystem/fsdatainputstream.md |  1 -
 .../hadoop/fs/contract/AbstractContractVectoredReadTest.java  | 11 +++
 .../java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java |  7 +++
 4 files changed, 22 insertions(+), 6 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
index 493b8c3a33d..fa0440620a4 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
@@ -294,7 +294,14 @@ public final class VectoredReadUtils {
   final Optional fileLength) throws EOFException {
 
 requireNonNull(input, "Null input list");
-checkArgument(!input.isEmpty(), "Empty input list");
+
+if (input.isEmpty()) {
+  // this may seem a pathological case, but it was valid
+  // before and somehow Spark can call it through parquet.
+  LOG.debug("Empty input list");
+  return input;
+}
+
 final List sortedRanges;
 
 if (input.size() == 1) {
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
index 6cbb54ea701..db844a94e39 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
@@ -474,7 +474,6 @@ No empty lists.
 
 ```python
 if ranges = null raise NullPointerException
-if ranges.len() = 0 raise IllegalArgumentException
 if allocate = null raise NullPointerException
 ```
 
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java
index d6a1fb1f0b7..aa478f3af63 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java
@@ -340,6 +340,17 @@ public abstract class AbstractContractVectoredReadTest 
extends AbstractFSContrac
 }
   }
 
+  @Test
+  public void testEmptyRanges() throws Exception {
+List fileRanges = new ArrayList<>();
+try (FSDataInputStream in = openVectorFile()) {
+  in.readVectored(fileRanges, allocate);
+  Assertions.assertThat(fileRanges)
+  .describedAs("Empty ranges must stay empty")
+  .isEmpty();
+}
+  }
+
   /**
* Test to validate EOF ranges.
* 
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
index 2a290058cae..3fd3fe4d1f4 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
@@ -702,12 +702,11 @@ public class TestVectoredReadUtils extends HadoopTestBase 
{
   }
 
   /**
-   * Empty ranges cannot be sorted.
+   * Empty ranges are allowed.
*/
   @Test
-  public void testEmptyRangesRaisesIllegalArgument() throws Throwable {
-intercept(IllegalArgumentException.class,
-() -> validateAndSortRanges(Collections.emptyList(), 
Optional.empty()));
+  public void testEmptyRangesAllowed() throws Throwable {
+validateAndSortRanges(Collections.emptyList(), Optional.empty());
   }
 
   /**





(hadoop) branch trunk updated: HADOOP-19203. WrappedIO BulkDelete API to raise IOEs as UncheckedIOExceptions (#6885)

2024-06-19 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 8ac9c1839ac HADOOP-19203. WrappedIO BulkDelete API to raise IOEs as 
UncheckedIOExceptions (#6885)
8ac9c1839ac is described below

commit 8ac9c1839acba61d73fb2c9109e3c5b8cefb33c0
Author: Steve Loughran 
AuthorDate: Wed Jun 19 18:47:29 2024 +0100

HADOOP-19203. WrappedIO BulkDelete API to raise IOEs as 
UncheckedIOExceptions (#6885)



* WrappedIO methods raise UncheckedIOExceptions
* New class org.apache.hadoop.util.functional.FunctionalIO
 with wrap/unwrap and the ability to generate a
 java.util.function.Supplier around a CallableRaisingIOE.

Contributed by Steve Loughran
---
 .../org/apache/hadoop/io/wrappedio/WrappedIO.java  | 37 
 .../util/functional/CommonCallableSupplier.java|  5 +-
 .../hadoop/util/functional/FunctionalIO.java   | 99 ++
 .../hadoop/util/functional/TestFunctionalIO.java   | 97 +
 4 files changed, 221 insertions(+), 17 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java
index 286557c2c37..d6fe311fba8 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java
@@ -18,7 +18,7 @@
 
 package org.apache.hadoop.io.wrappedio;
 
-import java.io.IOException;
+import java.io.UncheckedIOException;
 import java.util.Collection;
 import java.util.List;
 import java.util.Map;
@@ -29,17 +29,19 @@ import org.apache.hadoop.fs.BulkDelete;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 
+import static 
org.apache.hadoop.util.functional.FunctionalIO.uncheckIOExceptions;
+
 /**
  * Reflection-friendly access to APIs which are not available in
  * some of the older Hadoop versions which libraries still
  * compile against.
  * 
  * The intent is to avoid the need for complex reflection operations
- * including wrapping of parameter classes, direct instatiation of
+ * including wrapping of parameter classes, direct instantiation of
  * new classes etc.
  */
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
+@InterfaceStability.Unstable
 public final class WrappedIO {
 
   private WrappedIO() {
@@ -52,12 +54,15 @@ public final class WrappedIO {
* @return a number greater than or equal to zero.
* @throws UnsupportedOperationException bulk delete under that path is not 
supported.
* @throws IllegalArgumentException path not valid.
-   * @throws IOException problems resolving paths
+   * @throws UncheckedIOException if an IOE was raised.
*/
-  public static int bulkDelete_pageSize(FileSystem fs, Path path) throws 
IOException {
-try (BulkDelete bulk = fs.createBulkDelete(path)) {
-  return bulk.pageSize();
-}
+  public static int bulkDelete_pageSize(FileSystem fs, Path path) {
+
+return uncheckIOExceptions(() -> {
+  try (BulkDelete bulk = fs.createBulkDelete(path)) {
+return bulk.pageSize();
+  }
+});
   }
 
   /**
@@ -79,15 +84,17 @@ public final class WrappedIO {
* @param paths list of paths which must be absolute and under the base path.
* @return a list of all the paths which couldn't be deleted for a reason 
other than "not found" and any associated error message.
* @throws UnsupportedOperationException bulk delete under that path is not 
supported.
-   * @throws IOException IO problems including networking, authentication and 
more.
+   * @throws UncheckedIOException if an IOE was raised.
* @throws IllegalArgumentException if a path argument is invalid.
*/
   public static List> bulkDelete_delete(FileSystem fs,
-Path base,
-
Collection paths)
-throws IOException {
-try (BulkDelete bulk = fs.createBulkDelete(base)) {
-  return bulk.bulkDelete(paths);
-}
+  Path base,
+  Collection paths) {
+
+return uncheckIOExceptions(() -> {
+  try (BulkDelete bulk = fs.createBulkDelete(base)) {
+return bulk.bulkDelete(paths);
+  }
+});
   }
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/functional/CommonCallableSupplier.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/functional/CommonCallableSupplier.java
index 67299ef96ae..7a3193efbf0 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/functional/CommonCallableSupplier.java
+++ 
b/hadoop-comm

(hadoop) branch trunk updated: HADOOP-19204. VectorIO regression: empty ranges are now rejected (#6887)

2024-06-19 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 56c8aa5f1c4 HADOOP-19204. VectorIO regression: empty ranges are now 
rejected (#6887)
56c8aa5f1c4 is described below

commit 56c8aa5f1c4a0336f69083c742e2504ccc828d7d
Author: Steve Loughran 
AuthorDate: Wed Jun 19 12:05:24 2024 +0100

HADOOP-19204. VectorIO regression: empty ranges are now rejected (#6887)



- restore old outcome: no-op
- test this
- update spec

This is a critical fix for vector IO and MUST be cherrypicked to all 
branches with
that feature

Contributed by Steve Loughran
---
 .../src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java |  9 -
 .../src/site/markdown/filesystem/fsdatainputstream.md |  1 -
 .../hadoop/fs/contract/AbstractContractVectoredReadTest.java  | 11 +++
 .../java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java |  7 +++
 4 files changed, 22 insertions(+), 6 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
index 493b8c3a33d..fa0440620a4 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java
@@ -294,7 +294,14 @@ public final class VectoredReadUtils {
   final Optional fileLength) throws EOFException {
 
 requireNonNull(input, "Null input list");
-checkArgument(!input.isEmpty(), "Empty input list");
+
+if (input.isEmpty()) {
+  // this may seem a pathological case, but it was valid
+  // before and somehow Spark can call it through parquet.
+  LOG.debug("Empty input list");
+  return input;
+}
+
 final List sortedRanges;
 
 if (input.size() == 1) {
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
index 6cbb54ea701..db844a94e39 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md
@@ -474,7 +474,6 @@ No empty lists.
 
 ```python
 if ranges = null raise NullPointerException
-if ranges.len() = 0 raise IllegalArgumentException
 if allocate = null raise NullPointerException
 ```
 
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java
index d6a1fb1f0b7..aa478f3af63 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java
@@ -340,6 +340,17 @@ public abstract class AbstractContractVectoredReadTest 
extends AbstractFSContrac
 }
   }
 
+  @Test
+  public void testEmptyRanges() throws Exception {
+List fileRanges = new ArrayList<>();
+try (FSDataInputStream in = openVectorFile()) {
+  in.readVectored(fileRanges, allocate);
+  Assertions.assertThat(fileRanges)
+  .describedAs("Empty ranges must stay empty")
+  .isEmpty();
+}
+  }
+
   /**
* Test to validate EOF ranges.
* 
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
index 2a290058cae..3fd3fe4d1f4 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestVectoredReadUtils.java
@@ -702,12 +702,11 @@ public class TestVectoredReadUtils extends HadoopTestBase 
{
   }
 
   /**
-   * Empty ranges cannot be sorted.
+   * Empty ranges are allowed.
*/
   @Test
-  public void testEmptyRangesRaisesIllegalArgument() throws Throwable {
-intercept(IllegalArgumentException.class,
-() -> validateAndSortRanges(Collections.emptyList(), 
Optional.empty()));
+  public void testEmptyRangesAllowed() throws Throwable {
+validateAndSortRanges(Collections.emptyList(), Optional.empty());
   }
 
   /**





(hadoop) branch branch-3.4 updated: HADOOP-18508. S3A: Support parallel integration test runs on same bucket (#5081)

2024-06-14 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new 3173b72aeaa HADOOP-18508. S3A: Support parallel integration test runs 
on same bucket (#5081)
3173b72aeaa is described below

commit 3173b72aeaa13c074f16667af7391f8c1d14bb4f
Author: Steve Loughran 
AuthorDate: Fri Jun 14 19:34:52 2024 +0100

HADOOP-18508. S3A: Support parallel integration test runs on same bucket 
(#5081)

It is now possible to provide a job ID in the maven "job.id" property;
hadoop-aws test runs use it to isolate paths under the test bucket
under which all tests will be executed.

This will allow independent builds *in different source trees*
to test against the same bucket in parallel, and is designed for
CI testing.

Example:

mvn verify -Dparallel-tests -Droot.tests.enabled=false -Djob.id=1
mvn verify -Droot.tests.enabled=false -Djob.id=2

- Root tests must be disabled to stop them cleaning up
  the test paths of other test runs.
- Do still regularly run the root tests just to force cleanup
  of the output of any interrupted test suites.

Contributed by Steve Loughran
---
 .../apache/hadoop/fs/FSMainOperationsBaseTest.java |  6 +-
 .../fs/FileContextMainOperationsBaseTest.java  |  8 +-
 .../fs/TestFSMainOperationsLocalFileSystem.java| 26 +-
 .../TestFSMainOperationsLocalFileSystem.java   |  9 ---
 hadoop-tools/hadoop-aws/pom.xml| 27 ---
 .../src/site/markdown/tools/hadoop-aws/testing.md  | 49 +++-
 .../fs/contract/s3a/ITestS3AContractRootDir.java   |  8 ++
 .../hadoop/fs/s3a/ITestS3AConfiguration.java   |  6 +-
 .../hadoop/fs/s3a/ITestS3AEncryptionSSEC.java  | 92 +-
 .../org/apache/hadoop/fs/s3a/S3ATestConstants.java | 12 +++
 .../org/apache/hadoop/fs/s3a/S3ATestUtils.java | 12 ++-
 .../fs/s3a/commit/terasort/ITestTerasortOnS3A.java | 22 --
 .../ITestS3AFileContextMainOperations.java | 30 +--
 .../fs/s3a/scale/AbstractSTestS3AHugeFiles.java| 10 +++
 .../hadoop/fs/s3a/scale/ITestS3AConcurrentOps.java |  5 +-
 .../hadoop/fs/s3a/scale/S3AScaleTestBase.java  |  2 +-
 .../s3a/tools/ITestMarkerToolRootOperations.java   |  2 +
 .../org/apache/hadoop/fs/s3a/yarn/ITestS3A.java| 41 +++---
 18 files changed, 197 insertions(+), 170 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java
index f0c00c4cdee..07f0e816193 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java
@@ -102,7 +102,9 @@ public abstract class FSMainOperationsBaseTest extends 
FileSystemTestHelper {
   
   @After
   public void tearDown() throws Exception {
-fSys.delete(new Path(getAbsoluteTestRootPath(fSys), new Path("test")), 
true);
+if (fSys != null) {
+  fSys.delete(new Path(getAbsoluteTestRootPath(fSys), new Path("test")), 
true);
+}
   }
   
   
@@ -192,7 +194,7 @@ public abstract class FSMainOperationsBaseTest extends 
FileSystemTestHelper {
   
   @Test
   public void testWDAbsolute() throws IOException {
-Path absoluteDir = new Path(fSys.getUri() + "/test/existingDir");
+Path absoluteDir = getTestRootPath(fSys, "test/existingDir");
 fSys.mkdirs(absoluteDir);
 fSys.setWorkingDirectory(absoluteDir);
 Assert.assertEquals(absoluteDir, fSys.getWorkingDirectory());
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java
index 4c90490b090..6897a0d1943 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java
@@ -81,6 +81,12 @@ public abstract class FileContextMainOperationsBaseTest  {
   protected final FileContextTestHelper fileContextTestHelper =
 createFileContextHelper();
 
+  /**
+   * Create the test helper.
+   * Important: this is invoked during the construction of the base class,
+   * so is very brittle.
+   * @return a test helper.
+   */
   protected FileContextTestHelper createFileContextHelper() {
 return new FileContextTestHelper();
   }
@@ -107,7 +113,7 @@ public abstract class FileContextMainOperationsBaseTest  {
   
   private static final byte[] data = getFileData(numBlock

(hadoop) branch trunk updated: HADOOP-18508. S3A: Support parallel integration test runs on same bucket (#5081)

2024-06-14 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2d5fa9e016d HADOOP-18508. S3A: Support parallel integration test runs 
on same bucket (#5081)
2d5fa9e016d is described below

commit 2d5fa9e016d0c16dcd472c6bac18420c6d97118a
Author: Steve Loughran 
AuthorDate: Fri Jun 14 19:34:52 2024 +0100

HADOOP-18508. S3A: Support parallel integration test runs on same bucket 
(#5081)


It is now possible to provide a job ID in the maven "job.id" property;
hadoop-aws test runs use it to isolate paths under the test bucket
under which all tests will be executed.

This will allow independent builds *in different source trees*
to test against the same bucket in parallel, and is designed for
CI testing.

Example:

mvn verify -Dparallel-tests -Droot.tests.enabled=false -Djob.id=1
mvn verify -Droot.tests.enabled=false -Djob.id=2

- Root tests must be disabled to stop them cleaning up
  the test paths of other test runs.
- Do still regularly run the root tests just to force cleanup
  of the output of any interrupted test suites.

Contributed by Steve Loughran
---
 .../apache/hadoop/fs/FSMainOperationsBaseTest.java |  6 +-
 .../fs/FileContextMainOperationsBaseTest.java  |  8 +-
 .../fs/TestFSMainOperationsLocalFileSystem.java| 26 +-
 .../TestFSMainOperationsLocalFileSystem.java   |  9 ---
 hadoop-tools/hadoop-aws/pom.xml| 27 ---
 .../src/site/markdown/tools/hadoop-aws/testing.md  | 49 +++-
 .../fs/contract/s3a/ITestS3AContractRootDir.java   |  8 ++
 .../hadoop/fs/s3a/ITestS3AConfiguration.java   |  6 +-
 .../hadoop/fs/s3a/ITestS3AEncryptionSSEC.java  | 92 +-
 .../org/apache/hadoop/fs/s3a/S3ATestConstants.java | 12 +++
 .../org/apache/hadoop/fs/s3a/S3ATestUtils.java | 12 ++-
 .../fs/s3a/commit/terasort/ITestTerasortOnS3A.java | 22 --
 .../ITestS3AFileContextMainOperations.java | 30 +--
 .../fs/s3a/scale/AbstractSTestS3AHugeFiles.java| 10 +++
 .../hadoop/fs/s3a/scale/ITestS3AConcurrentOps.java |  5 +-
 .../hadoop/fs/s3a/scale/S3AScaleTestBase.java  |  2 +-
 .../s3a/tools/ITestMarkerToolRootOperations.java   |  2 +
 .../org/apache/hadoop/fs/s3a/yarn/ITestS3A.java| 41 +++---
 18 files changed, 197 insertions(+), 170 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java
index f0c00c4cdee..07f0e816193 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java
@@ -102,7 +102,9 @@ public abstract class FSMainOperationsBaseTest extends 
FileSystemTestHelper {
   
   @After
   public void tearDown() throws Exception {
-    fSys.delete(new Path(getAbsoluteTestRootPath(fSys), new Path("test")), true);
+    if (fSys != null) {
+      fSys.delete(new Path(getAbsoluteTestRootPath(fSys), new Path("test")), true);
+    }
   }
   
   
@@ -192,7 +194,7 @@ public abstract class FSMainOperationsBaseTest extends 
FileSystemTestHelper {
   
   @Test
   public void testWDAbsolute() throws IOException {
-Path absoluteDir = new Path(fSys.getUri() + "/test/existingDir");
+Path absoluteDir = getTestRootPath(fSys, "test/existingDir");
 fSys.mkdirs(absoluteDir);
 fSys.setWorkingDirectory(absoluteDir);
 Assert.assertEquals(absoluteDir, fSys.getWorkingDirectory());
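The rationale, as read from the diff (the commit message does not spell it
out): building the directory from fSys.getUri() pins every run to the same
bucket-root path, which defeats per-run isolation; the replacement resolves
under the test root. A sketch of the difference:

    // Before: the same absolute path for every build, so parallel runs collide.
    Path shared = new Path(fSys.getUri() + "/test/existingDir");
    // After: resolved under this run's (possibly job-scoped) test root.
    Path isolated = getTestRootPath(fSys, "test/existingDir");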
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java
index 4c90490b090..6897a0d1943 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java
@@ -81,6 +81,12 @@ public abstract class FileContextMainOperationsBaseTest  {
   protected final FileContextTestHelper fileContextTestHelper =
 createFileContextHelper();
 
+  /**
+   * Create the test helper.
+   * Important: this is invoked during the construction of the base class,
+   * so is very brittle.
+   * @return a test helper.
+   */
   protected FileContextTestHelper createFileContextHelper() {
 return new FileContextTestHelper();
   }
@@ -107,7 +113,7 @@ public abstract class FileContextMainOperationsBaseTest  {
   
   private static final byte[] data = getFileData(numBlock
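Why that javadoc calls the helper brittle: it is the classic Java hazard of
calling an overridable method during construction, sketched below with
hypothetical classes (not code from the patch). The base class initializes a
final field via the overridable method, which therefore runs before any
subclass field initializer:

    class Base {
      // Runs while Base is being constructed, before Sub's initializers.
      final Object helper = createHelper();

      Object createHelper() {
        return new Object();
      }
    }

    class Sub extends Base {
      String prefix = "custom"; // still null when createHelper() is invoked

      @Override
      Object createHelper() {
        return prefix.toUpperCase(); // NullPointerException at construction
      }
    }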

(hadoop) branch branch-3.4 updated: HADOOP-18931. FileSystem.getFileSystemClass() to log the jar the .class came from (#6197)

2024-06-14 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new acec539ab33 HADOOP-18931. FileSystem.getFileSystemClass() to log the 
jar the .class came from (#6197)
acec539ab33 is described below

commit acec539ab3337129ad3e57ee562330720ba870b8
Author: Viraj Jasani 
AuthorDate: Fri Jun 14 10:14:54 2024 -0800

HADOOP-18931. FileSystem.getFileSystemClass() to log the jar the .class 
came from (#6197)

Set the log level of logger org.apache.hadoop.fs.FileSystem to DEBUG to see 
this.
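One way to turn that on programmatically (a sketch assuming the log4j 1.x
API behind Hadoop's default logging backend; the log4j.properties equivalent
would be a log4j.logger.org.apache.hadoop.fs.FileSystem=DEBUG line):

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    // Enable the new jar/class location diagnostics before the first lookup.
    Logger.getLogger("org.apache.hadoop.fs.FileSystem").setLevel(Level.DEBUG);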

Contributed by Viraj Jasani
---
 .../main/java/org/apache/hadoop/fs/FileSystem.java | 10 -
 .../java/org/apache/hadoop/util/ClassUtil.java | 22 ---
 .../java/org/apache/hadoop/util/TestClassUtil.java | 44 +-
 3 files changed, 61 insertions(+), 15 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index 2155e17328a..38ec6114517 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -3581,7 +3581,15 @@ public abstract class FileSystem extends Configured
   throw new UnsupportedFileSystemException("No FileSystem for scheme "
   + "\"" + scheme + "\"");
 }
-LOGGER.debug("FS for {} is {}", scheme, clazz);
+if (LOGGER.isDebugEnabled()) {
+  LOGGER.debug("FS for {} is {}", scheme, clazz);
+  final String jarLocation = ClassUtil.findContainingJar(clazz);
+  if (jarLocation != null) {
+LOGGER.debug("Jar location for {} : {}", clazz, jarLocation);
+  } else {
+LOGGER.debug("Class location for {} : {}", clazz, 
ClassUtil.findClassLocation(clazz));
+  }
+}
 return clazz;
   }
 
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ClassUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ClassUtil.java
index 44c94669f51..c17445c57ce 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ClassUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ClassUtil.java
@@ -36,13 +36,25 @@ public class ClassUtil {
* @return a jar file that contains the class, or null.
*/
   public static String findContainingJar(Class clazz) {
-    ClassLoader loader = clazz.getClassLoader();
-    String classFile = clazz.getName().replaceAll("\\.", "/") + ".class";
+    return findContainingResource(clazz.getClassLoader(), clazz.getName(), "jar");
+  }
+
+  /**
+   * Find the absolute location of the class.
+   *
+   * @param clazz the class to find.
+   * @return the class file with absolute location, or null.
+   */
+  public static String findClassLocation(Class clazz) {
+    return findContainingResource(clazz.getClassLoader(), clazz.getName(), "file");
+  }
+
+  private static String findContainingResource(ClassLoader loader, String clazz, String resource) {
+    String classFile = clazz.replaceAll("\\.", "/") + ".class";
     try {
-      for(final Enumeration itr = loader.getResources(classFile);
-          itr.hasMoreElements();) {
+      for (final Enumeration itr = loader.getResources(classFile); itr.hasMoreElements();) {
         final URL url = itr.nextElement();
-        if ("jar".equals(url.getProtocol())) {
+        if (resource.equals(url.getProtocol())) {
           String toReturn = url.getPath();
           if (toReturn.startsWith("file:")) {
             toReturn = toReturn.substring("file:".length());
 toReturn = toReturn.substring("file:".length());
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestClassUtil.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestClassUtil.java
index 04337929abd..3a7e12e8f03 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestClassUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestClassUtil.java
@@ -20,21 +20,47 @@ package org.apache.hadoop.util;
 
 import java.io.File;
 
-import org.junit.Assert;
+import org.apache.hadoop.fs.viewfs.ViewFileSystem;
 
-import org.apache.log4j.Logger;
+import org.assertj.core.api.Assertions;
 import org.junit.Test;
 
 public class TestClassUtil {
+
   @Test(timeout=1)
   public void testFindContainingJar() {
-String containingJar = ClassUtil.findContainingJar(Logger.class);
-Assert.assertNotNull("Containing jar no

(hadoop) branch trunk updated (2bde5ccb813 -> 240fddcf17f)

2024-06-14 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 2bde5ccb813 HADOOP-19192. Log level is WARN when fail to load native 
hadoop libs (#6863)
 add 240fddcf17f HADOOP-18931. FileSystem.getFileSystemClass() to log the 
jar the .class came from (#6197)

No new revisions were added by this update.

Summary of changes:
 .../main/java/org/apache/hadoop/fs/FileSystem.java | 10 -
 .../java/org/apache/hadoop/util/ClassUtil.java | 22 ---
 .../java/org/apache/hadoop/util/TestClassUtil.java | 44 +-
 3 files changed, 61 insertions(+), 15 deletions(-)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch branch-3.4 updated: HADOOP-19192. Log level is WARN when fail to load native hadoop libs (#6863)

2024-06-14 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new 4eeb10318f0 HADOOP-19192. Log level is WARN when fail to load native 
hadoop libs (#6863)
4eeb10318f0 is described below

commit 4eeb10318f0ee166446bb2bc9f311f3012a8e9d4
Author: Cheng Pan 
AuthorDate: Sat Jun 15 02:05:27 2024 +0800

HADOOP-19192. Log level is WARN when fail to load native hadoop libs (#6863)

Updates the documentation to be consistent with the logging.

Contributed by Cheng Pan
---
 .../hadoop-common/src/site/markdown/NativeLibraries.md.vm   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm 
b/hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm
index 9756c42340d..a5d93a60e07 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm
@@ -104,7 +104,7 @@ The bin/hadoop script ensures that the native hadoop 
library is on the library p
 During runtime, check the hadoop log files for your MapReduce tasks.
 
 * If everything is all right, then: `DEBUG util.NativeCodeLoader - Trying to 
load the custom-built native-hadoop library...` `INFO util.NativeCodeLoader - 
Loaded the native-hadoop library`
-* If something goes wrong, then: `INFO util.NativeCodeLoader - Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable`
+* If something goes wrong, then: `WARN util.NativeCodeLoader - Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable`
 
 Check
 -
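For a programmatic check instead of log-grepping (a sketch; NativeCodeLoader
is a public class in org.apache.hadoop.util):

    import org.apache.hadoop.util.NativeCodeLoader;

    // Mirrors the WARN above: false means the builtin-java classes are in use.
    if (!NativeCodeLoader.isNativeCodeLoaded()) {
      System.err.println("native-hadoop library not loaded");
    }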


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch trunk updated: HADOOP-19192. Log level is WARN when fail to load native hadoop libs (#6863)

2024-06-14 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2bde5ccb813 HADOOP-19192. Log level is WARN when fail to load native 
hadoop libs (#6863)
2bde5ccb813 is described below

commit 2bde5ccb8139aa798eedf554bf52007895777595
Author: Cheng Pan 
AuthorDate: Sat Jun 15 02:05:27 2024 +0800

HADOOP-19192. Log level is WARN when fail to load native hadoop libs (#6863)


Updates the documentation to be consistent with the logging.

Contributed by Cheng Pan
---
 .../hadoop-common/src/site/markdown/NativeLibraries.md.vm   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm 
b/hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm
index 9756c42340d..a5d93a60e07 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm
@@ -104,7 +104,7 @@ The bin/hadoop script ensures that the native hadoop 
library is on the library p
 During runtime, check the hadoop log files for your MapReduce tasks.
 
 * If everything is all right, then: `DEBUG util.NativeCodeLoader - Trying to 
load the custom-built native-hadoop library...` `INFO util.NativeCodeLoader - 
Loaded the native-hadoop library`
-* If something goes wrong, then: `INFO util.NativeCodeLoader - Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable`
+* If something goes wrong, then: `WARN util.NativeCodeLoader - Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable`
 
 Check
 -


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch trunk updated: HADOOP-19163. Use hadoop-shaded-protobuf_3_25 (#6858)

2024-06-11 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new bb30545583c HADOOP-19163. Use hadoop-shaded-protobuf_3_25 (#6858)
bb30545583c is described below

commit bb30545583c5c78199143d9cb9dd84cd3dfa8068
Author: PJ Fanning 
AuthorDate: Tue Jun 11 17:10:00 2024 +0100

HADOOP-19163. Use hadoop-shaded-protobuf_3_25 (#6858)


Contributed by PJ Fanning
---
 hadoop-common-project/hadoop-common/pom.xml | 2 +-
 hadoop-project/pom.xml  | 2 +-
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml | 6 +++---
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/pom.xml 
b/hadoop-common-project/hadoop-common/pom.xml
index a7dcbb24a9b..7521cec6a1d 100644
--- a/hadoop-common-project/hadoop-common/pom.xml
+++ b/hadoop-common-project/hadoop-common/pom.xml
@@ -40,7 +40,7 @@
   
 
       <groupId>org.apache.hadoop.thirdparty</groupId>
-      <artifactId>hadoop-shaded-protobuf_3_23</artifactId>
+      <artifactId>hadoop-shaded-protobuf_3_25</artifactId>
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop</groupId>
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 5b63dd129d6..be0f58aef63 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -250,7 +250,7 @@
   
   
        <groupId>org.apache.hadoop.thirdparty</groupId>
-       <artifactId>hadoop-shaded-protobuf_3_23</artifactId>
+       <artifactId>hadoop-shaded-protobuf_3_25</artifactId>
        <version>${hadoop-thirdparty-protobuf.version}</version>
   
   
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
index 217335323a7..7cef2ec4db3 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
@@ -51,7 +51,7 @@
   
 
       <groupId>org.apache.hadoop.thirdparty</groupId>
-      <artifactId>hadoop-shaded-protobuf_3_23</artifactId>
+      <artifactId>hadoop-shaded-protobuf_3_25</artifactId>
 
   
 
@@ -64,7 +64,7 @@
 
 
       <groupId>org.apache.hadoop.thirdparty</groupId>
-      <artifactId>hadoop-shaded-protobuf_3_23</artifactId>
+      <artifactId>hadoop-shaded-protobuf_3_25</artifactId>
 
 
 
@@ -75,7 +75,7 @@
   
 
       <groupId>org.apache.hadoop.thirdparty</groupId>
-      <artifactId>hadoop-shaded-protobuf_3_23</artifactId>
+      <artifactId>hadoop-shaded-protobuf_3_25</artifactId>
 
   
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch branch-3.3 updated: HADOOP-19116. Update to zookeeper client 3.8.4 due to CVE-2024-23944. (#6638)

2024-06-11 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new bd63358c0bb HADOOP-19116. Update to zookeeper client 3.8.4 due to 
CVE-2024-23944. (#6638)
bd63358c0bb is described below

commit bd63358c0bb53bb1097f38ebf6c125547fe5e547
Author: PJ Fanning 
AuthorDate: Tue Jun 11 13:09:23 2024 +0100

HADOOP-19116. Update to zookeeper client 3.8.4 due to CVE-2024-23944. 
(#6638)


Updated ZK client dependency to 3.8.4 to address CVE-2024-23944.

Contributed by PJ Fanning
---
 LICENSE-binary |  2 +-
 hadoop-project/pom.xml | 18 +-
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 890f1a75f38..30fe3f701c4 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -341,7 +341,7 @@ org.apache.kerby:kerby-util:1.0.1
 org.apache.kerby:kerby-xdr:1.0.1
 org.apache.kerby:token-provider:1.0.1
 org.apache.yetus:audience-annotations:0.5.0
-org.apache.zookeeper:zookeeper:3.7.2
+org.apache.zookeeper:zookeeper:3.8.4
 org.codehaus.jettison:jettison:1.5.4
 org.eclipse.jetty:jetty-annotations:9.4.53.v20231009
 org.eclipse.jetty:jetty-http:9.4.53.v20231009
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 13318c07cb8..1c08648eeb1 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -105,7 +105,7 @@
 
${hadoop-thirdparty-shaded-prefix}.protobuf
 
${hadoop-thirdparty-shaded-prefix}.com.google.common
 
-3.7.2
+3.8.4
 5.2.0
 3.0.5
 2.1.7
@@ -1415,6 +1415,14 @@
           <groupId>log4j</groupId>
           <artifactId>log4j</artifactId>
         </exclusion>
+        <exclusion>
+          <groupId>ch.qos.logback</groupId>
+          <artifactId>logback-core</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>ch.qos.logback</groupId>
+          <artifactId>logback-classic</artifactId>
+        </exclusion>
         <exclusion>
           <groupId>org.slf4j</groupId>
           <artifactId>slf4j-api</artifactId>
@@ -1463,6 +1471,14 @@
           <groupId>log4j</groupId>
           <artifactId>log4j</artifactId>
         </exclusion>
+        <exclusion>
+          <groupId>ch.qos.logback</groupId>
+          <artifactId>logback-core</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>ch.qos.logback</groupId>
+          <artifactId>logback-classic</artifactId>
+        </exclusion>
         <exclusion>
           <groupId>org.slf4j</groupId>
           <artifactId>slf4j-log4j12</artifactId>


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch branch-3.4 updated: HADOOP-19189. ITestS3ACommitterFactory failing (#6857)

2024-06-07 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new 5826b15fe6a8 HADOOP-19189. ITestS3ACommitterFactory failing (#6857)
5826b15fe6a8 is described below

commit 5826b15fe6a8b2723b084e042b098c99cfca8edb
Author: Steve Loughran 
AuthorDate: Fri Jun 7 17:34:01 2024 +0100

HADOOP-19189. ITestS3ACommitterFactory failing (#6857)

* parameterize the test run rather than do it from within the test suite.
* log what the committer factory is up to (and improve its logging)
* close all filesystems, then create the test filesystem with cache enabled.

The cache is critical: we want the fs from the cache to be used when querying
filesystem properties, rather than one created from the committer jobconf,
which would have the same options as the task committer and so would not
actually validate the override logic.
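A sketch of the resolution order being tested (fs.s3a.committer.name is the
key behind the FS_S3A_COMMITTER_NAME constant; the snippet is illustrative,
not the factory's exact code):

    // Task configuration wins over filesystem configuration;
    // an empty value now falls back to the classic file committer.
    String name = fsConf.getTrimmed("fs.s3a.committer.name", "");
    name = taskConf.getTrimmed("fs.s3a.committer.name", name);
    if (name.isEmpty()) {
      name = "file";
    }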

Contributed by Steve Loughran
---
 .../fs/s3a/commit/AbstractS3ACommitterFactory.java |   5 +-
 .../hadoop/fs/s3a/commit/S3ACommitterFactory.java  |   7 +-
 .../fs/s3a/commit/ITestS3ACommitterFactory.java| 234 +
 .../hadoop-aws/src/test/resources/log4j.properties |   2 +
 4 files changed, 151 insertions(+), 97 deletions(-)

diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitterFactory.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitterFactory.java
index 6e7a99f50ef9..cbbe5fdc602d 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitterFactory.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitterFactory.java
@@ -51,9 +51,10 @@ public abstract class AbstractS3ACommitterFactory
   throw new PathCommitException(outputPath,
   "Filesystem not supported by this committer");
 }
-LOG.info("Using Committer {} for {}",
+LOG.info("Using Committer {} for {} created by {}",
 outputCommitter,
-outputPath);
+outputPath,
+this);
 return outputCommitter;
   }
 
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/S3ACommitterFactory.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/S3ACommitterFactory.java
index 36d0af187d3c..7f5455b6098d 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/S3ACommitterFactory.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/S3ACommitterFactory.java
@@ -113,11 +113,14 @@ public class S3ACommitterFactory extends 
AbstractS3ACommitterFactory {
 // job/task configurations.
 Configuration fsConf = fileSystem.getConf();
 
-    String name = fsConf.getTrimmed(FS_S3A_COMMITTER_NAME, COMMITTER_NAME_FILE);
+String name = fsConf.getTrimmed(FS_S3A_COMMITTER_NAME, "");
+LOG.debug("Committer from filesystems \"{}\"", name);
+
 name = taskConf.getTrimmed(FS_S3A_COMMITTER_NAME, name);
-LOG.debug("Committer option is {}", name);
+LOG.debug("Committer option is \"{}\"", name);
 switch (name) {
 case COMMITTER_NAME_FILE:
+case "":
   factory = null;
   break;
 case COMMITTER_NAME_DIRECTORY:
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestS3ACommitterFactory.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestS3ACommitterFactory.java
index 2ad2568d5cc2..2561a69f60b5 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestS3ACommitterFactory.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestS3ACommitterFactory.java
@@ -19,15 +19,24 @@
 package org.apache.hadoop.fs.s3a.commit;
 
 import java.io.IOException;
+import java.util.Arrays;
+import java.util.Collection;
 
 import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathIOException;
 import org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter;
 import org.apache.hadoop.fs.s3a.commit.staging.DirectoryStagingCommitter;
 import org.apache.hadoop.fs.s3a.commit.staging.PartitionedStagingCommitter;
 import org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter;
+import org.apache.hadoop.mapred.JobConf;
 import org.apache.hadoop.mapreduce.MRJobConfig;
 import org.apache.hadoop.mapreduce.TaskAttemptContext;
 import org.apache.hadoop.mapreduce.TaskAttemp

(hadoop) branch branch-3.3 updated: HADOOP-19178: [WASB Deprecation] Updating Documentation on Upcoming Plans for Hadoop-Azure (#6862)

2024-06-07 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 0c3c9d199702 HADOOP-19178: [WASB Deprecation] Updating Documentation 
on Upcoming Plans for Hadoop-Azure (#6862)
0c3c9d199702 is described below

commit 0c3c9d199702488e9dc971107cb1e6796fe87a8c
Author: Anuj Modi <128447756+anujmodi2...@users.noreply.github.com>
AuthorDate: Fri Jun 7 18:58:24 2024 +0530

HADOOP-19178: [WASB Deprecation] Updating Documentation on Upcoming Plans 
for Hadoop-Azure (#6862)

Contributed by Anuj Modi
---
 .../hadoop-azure/src/site/markdown/index.md|  1 +
 .../hadoop-azure/src/site/markdown/wasb.md | 97 ++
 2 files changed, 98 insertions(+)

diff --git a/hadoop-tools/hadoop-azure/src/site/markdown/index.md 
b/hadoop-tools/hadoop-azure/src/site/markdown/index.md
index 2af6b498a274..eba49967cbc1 100644
--- a/hadoop-tools/hadoop-azure/src/site/markdown/index.md
+++ b/hadoop-tools/hadoop-azure/src/site/markdown/index.md
@@ -18,6 +18,7 @@
 
 See also:
 
+* [WASB](./wasb.html)
 * [ABFS](./abfs.html)
 * [Testing](./testing_azure.html)
 
diff --git a/hadoop-tools/hadoop-azure/src/site/markdown/wasb.md 
b/hadoop-tools/hadoop-azure/src/site/markdown/wasb.md
new file mode 100644
index ..270fd14da4c4
--- /dev/null
+++ b/hadoop-tools/hadoop-azure/src/site/markdown/wasb.md
@@ -0,0 +1,97 @@
+
+
+# Hadoop Azure Support: WASB Driver
+
+## Introduction
+WASB Driver is a legacy Hadoop File System driver that was developed to support
+[FNS(FlatNameSpace) Azure Storage 
accounts](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction)
+that do not honor File-Folder syntax.
+HDFS folder operations are therefore mimicked client-side by the WASB driver,
+and folder operations such as Rename and Delete can generate a large number of
+IOPS, as the client must enumerate the blobs and orchestrate the rename/delete
+blob by blob.
+Other APIs suffer too, since the initial check for whether a path is a file or
+a folder requires multiple metadata calls. All of this degrades performance.
+
+To provide better service to Analytics users, Microsoft released [ADLS 
Gen2](https://learn.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction)
+which are HNS (Hierarchical Namespace) enabled, i.e. File-Folder aware storage 
accounts.
+ABFS driver was designed to overcome the inherent deficiencies of WASB and 
users
+were informed to migrate to ABFS driver.
+
+### Challenges and limitations of WASB Driver
+Users of the legacy WASB driver face a number of challenges and limitations:
+1. They cannot leverage the optimizations and benefits of the latest ABFS 
driver.
+2. They must deal with compatibility issues when files and folders are
+modified with the legacy WASB driver and the ABFS driver concurrently during
+a phased transition.
+3. The set of supported features differs between FNS and HNS accounts over
+the ABFS driver.
+4. In certain cases, they must perform a significant amount of re-work on their
+workloads to migrate to the ABFS driver, which is available only on HNS enabled
+accounts in a fully tested and supported scenario.
+
+## Deprecation plans for WASB Driver
+We are introducing a new feature that will enable the ABFS driver to support
+FNS accounts (over BlobEndpoint that WASB Driver uses) using the ABFS scheme.
+This feature will enable us to use the ABFS driver to interact with data 
stored in GPv2
+(General Purpose v2) storage accounts.
+
+With this feature, the users who still use the legacy WASB driver will be able
+to migrate to the ABFS driver without much re-work on their workloads. They 
will
+however need to change the URIs from the WASB scheme to the ABFS scheme.
+
+Once the ABFS driver has the FNS support needed to migrate WASB users, the
+WASB driver will be marked for removal in the next major release. This removes
+any ambiguity for new users onboarding, as there will be only one Microsoft
+driver for Azure Storage, and migrating users will get SLA-bound support for
+both driver and service, which was never guaranteed over WASB.
+
+We anticipate that this feature will serve as a stepping stone for users to
+move to HNS enabled accounts with the ABFS driver, which is our recommended 
stack
+for big data analytics on ADLS Gen2.
+
+### Impact for existing ABFS users using ADLS Gen2 (HNS enabled account)
+This feature does not impact the existing users who are using ADLS Gen2 
Accounts
+(HNS enabled account) with ABFS driver.
+
+They do not need to make any changes to their workloads or configurations. They
+will still enjoy the benefits of HNS, such as atomic operations, fine-grained
+access control, scalability, and performance.
+
+### Official recommendation
+Microsoft continues to recommend all Big Data and Analytics users 

(hadoop) branch trunk updated: HADOOP-19189. ITestS3ACommitterFactory failing (#6857)

2024-06-07 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 01d257d5aa94 HADOOP-19189. ITestS3ACommitterFactory failing (#6857)
01d257d5aa94 is described below

commit 01d257d5aa94163244cd3f1149d5ba2cb9f1e6ff
Author: Steve Loughran 
AuthorDate: Fri Jun 7 17:34:01 2024 +0100

HADOOP-19189. ITestS3ACommitterFactory failing (#6857)


* parameterize the test run rather than do it from within the test suite.
* log what the committer factory is up to (and improve its logging)
* close all filesystems, then create the test filesystem with cache enabled.

The cache is critical: we want the fs from the cache to be used when querying
filesystem properties, rather than one created from the committer jobconf,
which would have the same options as the task committer and so would not
actually validate the override logic.

Contributed by Steve Loughran
---
 .../fs/s3a/commit/AbstractS3ACommitterFactory.java |   5 +-
 .../hadoop/fs/s3a/commit/S3ACommitterFactory.java  |   7 +-
 .../fs/s3a/commit/ITestS3ACommitterFactory.java| 234 +
 .../hadoop-aws/src/test/resources/log4j.properties |   2 +
 4 files changed, 151 insertions(+), 97 deletions(-)

diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitterFactory.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitterFactory.java
index 6e7a99f50ef9..cbbe5fdc602d 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitterFactory.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitterFactory.java
@@ -51,9 +51,10 @@ public abstract class AbstractS3ACommitterFactory
   throw new PathCommitException(outputPath,
   "Filesystem not supported by this committer");
 }
-LOG.info("Using Committer {} for {}",
+LOG.info("Using Committer {} for {} created by {}",
 outputCommitter,
-outputPath);
+outputPath,
+this);
 return outputCommitter;
   }
 
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/S3ACommitterFactory.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/S3ACommitterFactory.java
index 36d0af187d3c..7f5455b6098d 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/S3ACommitterFactory.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/S3ACommitterFactory.java
@@ -113,11 +113,14 @@ public class S3ACommitterFactory extends 
AbstractS3ACommitterFactory {
 // job/task configurations.
 Configuration fsConf = fileSystem.getConf();
 
-    String name = fsConf.getTrimmed(FS_S3A_COMMITTER_NAME, COMMITTER_NAME_FILE);
+String name = fsConf.getTrimmed(FS_S3A_COMMITTER_NAME, "");
+LOG.debug("Committer from filesystems \"{}\"", name);
+
 name = taskConf.getTrimmed(FS_S3A_COMMITTER_NAME, name);
-LOG.debug("Committer option is {}", name);
+LOG.debug("Committer option is \"{}\"", name);
 switch (name) {
 case COMMITTER_NAME_FILE:
+case "":
   factory = null;
   break;
 case COMMITTER_NAME_DIRECTORY:
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestS3ACommitterFactory.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestS3ACommitterFactory.java
index 2ad2568d5cc2..2561a69f60b5 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestS3ACommitterFactory.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestS3ACommitterFactory.java
@@ -19,15 +19,24 @@
 package org.apache.hadoop.fs.s3a.commit;
 
 import java.io.IOException;
+import java.util.Arrays;
+import java.util.Collection;
 
 import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathIOException;
 import org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter;
 import org.apache.hadoop.fs.s3a.commit.staging.DirectoryStagingCommitter;
 import org.apache.hadoop.fs.s3a.commit.staging.PartitionedStagingCommitter;
 import org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter;
+import org.apache.hadoop.mapred.JobConf;
 import org.apache.hadoop.mapreduce.MRJobConfig;
 import org.apache.hadoop.mapreduce.TaskAttemptContext;
 import org.apache.hadoop.mapreduce.TaskAttemp

(hadoop) branch branch-3.4 updated: HADOOP-18516: [ABFS][Authentication] Support Fixed SAS Token for ABFS Authentication (#6552)

2024-06-07 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new 93c787be00de HADOOP-18516: [ABFS][Authentication] Support Fixed SAS 
Token for ABFS Authentication (#6552)
93c787be00de is described below

commit 93c787be00de67d085d5731450c9f075ebcbadf5
Author: Anuj Modi <128447756+anujmodi2...@users.noreply.github.com>
AuthorDate: Fri Jun 7 19:03:23 2024 +0530

HADOOP-18516: [ABFS][Authentication] Support Fixed SAS Token for ABFS 
Authentication (#6552)

Contributed by Anuj Modi
---
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  75 ++---
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java|   3 +-
 .../fs/azurebfs/AzureBlobFileSystemStore.java  |   2 +-
 .../fs/azurebfs/constants/ConfigurationKeys.java   |   5 +-
 .../hadoop/fs/azurebfs/services/AbfsClient.java|   9 +-
 .../azurebfs/services/FixedSASTokenProvider.java   |  65 
 .../hadoop-azure/src/site/markdown/abfs.md | 149 ++---
 .../fs/azurebfs/AbstractAbfsIntegrationTest.java   |  23 ++-
 .../ITestAzureBlobFileSystemChooseSAS.java | 182 +
 .../extensions/MockDelegationSASTokenProvider.java |   2 +-
 .../azurebfs/extensions/MockSASTokenProvider.java  |  16 +-
 .../fs/azurebfs/utils/AccountSASGenerator.java | 103 
 .../hadoop/fs/azurebfs/utils/SASGenerator.java |  34 +++-
 .../fs/azurebfs/utils/ServiceSASGenerator.java |  15 +-
 14 files changed, 611 insertions(+), 72 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index a1b6fc12a5ce..1bca79628702 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -22,6 +22,7 @@ import java.io.IOException;
 import java.lang.reflect.Field;
 
 import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.services.FixedSASTokenProvider;
 import org.apache.hadoop.util.Preconditions;
 
 import org.apache.commons.lang3.StringUtils;
@@ -980,33 +981,63 @@ public class AbfsConfiguration{
 }
   }
 
+  /**
+   * Returns the SASTokenProvider implementation to be used to generate SAS 
token.
+   * Users can choose between a custom implementation of {@link 
SASTokenProvider}
+   * or an in-house implementation, {@link FixedSASTokenProvider}.
+   * For a custom implementation, "fs.azure.sas.token.provider.type" needs to be provided.
+   * For a fixed SAS token, "fs.azure.sas.fixed.token" needs to be provided.
+   * In case both are provided, preference is given to the custom implementation.
+   * Avoid using a custom tokenProvider implementation just to read the configured
+   * fixed token, as this could create confusion. Also, implementing the SASTokenProvider
+   * requires relying on the raw configurations. It is more stable to depend on
+   * the AbfsConfiguration with which a filesystem is initialized, and 
eliminate
+   * chances of dynamic modifications and spurious situations.
+   * @return sasTokenProvider object based on configurations provided
+   * @throws AzureBlobFileSystemException
+   */
   public SASTokenProvider getSASTokenProvider() throws 
AzureBlobFileSystemException {
 AuthType authType = getEnum(FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME, 
AuthType.SharedKey);
 if (authType != AuthType.SAS) {
   throw new SASTokenProviderException(String.format(
-"Invalid auth type: %s is being used, expecting SAS", authType));
+  "Invalid auth type: %s is being used, expecting SAS.", authType));
 }
 
 try {
-  String configKey = FS_AZURE_SAS_TOKEN_PROVIDER_TYPE;
-  Class sasTokenProviderClass =
-  getTokenProviderClass(authType, configKey, null,
-  SASTokenProvider.class);
-
-  Preconditions.checkArgument(sasTokenProviderClass != null,
-  String.format("The configuration value for \"%s\" is invalid.", 
configKey));
-
-  SASTokenProvider sasTokenProvider = ReflectionUtils
-  .newInstance(sasTokenProviderClass, rawConfig);
-  Preconditions.checkArgument(sasTokenProvider != null,
-  String.format("Failed to initialize %s", sasTokenProviderClass));
-
-  LOG.trace("Initializing {}", sasTokenProviderClass.getName());
-  sasTokenProvider.initialize(rawConfig, accountName);
-  LOG.trace("{} init complete", sasTokenProviderClass.getName());
-  return sasTokenProvider;
+  Class customSasTokenProviderImplementation =
+  getTokenProviderClass(authTyp
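A minimal configuration sketch for the new fixed-token path (both keys appear
in the patch; the per-account key suffix convention and the token value are
placeholders/assumptions):

    import org.apache.hadoop.conf.Configuration;

    Configuration conf = new Configuration();
    // Select SAS auth for the account (hostname suffix is a placeholder).
    conf.set("fs.azure.account.auth.type.myaccount.dfs.core.windows.net", "SAS");
    // Supply the pre-generated SAS token directly; no provider class needed.
    conf.set("fs.azure.sas.fixed.token", "<sas-token-placeholder>");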

(hadoop) branch branch-3.4 updated: HADOOP-19154. Upgrade bouncycastle to 1.78.1 due to CVEs (#6755) (#6866)

2024-06-07 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new 59b29800bdce HADOOP-19154. Upgrade bouncycastle to 1.78.1 due to CVEs 
(#6755) (#6866)
59b29800bdce is described below

commit 59b29800bdce4bfe9b52ccf2d8bd6eeb0f35176e
Author: PJ Fanning 
AuthorDate: Fri Jun 7 14:32:27 2024 +0100

HADOOP-19154. Upgrade bouncycastle to 1.78.1 due to CVEs (#6755) (#6866)


Addresses

* CVE-2024-29857 - Importing an EC certificate with specially crafted F2m 
parameters can cause high CPU usage during parameter evaluation.
* CVE-2024-30171 - Possible timing based leakage in RSA based handshakes 
due to exception processing eliminated.
* CVE-2024-30172 - Crafted signature and public key can be used to trigger 
an infinite loop in the Ed25519 verification code.
* CVE-2024-301XX - When endpoint identification is enabled and an SSL 
socket is not created with an explicit hostname (as happens with 
HttpsURLConnection), hostname verification could be performed against a 
DNS-resolved IP address.

Contributed by PJ Fanning
---
 LICENSE-binary  | 6 +++---
 .../hadoop-cos/src/site/markdown/cloud-storage/index.md | 2 +-
 hadoop-project/pom.xml  | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 92d20725b813..3ab3ef5d5e28 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -481,9 +481,9 @@ com.microsoft.azure:azure-cosmosdb-gateway:2.4.5
 com.microsoft.azure:azure-data-lake-store-sdk:2.3.3
 com.microsoft.azure:azure-keyvault-core:1.0.0
 com.microsoft.sqlserver:mssql-jdbc:6.2.1.jre7
-org.bouncycastle:bcpkix-jdk18on:1.77
-org.bouncycastle:bcprov-jdk18on:1.77
-org.bouncycastle:bcutil-jdk18on:1.77
+org.bouncycastle:bcpkix-jdk18on:1.78.1
+org.bouncycastle:bcprov-jdk18on:1.78.1
+org.bouncycastle:bcutil-jdk18on:1.78.1
 org.checkerframework:checker-qual:2.5.2
 org.codehaus.mojo:animal-sniffer-annotations:1.21
 org.jruby.jcodings:jcodings:1.0.13
diff --git 
a/hadoop-cloud-storage-project/hadoop-cos/src/site/markdown/cloud-storage/index.md
 
b/hadoop-cloud-storage-project/hadoop-cos/src/site/markdown/cloud-storage/index.md
index 64647b03e9ba..60c9c9065946 100644
--- 
a/hadoop-cloud-storage-project/hadoop-cos/src/site/markdown/cloud-storage/index.md
+++ 
b/hadoop-cloud-storage-project/hadoop-cos/src/site/markdown/cloud-storage/index.md
@@ -86,7 +86,7 @@ Linux kernel 2.6+
 - joda-time (version 2.9.9 recommended)
 - httpClient (version 4.5.1 or later recommended)
 - Jackson: jackson-core, jackson-databind, jackson-annotations (version 2.9.8 
or later)
-- bcprov-jdk18on (version 1.77 recommended)
+- bcprov-jdk18on (version 1.78.1 recommended)
 
 
  Configure Properties
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index f7b13344ea6c..4e42e3c895e9 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -111,7 +111,7 @@
 27.0-jre
 4.2.3
 
-1.77
+1.78.1
 
 
 2.0.0.AM26


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch branch-3.4 updated: HADOOP-19178: [WASB Deprecation] Updating Documentation on Upcoming Plans for Hadoop-Azure (#6862)

2024-06-07 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new f2e16c586958 HADOOP-19178: [WASB Deprecation] Updating Documentation 
on Upcoming Plans for Hadoop-Azure (#6862)
f2e16c586958 is described below

commit f2e16c58695800a8444cceed6d6cea9ac5ca1599
Author: Anuj Modi <128447756+anujmodi2...@users.noreply.github.com>
AuthorDate: Fri Jun 7 18:58:24 2024 +0530

HADOOP-19178: [WASB Deprecation] Updating Documentation on Upcoming Plans 
for Hadoop-Azure (#6862)

Contributed by Anuj Modi
---
 .../hadoop-azure/src/site/markdown/index.md|  1 +
 .../hadoop-azure/src/site/markdown/wasb.md | 97 ++
 2 files changed, 98 insertions(+)

diff --git a/hadoop-tools/hadoop-azure/src/site/markdown/index.md 
b/hadoop-tools/hadoop-azure/src/site/markdown/index.md
index 595353896d12..177ab282c112 100644
--- a/hadoop-tools/hadoop-azure/src/site/markdown/index.md
+++ b/hadoop-tools/hadoop-azure/src/site/markdown/index.md
@@ -18,6 +18,7 @@
 
 See also:
 
+* [WASB](./wasb.html)
 * [ABFS](./abfs.html)
 * [Testing](./testing_azure.html)
 
diff --git a/hadoop-tools/hadoop-azure/src/site/markdown/wasb.md 
b/hadoop-tools/hadoop-azure/src/site/markdown/wasb.md
new file mode 100644
index ..270fd14da4c4
--- /dev/null
+++ b/hadoop-tools/hadoop-azure/src/site/markdown/wasb.md
@@ -0,0 +1,97 @@
+
+
+# Hadoop Azure Support: WASB Driver
+
+## Introduction
+WASB Driver is a legacy Hadoop File System driver that was developed to support
+[FNS(FlatNameSpace) Azure Storage 
accounts](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction)
+that do not honor File-Folder syntax.
+HDFS folder operations are therefore mimicked client-side by the WASB driver,
+and folder operations such as Rename and Delete can generate a large number of
+IOPS, as the client must enumerate the blobs and orchestrate the rename/delete
+blob by blob.
+Other APIs suffer too, since the initial check for whether a path is a file or
+a folder requires multiple metadata calls. All of this degrades performance.
+
+To provide better service to Analytics users, Microsoft released [ADLS 
Gen2](https://learn.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction)
+which are HNS (Hierarchical Namespace) enabled, i.e. File-Folder aware storage 
accounts.
+ABFS driver was designed to overcome the inherent deficiencies of WASB and 
users
+were informed to migrate to ABFS driver.
+
+### Challenges and limitations of WASB Driver
+Users of the legacy WASB driver face a number of challenges and limitations:
+1. They cannot leverage the optimizations and benefits of the latest ABFS 
driver.
+2. They must deal with compatibility issues when files and folders are
+modified with the legacy WASB driver and the ABFS driver concurrently during
+a phased transition.
+3. The set of supported features differs between FNS and HNS accounts over
+the ABFS driver.
+4. In certain cases, they must perform a significant amount of re-work on their
+workloads to migrate to the ABFS driver, which is available only on HNS enabled
+accounts in a fully tested and supported scenario.
+
+## Deprecation plans for WASB Driver
+We are introducing a new feature that will enable the ABFS driver to support
+FNS accounts (over BlobEndpoint that WASB Driver uses) using the ABFS scheme.
+This feature will enable us to use the ABFS driver to interact with data 
stored in GPv2
+(General Purpose v2) storage accounts.
+
+With this feature, the users who still use the legacy WASB driver will be able
+to migrate to the ABFS driver without much re-work on their workloads. They 
will
+however need to change the URIs from the WASB scheme to the ABFS scheme.
+
+Once the ABFS driver has the FNS support needed to migrate WASB users, the
+WASB driver will be marked for removal in the next major release. This removes
+any ambiguity for new users onboarding, as there will be only one Microsoft
+driver for Azure Storage, and migrating users will get SLA-bound support for
+both driver and service, which was never guaranteed over WASB.
+
+We anticipate that this feature will serve as a stepping stone for users to
+move to HNS enabled accounts with the ABFS driver, which is our recommended 
stack
+for big data analytics on ADLS Gen2.
+
+### Impact for existing ABFS users using ADLS Gen2 (HNS enabled account)
+This feature does not impact the existing users who are using ADLS Gen2 
Accounts
+(HNS enabled account) with ABFS driver.
+
+They do not need to make any changes to their workloads or configurations. They
+will still enjoy the benefits of HNS, such as atomic operations, fine-grained
+access control, scalability, and performance.
+
+### Official recommendation
+Microsoft continues to recommend all Big Data and Analytics users 

(hadoop) branch trunk updated: HADOOP-19178: [WASB Deprecation] Updating Documentation on Upcoming Plans for Hadoop-Azure (#6862)

2024-06-07 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new bbb17e76a7a8 HADOOP-19178: [WASB Deprecation] Updating Documentation 
on Upcoming Plans for Hadoop-Azure (#6862)
bbb17e76a7a8 is described below

commit bbb17e76a7a8a995a8b202c9b9530f39bb2a2957
Author: Anuj Modi <128447756+anujmodi2...@users.noreply.github.com>
AuthorDate: Fri Jun 7 18:58:24 2024 +0530

HADOOP-19178: [WASB Deprecation] Updating Documentation on Upcoming Plans 
for Hadoop-Azure (#6862)


Contributed by Anuj Modi
---
 .../hadoop-azure/src/site/markdown/index.md|  1 +
 .../hadoop-azure/src/site/markdown/wasb.md | 97 ++
 2 files changed, 98 insertions(+)

diff --git a/hadoop-tools/hadoop-azure/src/site/markdown/index.md 
b/hadoop-tools/hadoop-azure/src/site/markdown/index.md
index 595353896d12..177ab282c112 100644
--- a/hadoop-tools/hadoop-azure/src/site/markdown/index.md
+++ b/hadoop-tools/hadoop-azure/src/site/markdown/index.md
@@ -18,6 +18,7 @@
 
 See also:
 
+* [WASB](./wasb.html)
 * [ABFS](./abfs.html)
 * [Testing](./testing_azure.html)
 
diff --git a/hadoop-tools/hadoop-azure/src/site/markdown/wasb.md 
b/hadoop-tools/hadoop-azure/src/site/markdown/wasb.md
new file mode 100644
index ..270fd14da4c4
--- /dev/null
+++ b/hadoop-tools/hadoop-azure/src/site/markdown/wasb.md
@@ -0,0 +1,97 @@
+
+
+# Hadoop Azure Support: WASB Driver
+
+## Introduction
+WASB Driver is a legacy Hadoop File System driver that was developed to support
+[FNS(FlatNameSpace) Azure Storage 
accounts](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction)
+that do not honor File-Folder syntax.
+HDFS folder operations are therefore mimicked client-side by the WASB driver,
+and folder operations such as Rename and Delete can generate a large number of
+IOPS, as the client must enumerate the blobs and orchestrate the rename/delete
+blob by blob.
+Other APIs suffer too, since the initial check for whether a path is a file or
+a folder requires multiple metadata calls. All of this degrades performance.
+
+To provide better service to Analytics users, Microsoft released [ADLS 
Gen2](https://learn.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-introduction)
+which are HNS (Hierarchical Namespace) enabled, i.e. File-Folder aware storage 
accounts.
+ABFS driver was designed to overcome the inherent deficiencies of WASB and 
users
+were informed to migrate to ABFS driver.
+
+### Challenges and limitations of WASB Driver
+Users of the legacy WASB driver face a number of challenges and limitations:
+1. They cannot leverage the optimizations and benefits of the latest ABFS 
driver.
+2. They must deal with compatibility issues when files and folders are
+modified with the legacy WASB driver and the ABFS driver concurrently during
+a phased transition.
+3. The set of supported features differs between FNS and HNS accounts over
+the ABFS driver.
+4. In certain cases, they must perform a significant amount of re-work on their
+workloads to migrate to the ABFS driver, which is available only on HNS enabled
+accounts in a fully tested and supported scenario.
+
+## Deprecation plans for WASB Driver
+We are introducing a new feature that will enable the ABFS driver to support
+FNS accounts (over BlobEndpoint that WASB Driver uses) using the ABFS scheme.
+This feature will enable us to use the ABFS driver to interact with data 
stored in GPv2
+(General Purpose v2) storage accounts.
+
+With this feature, the users who still use the legacy WASB driver will be able
+to migrate to the ABFS driver without much re-work on their workloads. They 
will
+however need to change the URIs from the WASB scheme to the ABFS scheme.
+
+Once the ABFS driver has the FNS support needed to migrate WASB users, the
+WASB driver will be marked for removal in the next major release. This removes
+any ambiguity for new users onboarding, as there will be only one Microsoft
+driver for Azure Storage, and migrating users will get SLA-bound support for
+both driver and service, which was never guaranteed over WASB.
+
+We anticipate that this feature will serve as a stepping stone for users to
+move to HNS enabled accounts with the ABFS driver, which is our recommended 
stack
+for big data analytics on ADLS Gen2.
+
+### Impact for existing ABFS users using ADLS Gen2 (HNS enabled account)
+This feature does not impact the existing users who are using ADLS Gen2 
Accounts
+(HNS enabled account) with ABFS driver.
+
+They do not need to make any changes to their workloads or configurations. They
+will still enjoy the benefits of HNS, such as atomic operations, fine-grained
+access control, scalability, and performance.
+
+### Official recommendation
+Microsoft continues to recommend all Big Data and Analytics users to use

(hadoop) branch branch-3.4 updated: HADOOP-19114. Upgrade to commons-compress 1.26.1 due to CVEs. (#6636)

2024-06-07 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new 935bc184fa21 HADOOP-19114. Upgrade to commons-compress 1.26.1 due to 
CVEs. (#6636)
935bc184fa21 is described below

commit 935bc184fa21af3d3fde27b07ebac5a031725fc9
Author: PJ Fanning 
AuthorDate: Fri Jun 7 14:15:22 2024 +0100

HADOOP-19114. Upgrade to commons-compress 1.26.1 due to CVEs. (#6636)


This addresses two CVEs triggered by malformed archives

Important: Denial of Service CVE-2024-25710
Moderate: Denial of Service CVE-2024-26308

Contributed by PJ Fanning
---
 LICENSE-binary| 2 +-
 .../java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java | 4 ++--
 hadoop-project/pom.xml| 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 8f73a5def8d9..92d20725b813 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -298,7 +298,7 @@ net.java.dev.jna:jna:5.2.0
 net.minidev:accessors-smart:1.2
 org.apache.avro:avro:1.9.2
 org.apache.commons:commons-collections4:4.2
-org.apache.commons:commons-compress:1.24.0
+org.apache.commons:commons-compress:1.26.1
 org.apache.commons:commons-configuration2:2.10.1
 org.apache.commons:commons-csv:1.9.0
 org.apache.commons:commons-digester:1.8.1
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java
index 452078ff8ec0..0408b6c1eacd 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java
@@ -22,7 +22,7 @@ import org.apache.hadoop.classification.VisibleForTesting;
 import org.apache.commons.cli.HelpFormatter;
 import org.apache.commons.cli.Option;
 import org.apache.commons.cli.Options;
-import org.apache.commons.compress.archivers.ArchiveEntry;
+import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
 import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.BlockLocation;
@@ -337,7 +337,7 @@ public class FrameworkUploader implements Runnable {
 LOG.info("Adding " + fullPath);
 File file = new File(fullPath);
 try (FileInputStream inputStream = new FileInputStream(file)) {
-  ArchiveEntry entry = out.createArchiveEntry(file, file.getName());
+  TarArchiveEntry entry = out.createArchiveEntry(file, file.getName());
   out.putArchiveEntry(entry);
   IOUtils.copyBytes(inputStream, out, 1024 * 1024);
   out.closeArchiveEntry();
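The probable reason for the ArchiveEntry-to-TarArchiveEntry switch (an
inference from the upgrade, not stated in the commit message): commons-compress
1.26 makes the archive stream classes generic over their entry type, so the
narrower type is needed to compile:

    // Sketch: in commons-compress 1.26+, TarArchiveOutputStream effectively
    // extends ArchiveOutputStream<TarArchiveEntry>, so putArchiveEntry()
    // no longer accepts a plain ArchiveEntry.
    TarArchiveEntry entry = out.createArchiveEntry(file, file.getName());
    out.putArchiveEntry(entry);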
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 889f8c94b47c..f7b13344ea6c 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -122,7 +122,7 @@
 1.5.0
 1.15
 3.2.2
-1.24.0
+1.26.1
 1.9.0
 2.14.0
 3.12.0


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



(hadoop) branch trunk updated: HADOOP-19154. Upgrade bouncycastle to 1.78.1 due to CVEs (#6755)

2024-06-05 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2ee0bf953492 HADOOP-19154. Upgrade bouncycastle to 1.78.1 due to CVEs 
(#6755)
2ee0bf953492 is described below

commit 2ee0bf953492b66765d3d2c902407fbf9bceddec
Author: PJ Fanning 
AuthorDate: Wed Jun 5 15:31:23 2024 +0100

HADOOP-19154. Upgrade bouncycastle to 1.78.1 due to CVEs (#6755)


Addresses

* CVE-2024-29857 - Importing an EC certificate with specially crafted F2m 
parameters can cause high CPU usage during parameter evaluation.
* CVE-2024-30171 - Possible timing based leakage in RSA based handshakes 
due to exception processing eliminated.
* CVE-2024-30172 - Crafted signature and public key can be used to trigger 
an infinite loop in the Ed25519 verification code.
* CVE-2024-301XX - When endpoint identification is enabled and an SSL 
socket is not created with an explicit hostname (as happens with 
HttpsURLConnection), hostname verification could be performed against a 
DNS-resolved IP address.

Contributed by PJ Fanning
---
 LICENSE-binary  | 6 +++---
 .../hadoop-cos/src/site/markdown/cloud-storage/index.md | 2 +-
 hadoop-project/pom.xml  | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 32f9f06ae15d..42e97f487535 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -482,9 +482,9 @@ com.microsoft.azure:azure-cosmosdb-gateway:2.4.5
 com.microsoft.azure:azure-data-lake-store-sdk:2.3.3
 com.microsoft.azure:azure-keyvault-core:1.0.0
 com.microsoft.sqlserver:mssql-jdbc:6.2.1.jre7
-org.bouncycastle:bcpkix-jdk18on:1.77
-org.bouncycastle:bcprov-jdk18on:1.77
-org.bouncycastle:bcutil-jdk18on:1.77
+org.bouncycastle:bcpkix-jdk18on:1.78.1
+org.bouncycastle:bcprov-jdk18on:1.78.1
+org.bouncycastle:bcutil-jdk18on:1.78.1
 org.checkerframework:checker-qual:2.5.2
 org.codehaus.mojo:animal-sniffer-annotations:1.21
 org.jruby.jcodings:jcodings:1.0.13
diff --git 
a/hadoop-cloud-storage-project/hadoop-cos/src/site/markdown/cloud-storage/index.md
 
b/hadoop-cloud-storage-project/hadoop-cos/src/site/markdown/cloud-storage/index.md
index 64647b03e9ba..60c9c9065946 100644
--- 
a/hadoop-cloud-storage-project/hadoop-cos/src/site/markdown/cloud-storage/index.md
+++ 
b/hadoop-cloud-storage-project/hadoop-cos/src/site/markdown/cloud-storage/index.md
@@ -86,7 +86,7 @@ Linux kernel 2.6+
 - joda-time (version 2.9.9 recommended)
 - httpClient (version 4.5.1 or later recommended)
 - Jackson: jackson-core, jackson-databind, jackson-annotations (version 2.9.8 
or later)
-- bcprov-jdk18on (version 1.77 recommended)
+- bcprov-jdk18on (version 1.78.1 recommended)
 
 
  Configure Properties
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 0345925e9994..a8ef068bf8da 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -111,7 +111,7 @@
 27.0-jre
 4.2.3
 
-1.78
+1.78.1
 
 
 2.0.0.AM26





(hadoop) branch trunk updated: HADOOP-19193. Create orphan commit for website deployment (#6864)

2024-06-05 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d8d3d538e463 HADOOP-19193. Create orphan commit for website deployment 
(#6864)
d8d3d538e463 is described below

commit d8d3d538e463e7cb651dfe013507fa6c4576b8dc
Author: Cheng Pan 
AuthorDate: Wed Jun 5 22:25:48 2024 +0800

HADOOP-19193. Create orphan commit for website deployment (#6864)


This stops gh-pages deployments from increasing the size of the git 
repository on every run

Contributed by Cheng Pan
---
 .github/workflows/website.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.github/workflows/website.yml b/.github/workflows/website.yml
index 6d925f3dcff2..67b2b908d273 100644
--- a/.github/workflows/website.yml
+++ b/.github/workflows/website.yml
@@ -56,4 +56,5 @@ jobs:
   publish_dir: ./staging/hadoop-project
   user_name: 'github-actions[bot]'
   user_email: 'github-actions[bot]@users.noreply.github.com'
+  force_orphan: true
 





(hadoop) branch trunk updated: HADOOP-18516: [ABFS][Authentication] Support Fixed SAS Token for ABFS Authentication (#6552)

2024-05-30 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d8b485a51229 HADOOP-18516: [ABFS][Authentication] Support Fixed SAS 
Token for ABFS Authentication (#6552)
d8b485a51229 is described below

commit d8b485a51229392874eb54f86bc3fdc61ce6084e
Author: Anuj Modi <128447756+anujmodi2...@users.noreply.github.com>
AuthorDate: Fri May 31 01:16:19 2024 +0530

HADOOP-18516: [ABFS][Authentication] Support Fixed SAS Token for ABFS 
Authentication (#6552)


Contributed by Anuj Modi
---
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  75 ++---
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java|   3 +-
 .../fs/azurebfs/AzureBlobFileSystemStore.java  |   2 +-
 .../fs/azurebfs/constants/ConfigurationKeys.java   |   5 +-
 .../hadoop/fs/azurebfs/services/AbfsClient.java|   9 +-
 .../azurebfs/services/FixedSASTokenProvider.java   |  65 
 .../hadoop-azure/src/site/markdown/abfs.md | 149 ++---
 .../fs/azurebfs/AbstractAbfsIntegrationTest.java   |  23 ++-
 .../ITestAzureBlobFileSystemChooseSAS.java | 182 +
 .../extensions/MockDelegationSASTokenProvider.java |   2 +-
 .../azurebfs/extensions/MockSASTokenProvider.java  |  16 +-
 .../fs/azurebfs/utils/AccountSASGenerator.java | 103 
 .../hadoop/fs/azurebfs/utils/SASGenerator.java |  34 +++-
 .../fs/azurebfs/utils/ServiceSASGenerator.java |  15 +-
 14 files changed, 611 insertions(+), 72 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index 6e5e772e1816..bf9008bfe6de 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -22,6 +22,7 @@ import java.io.IOException;
 import java.lang.reflect.Field;
 
 import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.azurebfs.services.FixedSASTokenProvider;
 import org.apache.hadoop.fs.azurebfs.utils.MetricFormat;
 import org.apache.hadoop.util.Preconditions;
 
@@ -1025,33 +1026,63 @@ public class AbfsConfiguration{
 }
   }
 
+  /**
+   * Returns the SASTokenProvider implementation to be used to generate SAS token.
+   * Users can choose between a custom implementation of {@link SASTokenProvider}
+   * or an in-house implementation, {@link FixedSASTokenProvider}.
+   * For a custom implementation, "fs.azure.sas.token.provider.type" needs to be provided.
+   * For a fixed SAS token, "fs.azure.sas.fixed.token" needs to be provided.
+   * In case both are provided, preference will be given to the custom implementation.
+   * Avoid using a custom tokenProvider implementation just to read the configured
+   * fixed token, as this could create confusion. Also, implementing the SASTokenProvider
+   * requires relying on the raw configurations. It is more stable to depend on
+   * the AbfsConfiguration with which a filesystem is initialized, and eliminate
+   * chances of dynamic modifications and spurious situations.
+   * @return sasTokenProvider object based on the configurations provided
+   * @throws AzureBlobFileSystemException if the configured auth type is not SAS
+   */
   public SASTokenProvider getSASTokenProvider() throws 
AzureBlobFileSystemException {
 AuthType authType = getEnum(FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME, 
AuthType.SharedKey);
 if (authType != AuthType.SAS) {
   throw new SASTokenProviderException(String.format(
-"Invalid auth type: %s is being used, expecting SAS", authType));
+  "Invalid auth type: %s is being used, expecting SAS.", authType));
 }
 
 try {
-  String configKey = FS_AZURE_SAS_TOKEN_PROVIDER_TYPE;
-  Class sasTokenProviderClass =
-  getTokenProviderClass(authType, configKey, null,
-  SASTokenProvider.class);
-
-  Preconditions.checkArgument(sasTokenProviderClass != null,
-  String.format("The configuration value for \"%s\" is invalid.", 
configKey));
-
-  SASTokenProvider sasTokenProvider = ReflectionUtils
-  .newInstance(sasTokenProviderClass, rawConfig);
-  Preconditions.checkArgument(sasTokenProvider != null,
-  String.format("Failed to initialize %s", sasTokenProviderClass));
-
-  LOG.trace("Initializing {}", sasTokenProviderClass.getName());
-  sasTokenProvider.initialize(rawConfig, accountName);
-  LOG.trace("{} init complete", sasTokenProviderClass.getName());
-  return sasTokenProvider;
+  Class customSasTokenProviderImplementation =
+  getTokenProvi
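
A minimal configuration sketch for the fixed-token path named in the javadoc above, assuming only the two keys it quotes; the token value is a placeholder and per-account key variants are not shown:

    import org.apache.hadoop.conf.Configuration;

    public class FixedSasTokenConfigSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Auth type must be SAS for getSASTokenProvider() to accept the config.
        conf.set("fs.azure.account.auth.type", "SAS");
        // Fixed token consumed by FixedSASTokenProvider (placeholder value).
        conf.set("fs.azure.sas.fixed.token", "<sas-token>");
        // If fs.azure.sas.token.provider.type were also set, the custom
        // provider would take preference over the fixed token.
      }
    }
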

(hadoop) branch branch-3.4 updated: HADOOP-18679. Followup: change method name case (#6854)

2024-05-30 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new ac8a890f93a7 HADOOP-18679. Followup: change method name case (#6854)
ac8a890f93a7 is described below

commit ac8a890f93a72e9fcabef40815571a036376930c
Author: Steve Loughran 
AuthorDate: Thu May 30 19:34:30 2024 +0100

HADOOP-18679. Followup: change method name case (#6854)

WrappedIO.bulkDelete_PageSize() => bulkDelete_pageSize()

Makes it consistent with the HADOOP-19131 naming scheme.
The name needs to be fixed before invoking it through reflection,
as once that is attempted the binding won't work at run time,
though compilation will be happy.

Contributed by Steve Loughran
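
To make the reflection point concrete, here is a sketch of a reflective caller; the wrapper class below is hypothetical, but the method it looks up matches the signature in the diff. Because the lookup is by string name, a mismatch compiles fine and only fails at run time:

    import java.lang.reflect.Method;

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public final class ReflectiveBulkDeleteSketch {
      public static int pageSize(FileSystem fs, Path path) throws Exception {
        Class<?> wrappedIO =
            Class.forName("org.apache.hadoop.io.wrappedio.WrappedIO");
        // Throws NoSuchMethodException if the name drifts -- the compiler
        // cannot catch this, hence fixing the case before release.
        Method m = wrappedIO.getMethod(
            "bulkDelete_pageSize", FileSystem.class, Path.class);
        return (int) m.invoke(null, fs, path);   // static method, null receiver
      }
    }
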
---
 .../src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java   | 2 +-
 .../org/apache/hadoop/fs/contract/AbstractContractBulkDeleteTest.java | 2 +-
 .../src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java  | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java
index 696055895a19..286557c2c378 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java
@@ -54,7 +54,7 @@ public final class WrappedIO {
* @throws IllegalArgumentException path not valid.
* @throws IOException problems resolving paths
*/
-  public static int bulkDelete_PageSize(FileSystem fs, Path path) throws 
IOException {
+  public static int bulkDelete_pageSize(FileSystem fs, Path path) throws 
IOException {
 try (BulkDelete bulk = fs.createBulkDelete(path)) {
   return bulk.pageSize();
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractBulkDeleteTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractBulkDeleteTest.java
index 9ebf9923f39c..1413e74a7e0b 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractBulkDeleteTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractBulkDeleteTest.java
@@ -69,7 +69,7 @@ public abstract class AbstractContractBulkDeleteTest extends 
AbstractFSContractT
   public void setUp() throws Exception {
 fs = getFileSystem();
 basePath = path(getClass().getName());
-pageSize = WrappedIO.bulkDelete_PageSize(getFileSystem(), basePath);
+pageSize = WrappedIO.bulkDelete_pageSize(getFileSystem(), basePath);
 fs.mkdirs(basePath);
   }
 
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
index 0676dd5b16ed..5aa72e694906 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
@@ -735,7 +735,7 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
 
 bindReadOnlyRolePolicy(assumedRoleConfig, readOnlyDir);
 roleFS = (S3AFileSystem) destDir.getFileSystem(assumedRoleConfig);
-int bulkDeletePageSize = WrappedIO.bulkDelete_PageSize(roleFS, destDir);
+int bulkDeletePageSize = WrappedIO.bulkDelete_pageSize(roleFS, destDir);
 int range = bulkDeletePageSize == 1 ? bulkDeletePageSize : 10;
 touchFiles(fs, readOnlyDir, range);
 touchFiles(roleFS, destDir, range);
@@ -769,7 +769,7 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
 bindReadOnlyRolePolicy(assumedRoleConfig, readOnlyDir);
 roleFS = (S3AFileSystem) destDir.getFileSystem(assumedRoleConfig);
 S3AFileSystem fs = getFileSystem();
-if (WrappedIO.bulkDelete_PageSize(fs, destDir) == 1) {
+if (WrappedIO.bulkDelete_pageSize(fs, destDir) == 1) {
   String msg = "Skipping as this test requires more than one path to be 
deleted in bulk";
   LOG.debug(msg);
   skip(msg);





(hadoop) branch trunk updated: HADOOP-18679. Followup: change method name case (#6854)

2024-05-30 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d00b3acd5eca HADOOP-18679. Followup: change method name case (#6854)
d00b3acd5eca is described below

commit d00b3acd5ecac9907dae2f09f42a0c2ce4f94d86
Author: Steve Loughran 
AuthorDate: Thu May 30 19:34:30 2024 +0100

HADOOP-18679. Followup: change method name case (#6854)


WrappedIO.bulkDelete_PageSize() => bulkDelete_pageSize()

Makes it consistent with the HADOOP-19131 naming scheme.
The name needs to be fixed before invoking it through reflection,
as once that is attempted the binding won't work at run time,
though compilation will be happy.

Contributed by Steve Loughran
---
 .../src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java   | 2 +-
 .../org/apache/hadoop/fs/contract/AbstractContractBulkDeleteTest.java | 2 +-
 .../src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java  | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java
index 696055895a19..286557c2c378 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java
@@ -54,7 +54,7 @@ public final class WrappedIO {
* @throws IllegalArgumentException path not valid.
* @throws IOException problems resolving paths
*/
-  public static int bulkDelete_PageSize(FileSystem fs, Path path) throws 
IOException {
+  public static int bulkDelete_pageSize(FileSystem fs, Path path) throws 
IOException {
 try (BulkDelete bulk = fs.createBulkDelete(path)) {
   return bulk.pageSize();
 }
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractBulkDeleteTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractBulkDeleteTest.java
index 9ebf9923f39c..1413e74a7e0b 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractBulkDeleteTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractBulkDeleteTest.java
@@ -69,7 +69,7 @@ public abstract class AbstractContractBulkDeleteTest extends 
AbstractFSContractT
   public void setUp() throws Exception {
 fs = getFileSystem();
 basePath = path(getClass().getName());
-pageSize = WrappedIO.bulkDelete_PageSize(getFileSystem(), basePath);
+pageSize = WrappedIO.bulkDelete_pageSize(getFileSystem(), basePath);
 fs.mkdirs(basePath);
   }
 
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
index 0676dd5b16ed..5aa72e694906 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/auth/ITestAssumeRole.java
@@ -735,7 +735,7 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
 
 bindReadOnlyRolePolicy(assumedRoleConfig, readOnlyDir);
 roleFS = (S3AFileSystem) destDir.getFileSystem(assumedRoleConfig);
-int bulkDeletePageSize = WrappedIO.bulkDelete_PageSize(roleFS, destDir);
+int bulkDeletePageSize = WrappedIO.bulkDelete_pageSize(roleFS, destDir);
 int range = bulkDeletePageSize == 1 ? bulkDeletePageSize : 10;
 touchFiles(fs, readOnlyDir, range);
 touchFiles(roleFS, destDir, range);
@@ -769,7 +769,7 @@ public class ITestAssumeRole extends AbstractS3ATestBase {
 bindReadOnlyRolePolicy(assumedRoleConfig, readOnlyDir);
 roleFS = (S3AFileSystem) destDir.getFileSystem(assumedRoleConfig);
 S3AFileSystem fs = getFileSystem();
-if (WrappedIO.bulkDelete_PageSize(fs, destDir) == 1) {
+if (WrappedIO.bulkDelete_pageSize(fs, destDir) == 1) {
   String msg = "Skipping as this test requires more than one path to be 
deleted in bulk";
   LOG.debug(msg);
   skip(msg);





(hadoop) branch trunk updated: HADOOP-19188. Fix TestHarFileSystem and TestFilterFileSystem failing after bulk delete API got added. (#6848)

2024-05-29 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d107931fc729 HADOOP-19188. Fix TestHarFileSystem and 
TestFilterFileSystem failing after bulk delete API got added. (#6848)
d107931fc729 is described below

commit d107931fc7299a91743a38d73ec25a3d33d93abf
Author: Mukund Thakur 
AuthorDate: Wed May 29 11:27:09 2024 -0500

HADOOP-19188. Fix TestHarFileSystem and TestFilterFileSystem failing after 
bulk delete API got added. (#6848)


Follow up to: HADOOP-18679 Add API for bulk/paged delete of files and 
objects

Contributed by Mukund Thakur
---
 .../src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java| 1 +
 .../src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java   | 2 ++
 2 files changed, 3 insertions(+)

diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
index 3d8ea0e826cf..1b42290cedc5 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
@@ -148,6 +148,7 @@ public class TestFilterFileSystem {
 
 FSDataOutputStream append(Path f, int bufferSize,
 Progressable progress, boolean appendToNewBlock) throws IOException;
+BulkDelete createBulkDelete(Path path) throws IllegalArgumentException, 
IOException;
   }
 
   @Test
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
index 0287b7ec1fb8..26d0361d6a25 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
@@ -257,6 +257,8 @@ public class TestHarFileSystem {
 Progressable progress, boolean appendToNewBlock) throws IOException;
 
 Path getEnclosingRoot(Path path) throws IOException;
+
+BulkDelete createBulkDelete(Path path) throws IllegalArgumentException, 
IOException;
   }
 
   @Test





(hadoop) branch trunk updated: HADOOP-18962. Upgrade kafka to 3.4.0 (#6247)

2024-05-24 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 1baf0e889fec HADOOP-18962. Upgrade kafka to 3.4.0 (#6247)
1baf0e889fec is described below

commit 1baf0e889fec54b6560417b62cada75daf6fe312
Author: Murali Krishna 
AuthorDate: Fri May 24 22:10:37 2024 +0530

HADOOP-18962. Upgrade kafka to 3.4.0 (#6247)


Upgrade Kafka Client due to CVEs

* CVE-2023-25194
* CVE-2021-38153
* CVE-2018-17196

Contributed by Murali Krishna
---
 LICENSE-binary | 4 ++--
 hadoop-project/pom.xml | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 8e2c57b1032b..c0258e9311b1 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -317,7 +317,7 @@ org.apache.htrace:htrace-core:3.1.0-incubating
 org.apache.htrace:htrace-core4:4.1.0-incubating
 org.apache.httpcomponents:httpclient:4.5.13
 org.apache.httpcomponents:httpcore:4.4.13
-org.apache.kafka:kafka-clients:2.8.2
+org.apache.kafka:kafka-clients:3.4.0
 org.apache.kerby:kerb-admin:2.0.3
 org.apache.kerby:kerb-client:2.0.3
 org.apache.kerby:kerb-common:2.0.3
@@ -377,7 +377,7 @@ 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/com
 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/tree.h
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/compat/{fstatat|openat|unlinkat}.h
 
-com.github.luben:zstd-jni:1.4.9-1
+com.github.luben:zstd-jni:1.5.2-1
 dnsjava:dnsjava:2.1.7
 org.codehaus.woodstox:stax2-api:4.2.1
 
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index c795b41340f6..ba7631189a1a 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -50,7 +50,7 @@
 
 2.12.2
 
-2.8.2
+3.4.0
 
 1.0.13
 





(hadoop) branch trunk updated: HADOOP-18325: ABFS: Add correlated metric support for ABFS operations (#6314)

2024-05-23 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d168d3ffeee1 HADOOP-18325: ABFS: Add correlated metric support for 
ABFS operations (#6314)
d168d3ffeee1 is described below

commit d168d3ffeee15ea71786263d7eaa60dc92c4d3a0
Author: Anmol Asrani 
AuthorDate: Thu May 23 19:40:10 2024 +0530

HADOOP-18325: ABFS: Add correlated metric support for ABFS operations 
(#6314)


Adds support for metric collection at the filesystem instance level.
Metrics are pushed to the store upon the closure of a filesystem instance, 
encompassing all operations
that utilized that specific instance.

Collected Metrics:

- Number of successful requests without any retries.
- Count of requests that succeeded after a specified number of retries (x 
retries).
- Request count subjected to throttling.
- Number of requests that failed despite exhausting all retry attempts, etc.

Implementation Details:

Incorporated logic in the AbfsClient to facilitate metric pushing through 
an additional request.
This occurs in scenarios where no requests are sent to the backend for a 
defined idle period.
By implementing these enhancements, we ensure comprehensive monitoring and 
analysis of filesystem interactions, enabling a deeper understanding of success 
rates, retry scenarios, throttling instances, and exhaustive failure scenarios. 
Additionally, the AbfsClient logic ensures that metrics are proactively pushed 
even during idle periods, maintaining a continuous and accurate representation 
of filesystem performance.

Contributed by Anmol Asrani
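
As a generic illustration of the close-time and idle-time push pattern described above (this is not the ABFS code; the counters, timer period, and flush target are invented for the sketch):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    final class MetricsPusherSketch implements AutoCloseable {
      private final AtomicLong successesWithoutRetry = new AtomicLong();
      private final AtomicLong throttledRequests = new AtomicLong();
      private final ScheduledExecutorService timer =
          Executors.newSingleThreadScheduledExecutor();
      private volatile long lastRequestAt = System.nanoTime();
      private final long idleFlushSeconds;

      MetricsPusherSketch(long idleFlushSeconds) {
        this.idleFlushSeconds = idleFlushSeconds;
        // Proactively push if the instance has been idle for a full period.
        timer.scheduleAtFixedRate(() -> {
          long idle = TimeUnit.NANOSECONDS.toSeconds(System.nanoTime() - lastRequestAt);
          if (idle >= idleFlushSeconds) {
            push();
          }
        }, idleFlushSeconds, idleFlushSeconds, TimeUnit.SECONDS);
      }

      void recordSuccess()  { successesWithoutRetry.incrementAndGet(); touch(); }
      void recordThrottle() { throttledRequests.incrementAndGet(); touch(); }
      private void touch()  { lastRequestAt = System.nanoTime(); }

      private void push() {
        // The real connector sends these to the store in an extra request;
        // the sketch just resets and logs the counters.
        System.out.printf("ok=%d throttled=%d%n",
            successesWithoutRetry.getAndSet(0), throttledRequests.getAndSet(0));
      }

      @Override public void close() {
        push();                 // all metrics for the instance flush on close
        timer.shutdownNow();
      }
    }
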
---
 .../hadoop/fs/azurebfs/AbfsBackoffMetrics.java | 312 
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  45 ++
 .../hadoop/fs/azurebfs/AbfsCountersImpl.java   | 102 +++-
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java|  15 +-
 .../fs/azurebfs/constants/ConfigurationKeys.java   |   7 +
 .../constants/FileSystemConfigurations.java|   6 +
 .../constants/HttpHeaderConfigurations.java|   1 +
 .../contracts/services/AzureServiceErrorCode.java  |  18 +-
 .../hadoop/fs/azurebfs/services/AbfsClient.java| 195 +++-
 .../fs/azurebfs/services/AbfsClientContext.java|   2 +-
 .../hadoop/fs/azurebfs/services/AbfsCounters.java  |  11 +
 .../fs/azurebfs/services/AbfsInputStream.java  |  18 +-
 .../azurebfs/services/AbfsReadFooterMetrics.java   | 549 +
 .../fs/azurebfs/services/AbfsRestOperation.java| 212 ++--
 .../fs/azurebfs/services/TimerFunctionality.java   |   4 +-
 .../MetricFormat.java} |  20 +-
 .../hadoop/fs/azurebfs/utils/TracingContext.java   |  22 +-
 .../hadoop-azure/src/site/markdown/abfs.md |  43 ++
 .../azurebfs/ITestAbfsInputStreamStatistics.java   |   1 -
 .../fs/azurebfs/ITestAbfsReadFooterMetrics.java| 385 +++
 .../ITestAzureBlobFileSystemListStatus.java|   9 +-
 .../fs/azurebfs/services/AbfsClientTestUtil.java   |   9 +-
 .../fs/azurebfs/services/ITestAbfsClient.java  |  21 +-
 .../fs/azurebfs/services/TestAbfsInputStream.java  |   7 +-
 .../azurebfs/services/TestAbfsRestOperation.java   |  81 +++
 .../TestAbfsRestOperationMockFailures.java |   3 +-
 26 files changed, 2021 insertions(+), 77 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsBackoffMetrics.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsBackoffMetrics.java
new file mode 100644
index ..37dbdfffeed6
--- /dev/null
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsBackoffMetrics.java
@@ -0,0 +1,312 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicLo

(hadoop) branch trunk updated: HADOOP-18679. Add API for bulk/paged delete of files (#6726)

2024-05-20 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 47be1ab3b68b HADOOP-18679. Add API for bulk/paged delete of files 
(#6726)
47be1ab3b68b is described below

commit 47be1ab3b68b987ed8ab349fc351f438c00d9871
Author: Mukund Thakur 
AuthorDate: Mon May 20 11:05:25 2024 -0500

HADOOP-18679. Add API for bulk/paged delete of files (#6726)


Applications can create a BulkDelete instance from a
BulkDeleteSource; the BulkDelete interface provides
pageSize(): the maximum number of entries which can be
deleted in one call, and a bulkDelete(Collection paths)
method which can take a collection up to pageSize() long.

This is optimized for object stores with bulk delete APIs;
the S3A connector will offer the page size of
fs.s3a.bulk.delete.page.size unless bulk delete has
been disabled.

Even with a page size of 1, the S3A implementation is
more efficient than delete(path)
as there are no safety checks for the path being a directory
or probes for the need to recreate directories.

The interface BulkDeleteSource is implemented by
all FileSystem implementations, with a page size
of 1 and mapped to delete(pathToDelete, false).
This means that callers do not need to have special
case handling for object stores versus classic filesystems.

To aid use through reflection APIs, the class
org.apache.hadoop.io.wrappedio.WrappedIO
has been created with "reflection friendly" methods.

Contributed by Mukund Thakur and Steve Loughran
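
A short usage sketch of the API as described; the bucket, the paths, and the (path, error) failure-pair return type are assumptions not spelled out in this message:

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BulkDelete;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public final class BulkDeleteSketch {
      public static void main(String[] args) throws Exception {
        Path base = new Path("s3a://example-bucket/data/");   // placeholder
        FileSystem fs = base.getFileSystem(new Configuration());
        List<Path> page = Arrays.asList(new Path(base, "a"), new Path(base, "b"));
        try (BulkDelete bulk = fs.createBulkDelete(base)) {
          // pageSize() caps entries per call: 1 on plain filesystems,
          // fs.s3a.bulk.delete.page.size on the S3A connector.
          if (page.size() <= bulk.pageSize()) {
            List<Map.Entry<Path, String>> failures = bulk.bulkDelete(page);
            failures.forEach(e ->
                System.err.println(e.getKey() + ": " + e.getValue()));
          }
        }
      }
    }
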
---
 .../main/java/org/apache/hadoop/fs/BulkDelete.java |  90 +
 .../org/apache/hadoop/fs/BulkDeleteSource.java |  53 +++
 .../java/org/apache/hadoop/fs/BulkDeleteUtils.java |  66 
 .../apache/hadoop/fs/CommonPathCapabilities.java   |   6 +
 .../main/java/org/apache/hadoop/fs/FileSystem.java |  34 +-
 .../hadoop/fs/impl/DefaultBulkDeleteOperation.java |  97 +
 .../hadoop/fs/statistics/StoreStatisticNames.java  |   6 +
 .../org/apache/hadoop/io/wrappedio/WrappedIO.java  |  93 +
 .../org/apache/hadoop/util/functional/Tuples.java  |  87 +
 .../src/site/markdown/filesystem/bulkdelete.md | 139 +++
 .../src/site/markdown/filesystem/index.md  |   3 +-
 .../contract/AbstractContractBulkDeleteTest.java   | 336 +
 .../localfs/TestLocalFSContractBulkDelete.java |  34 ++
 .../rawlocal/TestRawLocalContractBulkDelete.java   |  35 ++
 .../contract/hdfs/TestHDFSContractBulkDelete.java  |  49 +++
 .../java/org/apache/hadoop/fs/s3a/Constants.java   |  12 +
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java| 183 +-
 .../org/apache/hadoop/fs/s3a/S3AInternals.java |  12 +-
 .../java/org/apache/hadoop/fs/s3a/S3AStore.java| 129 +++
 .../java/org/apache/hadoop/fs/s3a/Statistic.java   |   8 +
 .../hadoop/fs/s3a/impl/BulkDeleteOperation.java| 128 +++
 .../s3a/impl/BulkDeleteOperationCallbacksImpl.java | 125 +++
 .../fs/s3a/impl/MultiObjectDeleteException.java|  20 +-
 .../apache/hadoop/fs/s3a/impl/S3AStoreBuilder.java | 113 ++
 .../apache/hadoop/fs/s3a/impl/S3AStoreImpl.java| 400 +
 .../hadoop/fs/s3a/impl/StoreContextFactory.java|  35 ++
 .../markdown/tools/hadoop-aws/aws_sdk_upgrade.md   |   1 +
 .../site/markdown/tools/hadoop-aws/performance.md  |  82 -
 .../contract/s3a/ITestS3AContractBulkDelete.java   | 230 
 .../apache/hadoop/fs/s3a/AbstractS3AMockTest.java  |   3 +-
 .../apache/hadoop/fs/s3a/TestS3ADeleteOnExit.java  |   3 +-
 .../apache/hadoop/fs/s3a/auth/ITestAssumeRole.java | 133 ++-
 .../fs/s3a/scale/AbstractSTestS3AHugeFiles.java|   2 +
 .../contract/ITestAbfsContractBulkDelete.java  |  50 +++
 .../src/test/resources/log4j.properties|   1 +
 35 files changed, 2679 insertions(+), 119 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BulkDelete.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BulkDelete.java
new file mode 100644
index ..ab5f73b5624f
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/BulkDelete.java
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distribut

(hadoop) branch trunk updated: MAPREDUCE-7475. Fix non-idempotent unit tests (#6785)

2024-05-17 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 41eacf4914ff MAPREDUCE-7475. Fix non-idempotent unit tests (#6785)
41eacf4914ff is described below

commit 41eacf4914ffc6129f293440a9bcc99723ef3e64
Author: Kaiyao Ke <47203510+kaiya...@users.noreply.github.com>
AuthorDate: Fri May 17 08:51:47 2024 -0500

MAPREDUCE-7475. Fix non-idempotent unit tests (#6785)


Contributed by Kaiyao Ke
---
 .../mapreduce/v2/app/webapp/TestAppController.java |  2 ++
 .../java/org/apache/hadoop/mapred/TestMapTask.java | 18 -
 .../hadoop/mapred/TestTaskProgressReporter.java|  6 ++
 .../apache/hadoop/mapred/NotificationTestCase.java |  2 ++
 .../hadoop/mapred/TestOldCombinerGrouping.java | 23 ++
 .../hadoop/mapreduce/TestNewCombinerGrouping.java  | 23 ++
 6 files changed, 53 insertions(+), 21 deletions(-)

diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAppController.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAppController.java
index ba5c43012146..473681c3e424 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAppController.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAppController.java
@@ -319,6 +319,8 @@ public class TestAppController {
 appController.attempts();
 
 assertEquals(AttemptsPage.class, appController.getClazz());
+
+appController.getProperty().remove(AMParams.ATTEMPT_STATE);
   }
 
 }
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestMapTask.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestMapTask.java
index fef179994f09..771a5313ec32 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestMapTask.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestMapTask.java
@@ -32,6 +32,7 @@ import org.apache.hadoop.mapreduce.TaskCounter;
 import org.apache.hadoop.mapreduce.TaskType;
 import org.apache.hadoop.util.Progress;
 import org.junit.After;
+import org.junit.Before;
 import org.junit.Assert;
 import org.junit.Rule;
 import org.junit.Test;
@@ -47,14 +48,21 @@ import static org.mockito.Mockito.doReturn;
 import static org.mockito.Mockito.mock;
 
 public class TestMapTask {
-  private static File TEST_ROOT_DIR = new File(
+  private static File testRootDir = new File(
   System.getProperty("test.build.data",
   System.getProperty("java.io.tmpdir", "/tmp")),
   TestMapTask.class.getName());
 
+  @Before
+  public void setup() throws Exception {
+if(!testRootDir.exists()) {
+  testRootDir.mkdirs();
+}
+  }
+
   @After
   public void cleanup() throws Exception {
-FileUtil.fullyDelete(TEST_ROOT_DIR);
+FileUtil.fullyDelete(testRootDir);
   }
 
   @Rule
@@ -66,7 +74,7 @@ public class TestMapTask {
   public void testShufflePermissions() throws Exception {
 JobConf conf = new JobConf();
 conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY, "077");
-conf.set(MRConfig.LOCAL_DIR, TEST_ROOT_DIR.getAbsolutePath());
+conf.set(MRConfig.LOCAL_DIR, testRootDir.getAbsolutePath());
 MapOutputFile mof = new MROutputFiles();
 mof.setConf(conf);
 TaskAttemptID attemptId = new TaskAttemptID("12345", 1, TaskType.MAP, 1, 
1);
@@ -98,7 +106,7 @@ public class TestMapTask {
   public void testSpillFilesCountLimitInvalidValue() throws Exception {
 JobConf conf = new JobConf();
 conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY, "077");
-conf.set(MRConfig.LOCAL_DIR, TEST_ROOT_DIR.getAbsolutePath());
+conf.set(MRConfig.LOCAL_DIR, testRootDir.getAbsolutePath());
 conf.setInt(MRJobConfig.SPILL_FILES_COUNT_LIMIT, -2);
 MapOutputFile mof = new MROutputFiles();
 mof.setConf(conf);
@@ -124,7 +132,7 @@ public class TestMapTask {
   public void testSpillFilesCountBreach() throws Exception {
 JobConf conf = new JobConf();
 conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY, "077");
-conf.set(MRConfig.LOCAL_DIR, TEST_ROOT_DIR.getAbsolutePath());
+conf.set(MRConfig.LOCAL_DIR, testRootDir.getAbsolutePath());
 conf.setInt(MRJobConfig.SPILL_FILES_COU

(hadoop) branch branch-3.3 updated: HADOOP-19172. S3A: upgrade AWS v1 sdk to 1.12.720 (#6823)

2024-05-16 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 5b1346fb888c HADOOP-19172. S3A: upgrade AWS v1 sdk to 1.12.720 (#6823)
5b1346fb888c is described below

commit 5b1346fb888c6997dd8d79e72948a337e61ef685
Author: Steve Loughran 
AuthorDate: Thu May 16 15:00:34 2024 +0100

HADOOP-19172. S3A: upgrade AWS v1 sdk to 1.12.720 (#6823)


Contributed by Steve Loughran
---
 LICENSE-binary | 2 +-
 hadoop-project/pom.xml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index 43866bce657f..890f1a75f38d 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -214,7 +214,7 @@ com.aliyun:aliyun-java-sdk-kms:2.11.0
 com.aliyun:aliyun-java-sdk-ram:3.1.0
 com.aliyun:aliyun-java-sdk-sts:3.0.0
 com.aliyun.oss:aliyun-sdk-oss:3.13.0
-com.amazonaws:aws-java-sdk-bundle:1.12.499
+com.amazonaws:aws-java-sdk-bundle:1.12.720
 com.cedarsoftware:java-util:1.9.0
 com.cedarsoftware:json-io:2.5.1
 com.fasterxml.jackson.core:jackson-annotations:2.12.7
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index f9158e833fc3..13318c07cb85 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -191,7 +191,7 @@
 1.3.1
 1.0-beta-1
 900
-1.12.499
+1.12.720
 2.7.1
 1.11.2
 2.1





(hadoop) branch branch-3.4 updated: HADOOP-19172. S3A: upgrade AWS v1 sdk to 1.12.720 (#6823)

2024-05-15 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
 new 707246f8d7cf HADOOP-19172. S3A: upgrade AWS v1 sdk to 1.12.720 (#6823)
707246f8d7cf is described below

commit 707246f8d7cf7d43554df8f4ddf8e60ef9ab9ad1
Author: Steve Loughran 
AuthorDate: Wed May 15 14:40:39 2024 +0100

HADOOP-19172. S3A: upgrade AWS v1 sdk to 1.12.720 (#6823)

+remove reference in LICENSE-binary as it is no longer shipped

Contributed by Steve Loughran

Change-Id: I0c17fdfe7d6e73114760c638f7149f5fd3d986ed
---
 LICENSE-binary | 1 -
 hadoop-project/pom.xml | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index fb910908c0de..8f73a5def8d9 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -216,7 +216,6 @@ com.aliyun:aliyun-java-sdk-kms:2.11.0
 com.aliyun:aliyun-java-sdk-ram:3.1.0
 com.aliyun:aliyun-java-sdk-sts:3.0.0
 com.aliyun.oss:aliyun-sdk-oss:3.13.2
-com.amazonaws:aws-java-sdk-bundle:1.12.599
 com.cedarsoftware:java-util:1.9.0
 com.cedarsoftware:json-io:2.5.1
 com.fasterxml.jackson.core:jackson-annotations:2.12.7
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 316c59bae7f2..889f8c94b47c 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -186,7 +186,7 @@
 1.3.1
 1.0-beta-1
 900
-1.12.599
+1.12.720
 2.24.6
 1.0.1
 2.7.1





(hadoop) branch trunk updated: HADOOP-19172. S3A: upgrade AWS v1 sdk to 1.12.720 (#6823)

2024-05-15 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new cfdf1f5e8e08 HADOOP-19172. S3A: upgrade AWS v1 sdk to 1.12.720 (#6823)
cfdf1f5e8e08 is described below

commit cfdf1f5e8e0826f10d746cb18baf766cd1e5a080
Author: Steve Loughran 
AuthorDate: Wed May 15 14:40:39 2024 +0100

HADOOP-19172. S3A: upgrade AWS v1 sdk to 1.12.720 (#6823)


+remove reference in LICENSE-binary as it is no longer shipped

Contributed by Steve Loughran
---
 LICENSE-binary | 1 -
 hadoop-project/pom.xml | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/LICENSE-binary b/LICENSE-binary
index ddce4209cc50..8e2c57b1032b 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -216,7 +216,6 @@ com.aliyun:aliyun-java-sdk-kms:2.11.0
 com.aliyun:aliyun-java-sdk-ram:3.1.0
 com.aliyun:aliyun-java-sdk-sts:3.0.0
 com.aliyun.oss:aliyun-sdk-oss:3.13.2
-com.amazonaws:aws-java-sdk-bundle:1.12.599
 com.cedarsoftware:java-util:1.9.0
 com.cedarsoftware:json-io:2.5.1
 com.fasterxml.jackson.core:jackson-annotations:2.12.7
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 945360a9446e..58576b428799 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -186,7 +186,7 @@
 1.3.1
 1.0-beta-1
 900
-1.12.599
+1.12.720
 2.24.6
 1.0.1
 2.7.1




