Re: [VOTE] Release Apache Hadoop 3.2.4 - RC0
Hi developers, I'm still waiting for your vote. I consider the intermittent test failures mentioned by Chris to be non-blockers. Please file a JIRA and let me know if you find a blocking issue. I would appreciate your help with the release process.

Regards,
Masatake Iwasaki

On 2022/07/20 14:50, Masatake Iwasaki wrote:

I can see the reported failure of TestServiceAM in some "Apache Hadoop qbt Report: branch-3.2+JDK8 on Linux/x86_64" runs. 3.3.0 and above might be fixed by YARN-8867, which added a guard using GenericTestUtils#waitFor to stabilize testContainersReleasedWhenPreLaunchFails. YARN-8867 did not modify other code under hadoop-yarn-services. If that is the case, TestServiceAM can be tagged as flaky in branch-3.2.

On 2022/07/20 14:21, Masatake Iwasaki wrote:

Thanks for testing the RC0, Chris. The following are new test failures for me on 3.2.4:
* TestAMRMProxy
* TestFsck
* TestSLSStreamAMSynth
* TestServiceAM

I could not reproduce the test failures locally. For TestFsck, if the failed test case is testFsckListCorruptSnapshotFiles, cherry-picking HDFS-15038 (which fixes only test code) could be the fix. The failure of TestSLSStreamAMSynth is frequently reported by "Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64"; it could be tagged as a known flaky test.

On 2022/07/20 9:15, Chris Nauroth wrote:

-0 (binding)

* Verified all checksums.
* Verified all signatures.
* Built from source, including native code on Linux.
  * mvn clean package -Pnative -Psrc -Drequire.openssl -Drequire.snappy -Drequire.zstd -DskipTests
* Tests mostly passed, but see below.
  * mvn --fail-never clean test -Pnative -Dparallel-tests -Drequire.snappy -Drequire.zstd -Drequire.openssl -Dsurefire.rerunFailingTestsCount=3 -DtestsThreadCount=8

The following are new test failures for me on 3.2.4:
* TestAMRMProxy
* TestFsck
* TestSLSStreamAMSynth
* TestServiceAM

The following tests also failed, but they also fail for me on 3.2.3, so they aren't likely to be related to this release candidate:
* TestCapacitySchedulerNodeLabelUpdate
* TestFrameworkUploader
* TestSLSGenericSynth
* TestSLSRunner
* test_libhdfs_threaded_hdfspp_test_shim_static

I'm not voting a full -1, because I haven't done any root cause analysis on these new test failures. I don't know if it's a quirk of my environment, though I'm using the start-build-env.sh Docker container, so any build dependencies should be consistent. I'd be comfortable moving ahead if others are seeing these tests pass.

Chris Nauroth

On Thu, Jul 14, 2022 at 7:57 AM Masatake Iwasaki wrote:

+1 from myself.

* Skimmed the contents of the site documentation.
* Built the source tarball on Rocky Linux 8 (x86_64) with OpenJDK 8 and `-Pnative`.
* Launched a pseudo-distributed cluster, including KMS and HttpFS, with Kerberos and SSL enabled.
* Created an encryption zone, then put and read files via HttpFS.
* Ran the example MR wordcount job over the encryption zone.
* Launched a 3-node Docker cluster with NN-HA and RM-HA enabled and ran some example MR jobs.
* Built HBase 2.4.11, Hive 3.1.2, and Spark 3.1.2 against Hadoop 3.2.4 RC0 on CentOS 7 (x86_64) using Bigtop branch-3.1 and ran smoke tests. https://github.com/apache/bigtop/pull/942
  * Hive needs an updated exclusion rule to address HADOOP-18088 (migration to reload4j).
* Built Spark 3.3.0 against Hadoop 3.2.4 RC0 using the staging repository:

    <repository>
      <id>staged</id>
      <name>staged-releases</name>
      <url>https://repository.apache.org/content/repositories/orgapachehadoop-1354</url>
      <releases><enabled>true</enabled></releases>
      <snapshots><enabled>true</enabled></snapshots>
    </repository>

Thanks,
Masatake Iwasaki

On 2022/07/13 1:14, Masatake Iwasaki wrote:

Hi all,

Here's Hadoop 3.2.4 release candidate #0:

The RC is available at:
https://home.apache.org/~iwasakims/hadoop-3.2.4-RC0/

The RC tag is at:
https://github.com/apache/hadoop/releases/tag/release-3.2.4-RC0

The Maven artifacts are staged at:
https://repository.apache.org/content/repositories/orgapachehadoop-1354

You can find my public key at:
https://downloads.apache.org/hadoop/common/KEYS

Please evaluate the RC and vote. The vote will be open for (at least) 5 days.

Thanks,
Masatake Iwasaki

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
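For context on the YARN-8867 fix mentioned earlier in the thread: GenericTestUtils#waitFor polls a condition until it holds or a timeout expires, which is how testContainersReleasedWhenPreLaunchFails was stabilized. A minimal self-contained sketch of that polling pattern (a simplified reimplementation for illustration, not the actual Hadoop class):

```java
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public class WaitFor {
    // Simplified version of GenericTestUtils#waitFor: poll `check` every
    // `checkEveryMillis` until it returns true, or throw TimeoutException
    // once `waitForMillis` has elapsed.
    public static void waitFor(Supplier<Boolean> check,
                               long checkEveryMillis,
                               long waitForMillis)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + waitForMillis;
        while (!check.get()) {
            if (System.currentTimeMillis() > deadline) {
                throw new TimeoutException(
                    "Condition not met within " + waitForMillis + " ms");
            }
            Thread.sleep(checkEveryMillis);
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~200 ms; waitFor returns normally
        // instead of the test asserting on a state that is not yet reached.
        waitFor(() -> System.currentTimeMillis() - start > 200, 10, 5000);
        System.out.println("condition met");
    }
}
```

Wrapping an assertion target in such a guard is what turns a timing-sensitive test into one that only fails when the condition genuinely never holds.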
[jira] [Resolved] (HADOOP-17948) JAR in conflict with timestamp check causes AM errors
[ https://issues.apache.org/jira/browse/HADOOP-17948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth resolved HADOOP-17948.
Resolution: Duplicate

I'm closing this as a duplicate of YARN-3606. I don't expect changes will be made in this area, because the timestamp check has worked well as a fast and lightweight check for unexpected resource changes. YARN-3606 contains more discussion.

> JAR in conflict with timestamp check causes AM errors
> -----------------------------------------------------
>
> Key: HADOOP-17948
> URL: https://issues.apache.org/jira/browse/HADOOP-17948
> Project: Hadoop Common
> Issue Type: Bug
> Components: common
> Affects Versions: 2.9.2
> Reporter: Michael Taylor
> Priority: Blocker
>
> After an init action pulls down a new JAR and the check of the JAR's timestamp
> is performed [1], we can sometimes see a spurious error if the timestamp
> does not match. To address this, you can apply workarounds like:
>
> # record the old timestamp at the beginning, before the connector is changed
> local -r old_file_time=$(date -r ${dataproc_common_lib_dir}/gcs-connector.jar "+%m%d%H%M.00")
> # at the end of the script:
> touch -t "${old_file_time}" ${dataproc_common_lib_dir}/gcs-connector.jar
> touch -h -t "${old_file_time}" ${dataproc_common_lib_dir}/gcs-connector.jar
>
> Instead of checking the date, we should be comparing version compatibility.
>
> 1. https://github.com/apache/hadoop/blob/release-2.7.3-RC2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java#L255-L258

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
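The workaround quoted in the report records the JAR's modification time and restores it after the file is replaced, so the localizer's timestamp check still matches. The same idea can be sketched in plain Java with java.nio (the temp file here is a hypothetical stand-in for gcs-connector.jar):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

public class PreserveMtime {
    public static void main(String[] args) throws IOException {
        // Hypothetical stand-in for the localized connector JAR.
        Path jar = Files.createTempFile("connector", ".jar");

        // 1. Record the original modification time before changing the file.
        FileTime original = Files.getLastModifiedTime(jar);

        // 2. Replace the file contents (simulates the init action
        //    swapping in a new JAR).
        Files.write(jar, new byte[] {1, 2, 3});

        // 3. Restore the recorded mtime so a timestamp-based check
        //    (like FSDownload's) still matches the registered resource.
        Files.setLastModifiedTime(jar, original);

        if (!Files.getLastModifiedTime(jar).equals(original)) {
            throw new AssertionError("mtime was not restored");
        }
        Files.delete(jar);
        System.out.println("mtime restored");
    }
}
```

As the resolution notes, this remains a workaround: the timestamp check itself is intentionally cheap, and YARN-3606 carries the discussion of alternatives.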
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/927/

[Jul 19, 2022, 5:15:59 AM] (noreply) HDFS-1. Pass CMake args for Windows in pom.xml (#4574)
[Jul 19, 2022, 4:07:22 PM] (noreply) HDFS-16464. Create only libhdfspp static libraries for Windows (#4571)
[Jul 19, 2022, 4:09:06 PM] (noreply) HDFS-16665. Fix duplicate sources for HDFS test (#4573)

-1 overall

The following subsystems voted -1:
    blanks pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML: Parsing Error(s):
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

Failed junit tests:
    hadoop.hdfs.TestRollingUpgrade

cc: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/927/artifact/out/results-compile-cc-root.txt [96K]
javac: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/927/artifact/out/results-compile-javac-root.txt [540K]
blanks: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/927/artifact/out/blanks-eol.txt [13M]
        https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/927/artifact/out/blanks-tabs.txt [2.0M]
checkstyle: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/927/artifact/out/results-checkstyle-root.txt [14M]
pathlen: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/927/artifact/out/results-pathlen.txt [16K]
pylint: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/927/artifact/out/results-pylint.txt [20K]
shellcheck: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/927/artifact/out/results-shellcheck.txt [28K]
xml: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/927/artifact/out/xml.txt [24K]
javadoc: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/927/artifact/out/results-javadoc-javadoc-root.txt [400K]
unit: https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/927/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [2.2M]

Powered by Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org
Apache Hadoop qbt Report: branch-3.2+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.2-java8-linux-x86_64/56/

No changes
[jira] [Created] (HADOOP-18353) HEAD OBJECT returns only 400 BAD REQUEST when token is expired
Mukund Thakur created HADOOP-18353:
-----------------------------------
Summary: HEAD OBJECT returns only 400 BAD REQUEST when token is expired
Key: HADOOP-18353
URL: https://issues.apache.org/jira/browse/HADOOP-18353
Project: Hadoop Common
Issue Type: Sub-task
Components: fs/s3
Affects Versions: 3.3.3
Reporter: Mukund Thakur

I tried reproducing this today by changing this test:
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ATemporaryCredentials.java#L116
I got a session token valid for 15 minutes and retried every minute; it finally fails after 15 minutes. It looks like the AWS SDK does not surface the ExpiredToken error message: I can see it in the access logs, but I see only BadRequest in the SDK logs.

S3A connector logs with SDK debug enabled:

2022-07-13 15:44:15,318 [JUnit-testSTS] DEBUG s3a.AWSCredentialProviderList (AWSCredentialProviderList.java:getCredentials(184)) - Using credentials from TemporaryAWSCredentialsProvider
2022-07-13 15:44:15,319 [JUnit-testSTS] DEBUG amazonaws.request (AmazonHttpClient.java:executeOneRequest(1285)) - Sending Request: HEAD https://mthakur-us-west-1.s3.us-west-1.amazonaws.com /test/testSTS/040112e1-d954-46d9-9def-aedd297bd42e Headers: (amz-sdk-invocation-id: 41e6e504-1c2b-2701-09bb-ae692dff2515, Content-Type: application/octet-stream, Referer: https://audit.example.org/hadoop/1/op_create/ca2778f8-085e-4d1f-aef3-73794869f275-0098/?op=op_create=test/testSTS/040112e1-d954-46d9-9def-aedd297bd42e=mthakur=46c6d232-80aa-4405-9e39-5df880932fdc=ca2778f8-085e-4d1f-aef3-73794869f275-0098=11=ca2778f8-085e-4d1f-aef3-73794869f275=11=1657745055318, User-Agent: Hadoop 3.4.0-SNAPSHOT, aws-sdk-java/1.12.132 Mac_OS_X/10.15.7 Java_HotSpot(TM)_64-Bit_Server_VM/25.161-b12 java/1.8.0_161 kotlin/1.4.10 vendor/Oracle_Corporation cfg/retry-mode/legacy, )
2022-07-13 15:44:15,623 [JUnit-testSTS] DEBUG amazonaws.request (AmazonHttpClient.java:handleErrorResponse(1846)) - Received error response: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: WMGQ0KC4MHEMZTQC; S3 Extended Request ID: IztdwNq71aWBYavfaj8rV5b/Y0GzV4tqJBEVDSdZH+RRR3B1vUVIMV0qWez9ulBrjDM1GQxeT1Q=; Proxy: null), S3 Extended Request ID: IztdwNq71aWBYavfaj8rV5b/Y0GzV4tqJBEVDSdZH+RRR3B1vUVIMV0qWez9ulBrjDM1GQxeT1Q=
2022-07-13 15:44:15,624 [JUnit-testSTS] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:close(3814)) - Filesystem s3a://mthakur-us-west-1 is closed

AWS access logs:

183c9826b45486e485693808f38e2c4071004bf5dfd4c3ab210f0a21a4235ef8 mthakur-us-west-1 [13/Jul/2022:20:44:15 +0000] 67.79.115.98 - WMGQ0KC4MHEMZTQC REST.HEAD.OBJECT test/testSTS/040112e1-d954-46d9-9def-aedd297bd42e "HEAD /test/testSTS/040112e1-d954-46d9-9def-aedd297bd42e HTTP/1.1" 400 ExpiredToken 556 - 5 - "https://audit.example.org/hadoop/1/op_create/ca2778f8-085e-4d1f-aef3-73794869f275-0098/?op=op_create=test/testSTS/040112e1-d954-46d9-9def-aedd297bd42e=mthakur=46c6d232-80aa-4405-9e39-5df880932fdc=ca2778f8-085e-4d1f-aef3-73794869f275-0098=11=ca2778f8-085e-4d1f-aef3-73794869f275=11=1657745055318" "Hadoop 3.4.0-SNAPSHOT, aws-sdk-java/1.12.132 Mac_OS_X/10.15.7 Java_HotSpot(TM)_64-Bit_Server_VM/25.161-b12 java/1.8.0_161 kotlin/1.4.10 vendor/Oracle_Corporation cfg/retry-mode/legacy" - IztdwNq71aWBYavfaj8rV5b/Y0GzV4tqJBEVDSdZH+RRR3B1vUVIMV0qWez9ulBrjDM1GQxeT1Q= SigV4 ECDHE-RSA-AES128-SHA AuthHeader mthakur-us-west-1.s3.us-west-1.amazonaws.com TLSv1.2 -

I tested by repeatedly running ITestCustomSigner in S3A, and also just ListObjectsV2 in a loop… I did just notice your test is failing with HEAD, and I can reproduce it by running this after credential expiry:

aws s3api head-object --bucket djonesoa-us-west-2 --region us-west-2 --key test-object --debug

To summarise:
* If I run ListObjectsV2, I get "400 ExpiredToken"
* If I run HeadObject, I get "400 Bad Request"
* If I run GetObject, I get "400 ExpiredToken"
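One plausible explanation for the HEAD/GET difference (an assumption, not confirmed anywhere in this thread) is that HTTP HEAD responses carry no body, so S3 cannot return the XML error document containing the ExpiredToken code, and the SDK falls back to the generic "400 Bad Request". A small self-contained sketch of how a caller might classify such failures (a hypothetical helper, not S3A code):

```java
public class S3ErrorClassifier {
    // Hypothetical helper: decide what a 400 from S3 likely means.
    // For HEAD requests the error response has no XML body, so the SDK
    // reports only "400 Bad Request" and the caller cannot distinguish
    // an expired token from any other bad request.
    public static String classify(int statusCode, String errorCode,
                                  boolean headRequest) {
        if (statusCode != 400) {
            return "other";
        }
        if ("ExpiredToken".equals(errorCode)) {
            return "expired-token";
        }
        if (headRequest
                && (errorCode == null || "400 Bad Request".equals(errorCode))) {
            return "possibly-expired-token"; // no error body on HEAD
        }
        return "bad-request";
    }

    public static void main(String[] args) {
        // GET/LIST carry an error body, so the code is visible.
        System.out.println(S3ErrorClassifier.classify(400, "ExpiredToken", false));
        // HEAD yields only the status line, as in the SDK log above.
        System.out.println(S3ErrorClassifier.classify(400, "400 Bad Request", true));
    }
}
```

This is why the access log (server side) shows ExpiredToken while the SDK log (client side) shows only Bad Request for the same request ID.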
[jira] [Created] (HADOOP-18352) Support AWS SSO for providing credentials to S3A
Daniel Carl Jones created HADOOP-18352:
---------------------------------------
Summary: Support AWS SSO for providing credentials to S3A
Key: HADOOP-18352
URL: https://issues.apache.org/jira/browse/HADOOP-18352
Project: Hadoop Common
Issue Type: Sub-task
Components: fs/s3
Affects Versions: 3.3.3
Reporter: Daniel Carl Jones

HADOOP-18073 was opened regarding upgrading the AWS SDK to V2, which supports a credential provider for AWS SSO. Opening this ticket to track that feature explicitly, separately from the SDK upgrade.

Related SDK issue for AWS SSO support in AWS SDK for Java V1: https://github.com/aws/aws-sdk-java/issues/2434
[jira] [Created] (HADOOP-18351) Error logging during reads
Ahmar Suhail created HADOOP-18351:
----------------------------------
Summary: Error logging during reads
Key: HADOOP-18351
URL: https://issues.apache.org/jira/browse/HADOOP-18351
Project: Hadoop Common
Issue Type: Sub-task
Reporter: Ahmar Suhail

Look at how errors during read are logged; the current implementation could flood the logs with stack traces on failures. Proposed:
* errors in prefetch are logged at INFO with the error text but no stack trace
* the full stack is logged at DEBUG
* but: we do want the most recent failure to be raised on the next read() on the stream when there is no data in the cache
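The proposed policy — error text at INFO without the stack, full stack only at DEBUG, and the latest failure rethrown on the next read() when the cache is empty — might look roughly like this sketch (java.util.logging stands in for the real logger; this is not the actual S3A prefetch code):

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class PrefetchErrorPolicy {
    private static final Logger LOG = Logger.getLogger("prefetch");

    // Most recent prefetch failure, to be raised on the next read()
    // if the cache has no data to serve.
    private IOException lastFailure;

    public void onPrefetchError(IOException e) {
        // Error text only at INFO; the full stack trace only at DEBUG
        // (FINE here), so repeated failures do not flood the logs.
        LOG.info("prefetch failed: " + e.getMessage());
        LOG.log(Level.FINE, "prefetch failure stack trace", e);
        lastFailure = e;
    }

    public int read(byte[] cached) throws IOException {
        if (cached.length == 0 && lastFailure != null) {
            IOException e = lastFailure;
            lastFailure = null;
            throw e; // surface the most recent failure to the caller
        }
        return cached.length; // placeholder for a real cached read
    }

    public static void main(String[] args) throws IOException {
        PrefetchErrorPolicy p = new PrefetchErrorPolicy();
        p.onPrefetchError(new IOException("simulated prefetch failure"));
        try {
            p.read(new byte[0]); // empty cache: failure is rethrown
        } catch (IOException expected) {
            System.out.println("rethrown: " + expected.getMessage());
        }
        p.read(new byte[] {1, 2}); // cache has data: no exception
    }
}
```

The key design point is that quiet logging and error propagation are separated: the stream stays quiet in the logs but the caller still sees the failure where it matters.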
Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/

No changes

-1 overall

The following subsystems voted -1:
    asflicense hadolint mvnsite pathlen unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

Failed junit tests:
    hadoop.fs.TestFileUtil
    hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
    hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
    hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
    hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
    hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
    hadoop.hdfs.server.federation.router.TestRouterQuota
    hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
    hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
    hadoop.yarn.server.resourcemanager.TestClientRMService
    hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
    hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
    hadoop.mapreduce.lib.input.TestLineRecordReader
    hadoop.mapred.TestLineRecordReader
    hadoop.mapreduce.v2.app.TestRuntimeEstimators
    hadoop.tools.TestDistCpSystem
    hadoop.yarn.sls.TestSLSRunner
    hadoop.resourceestimator.solver.impl.TestLpSolver
    hadoop.resourceestimator.service.TestResourceEstimatorService

cc: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/diff-compile-cc-root.txt [4.0K]
javac: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/diff-compile-javac-root.txt [488K]
checkstyle: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/diff-checkstyle-root.txt [14M]
hadolint: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/diff-patch-hadolint.txt [4.0K]
mvnsite: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/patch-mvnsite-root.txt [568K]
pathlen: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/pathlen.txt [12K]
pylint: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/diff-patch-pylint.txt [20K]
shellcheck: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/diff-patch-shellcheck.txt [72K]
whitespace: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/whitespace-eol.txt [12M]
            https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/whitespace-tabs.txt [1.3M]
javadoc: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/patch-javadoc-root.txt [40K]
unit: https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [220K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [428K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [16K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt [36K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt [20K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [116K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [104K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [44K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt [28K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/728/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt [20K]
[jira] [Created] (HADOOP-18350) support for hadoop-aws with aws-java-sdk-bundle with version greater than 1.12.220
Bilna created HADOOP-18350:
---------------------------
Summary: support for hadoop-aws with aws-java-sdk-bundle with version greater than 1.12.220
Key: HADOOP-18350
URL: https://issues.apache.org/jira/browse/HADOOP-18350
Project: Hadoop Common
Issue Type: Wish
Components: hadoop-thirdparty
Reporter: Bilna

Many CVEs are reported against aws-java-sdk-bundle 1.11.375, and the fixes are available in versions higher than 1.12.220. It would be great to have hadoop-aws with the latest version of aws-java-sdk-bundle.jar.
[jira] [Created] (HADOOP-18349) Support EKS for IAM service account
vikas koppineedi created HADOOP-18349:
--------------------------------------
Summary: Support EKS for IAM service account
Key: HADOOP-18349
URL: https://issues.apache.org/jira/browse/HADOOP-18349
Project: Hadoop Common
Issue Type: New Feature
Reporter: vikas koppineedi

Unable to use the following for authenticating to AWS:
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token