[jira] [Commented] (HADOOP-18342) Upgrade to Avro 1.11.1

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816277#comment-17816277
 ] 

ASF GitHub Bot commented on HADOOP-18342:
-----------------------------------------

hadoop-yetus commented on PR #4854:
URL: https://github.com/apache/hadoop/pull/4854#issuecomment-1936868233

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 18s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  8s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  35m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 10s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  19m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  20m 14s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   8m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   7m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 18s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   2m 11s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/4/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html)
 |  hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs 
warnings.  |
   | +0 :ok: |  spotbugs  |   0m 20s |  |  
branch/hadoop-client-modules/hadoop-client no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 20s |  |  
branch/hadoop-client-modules/hadoop-client-minicluster no spotbugs output file 
(spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |  30m 22s | 
[/branch-spotbugs-root-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/4/artifact/out/branch-spotbugs-root-warnings.html)
 |  root in trunk has 5 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  66m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 36s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |  33m 48s | 
[/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/4/artifact/out/patch-mvninstall-root.txt)
 |  root in the patch failed.  |
   | -1 :x: |  mvninstall  |   1m  6s | 
[/patch-mvninstall-hadoop-mapreduce-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/4/artifact/out/patch-mvninstall-hadoop-mapreduce-project.txt)
 |  hadoop-mapreduce-project in the patch failed.  |
   | -1 :x: |  compile  |  16m 20s | 
[/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/4/artifact/out/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javac  |  16m 20s | 
[/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/4/artifact/out/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |  15m 26s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/4/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | -1 :x: |  javac  |  15m 26s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/4/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 17s | 
Re: [PR] HADOOP-18342: shaded avro jar [hadoop]

2024-02-09 Thread via GitHub


hadoop-yetus commented on PR #4854:
URL: https://github.com/apache/hadoop/pull/4854#issuecomment-1936868233

[jira] [Commented] (HADOOP-19057) S3 public test bucket landsat-pds unreadable -needs replacement

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816271#comment-17816271
 ] 

ASF GitHub Bot commented on HADOOP-19057:
-----------------------------------------

virajjasani commented on PR #6515:
URL: https://github.com/apache/hadoop/pull/6515#issuecomment-1936824678

   @ahmarsuhail are you aware of any criteria used by Amazon to recycle public 
buckets?




> S3 public test bucket landsat-pds unreadable -needs replacement
> ---
>
> Key: HADOOP-19057
> URL: https://issues.apache.org/jira/browse/HADOOP-19057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0, 3.2.4, 3.3.9, 3.3.6, 3.5.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>  Labels: pull-request-available
>
> The s3 test bucket used in hadoop-aws tests of S3 select and large file reads 
> is no longer publicly accessible
> {code}
> java.nio.file.AccessDeniedException: landsat-pds: getBucketMetadata() on 
> landsat-pds: software.amazon.awssdk.services.s3.model.S3Exception: null 
> (Service: S3, Status Code: 403, Request ID: 06QNYQ9GND5STQ2S, Extended 
> Request ID: 
> O+u2Y1MrCQuuSYGKRAWHj/5LcDLuaFS8owNuXXWSJ0zFXYfuCaTVLEP351S/umti558eKlUqV6U=):null
> {code}
> * Because HADOOP-18830 has cut s3 select, all we need in 3.4.1+ is a large 
> file for some reading tests
> * changing the default value disables s3 select tests on older releases
> * if fs.s3a.scale.test.csvfile is set to " " then other tests which need it 
> will be skipped
> Proposed
> * we locate a new large file under the (requester pays) s3a://usgs-landsat/ 
> bucket . All releases with HADOOP-18168 can use this
> * update 3.4.1 source to use this; document it
> * do something similar for 3.3.9 + maybe even cut s3 select there too.
> * document how to use it on older releases with requester-pays support
> * document how to completely disable it on older releases.
> h2. How to fix (most) landsat test failures on older releases
> add this to your auth-keys.xml file. Expect some failures in a few tests 
> with-hardcoded references to the bucket (assumed role delegation tokens)
> {code}
>   <property>
>     <name>fs.s3a.scale.test.csvfile</name>
>     <value>s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz</value>
>     <description>file used in scale tests</description>
>   </property>
>   <property>
>     <name>fs.s3a.bucket.noaa-cors-pds.endpoint.region</name>
>     <value>us-east-1</value>
>   </property>
>   <property>
>     <name>fs.s3a.bucket.noaa-isd-pds.multipart.purge</name>
>     <value>false</value>
>     <description>Don't try to purge uploads in the read-only bucket, as
>       it will only create log noise.</description>
>   </property>
>   <property>
>     <name>fs.s3a.bucket.noaa-isd-pds.probe</name>
>     <value>0</value>
>     <description>Let's postpone existence checks to the first IO operation
>     </description>
>   </property>
>   <property>
>     <name>fs.s3a.bucket.noaa-isd-pds.audit.add.referrer.header</name>
>     <value>false</value>
>     <description>Do not add the referrer header</description>
>   </property>
>   <property>
>     <name>fs.s3a.bucket.noaa-isd-pds.prefetch.block.size</name>
>     <value>128k</value>
>     <description>Use a small prefetch size so tests fetch multiple
>       blocks</description>
>   </property>
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
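The per-bucket options in the auth-keys.xml snippet above (keys of the form fs.s3a.bucket.BUCKET.option) shadow the corresponding generic fs.s3a.option for that bucket only. A minimal sketch of that resolution rule in plain Java; this is an illustration under assumed names, not the actual S3A implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Illustration only: mimics how S3A-style per-bucket keys such as
// fs.s3a.bucket.noaa-isd-pds.probe override the generic fs.s3a.probe
// when a filesystem for that bucket is instantiated. Class and method
// names here are hypothetical.
public class PerBucketConfig {

    static Map<String, String> resolveForBucket(Map<String, String> conf,
                                                String bucket) {
        String prefix = "fs.s3a.bucket." + bucket + ".";
        Map<String, String> resolved = new HashMap<>(conf);
        for (Map.Entry<String, String> e : conf.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                // fs.s3a.bucket.<bucket>.probe -> fs.s3a.probe
                String generic = "fs.s3a." + e.getKey().substring(prefix.length());
                resolved.put(generic, e.getValue());
            }
        }
        return resolved;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("fs.s3a.probe", "1");                      // generic default
        conf.put("fs.s3a.bucket.noaa-isd-pds.probe", "0");  // per-bucket override
        System.out.println(resolveForBucket(conf, "noaa-isd-pds").get("fs.s3a.probe"));
        // prints 0 for noaa-isd-pds; other buckets still resolve to 1
    }
}
```

This is why the snippet can disable probes, purging, and referrer headers for the read-only noaa-isd-pds bucket without affecting tests that run against other buckets.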



Re: [PR] HADOOP-19057. S3A: Landsat bucket used in tests no longer accessible [hadoop]

2024-02-09 Thread via GitHub


virajjasani commented on PR #6515:
URL: https://github.com/apache/hadoop/pull/6515#issuecomment-1936824678

   @ahmarsuhail are you aware of any criteria used by Amazon to recycle public 
buckets?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19057) S3 public test bucket landsat-pds unreadable -needs replacement

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816269#comment-17816269
 ] 

ASF GitHub Bot commented on HADOOP-19057:
-----------------------------------------

virajjasani commented on PR #6515:
URL: https://github.com/apache/hadoop/pull/6515#issuecomment-1936823636

   Shall we not use requester pay public bucket for all landsat usages?







Re: [PR] HADOOP-19057. S3A: Landsat bucket used in tests no longer accessible [hadoop]

2024-02-09 Thread via GitHub


virajjasani commented on PR #6515:
URL: https://github.com/apache/hadoop/pull/6515#issuecomment-1936823636

   Shall we not use requester pay public bucket for all landsat usages?





[jira] [Commented] (HADOOP-19059) S3A: update AWS SDK to 2.23.19 to support S3 Access Grants

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816267#comment-17816267
 ] 

ASF GitHub Bot commented on HADOOP-19059:
-----------------------------------------

adnanhemani commented on PR #6506:
URL: https://github.com/apache/hadoop/pull/6506#issuecomment-1936813337

   @steveloughran please go ahead and close (I would, but GH doesn't allow me 
to)




> S3A: update AWS SDK to 2.23.19 to support S3 Access Grants
> --
>
> Key: HADOOP-19059
> URL: https://issues.apache.org/jira/browse/HADOOP-19059
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.5.0, 3.4.1
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In order to support S3 Access 
> Grants (https://aws.amazon.com/s3/features/access-grants/) in S3A, we need to 
> update the AWS SDK in the hadoop package.
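For context, an SDK bump of this kind is normally a one-line change to the version property in hadoop-project/pom.xml. A sketch, under the assumption that the property is named aws-java-sdk-v2.version (verify against the actual pom before relying on this):

```xml
<!-- hadoop-project/pom.xml (sketch; property name is an assumption) -->
<properties>
  <aws-java-sdk-v2.version>2.23.19</aws-java-sdk-v2.version>
</properties>
```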






Re: [PR] HADOOP-19059. Update AWS SDK to v2.23.7 [hadoop]

2024-02-09 Thread via GitHub


adnanhemani commented on PR #6506:
URL: https://github.com/apache/hadoop/pull/6506#issuecomment-1936813337

   @steveloughran please go ahead and close (I would, but GH doesn't allow me 
to)





[jira] [Commented] (HADOOP-19071) Update maven-surefire-plugin from 3.0.0 to 3.2.5

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816266#comment-17816266
 ] 

ASF GitHub Bot commented on HADOOP-19071:
-----------------------------------------

hadoop-yetus commented on PR #6545:
URL: https://github.com/apache/hadoop/pull/6545#issuecomment-1936809566

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ branch-3.4 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m  6s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  32m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  32m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 17s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 116m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6545/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6545 |
   | Optional Tests | dupname asflicense mvnsite unit codespell detsecrets 
shellcheck shelldocs compile javac javadoc mvninstall shadedclient xmllint |
   | uname | Linux ed40ad4018bd 5.15.0-91-generic #101-Ubuntu SMP Tue Nov 14 
13:30:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.4 / 1f110c1da42f607b2454a13d68074dc88879874b |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6545/1/testReport/ |
   | Max. process+thread count | 551 (vs. ulimit of 5500) |
   | modules | C: hadoop-project U: hadoop-project |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6545/1/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Update 

Re: [PR] HADOOP-19071. Update maven-surefire-plugin from 3.0.0 to 3.2.5. [hadoop]

2024-02-09 Thread via GitHub


hadoop-yetus commented on PR #6545:
URL: https://github.com/apache/hadoop/pull/6545#issuecomment-1936809566


[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816263#comment-17816263
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-----------------------------------------

adnanhemani commented on PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#issuecomment-1936795463

   Hi @ahmarsuhail, I've made all changes requested. Please take a look when 
you can.




> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>
> Add support for S3 Access Grants 
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.






Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]

2024-02-09 Thread via GitHub


adnanhemani commented on PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#issuecomment-1936795463

   Hi @ahmarsuhail, I've made all changes requested. Please take a look when 
you can.





[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816261#comment-17816261
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

hadoop-yetus commented on PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#issuecomment-1936789090

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 57s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 141m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6544 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 21c7753b925c 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3e151bda68b2c6bee7f089f96431b576a6cf9143 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/2/testReport/ |
   | Max. process+thread count | 612 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Add S3 


Re: [PR] HADOOP-19071. Update maven-surefire-plugin from 3.0.0 to 3.2.5. [hadoop]

2024-02-09 Thread via GitHub


hadoop-yetus commented on PR #6545:
URL: https://github.com/apache/hadoop/pull/6545#issuecomment-1936757691

   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6545/1/console in 
case of problems.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19071) Update maven-surefire-plugin from 3.0.0 to 3.2.5

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816256#comment-17816256
 ] 

ASF GitHub Bot commented on HADOOP-19071:
-

slfan1989 opened a new pull request, #6545:
URL: https://github.com/apache/hadoop/pull/6545

   
   
   ### Description of PR
   
   JIRA: HADOOP-19071. Update maven-surefire-plugin from 3.0.0 to 3.2.5.
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
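
   For reference, a plugin bump like this is normally a one-line version change 
in the build. An illustrative fragment follows; Hadoop's parent pom actually 
pins plugin versions through properties, so the inline form below is an 
assumption shown only for clarity.

   ```xml
   <!-- Illustrative only: Hadoop manages plugin versions via properties
        in the parent pom rather than inline version elements. -->
   <plugin>
     <groupId>org.apache.maven.plugins</groupId>
     <artifactId>maven-surefire-plugin</artifactId>
     <version>3.2.5</version>
   </plugin>
   ```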
   
   




> Update maven-surefire-plugin from 3.0.0 to 3.2.5  
> -
>
> Key: HADOOP-19071
> URL: https://issues.apache.org/jira/browse/HADOOP-19071
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, common
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
>










[jira] [Updated] (HADOOP-19071) Update maven-surefire-plugin from 3.0.0 to 3.2.5

2024-02-09 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-19071:

Summary: Update maven-surefire-plugin from 3.0.0 to 3.2.5 (was: Update 
maven-surefire-plugin from 3.0.0 to 3.2.2)

> Update maven-surefire-plugin from 3.0.0 to 3.2.5  
> -
>
> Key: HADOOP-19071
> URL: https://issues.apache.org/jira/browse/HADOOP-19071
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, common
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HADOOP-19071) Update maven-surefire-plugin from 3.0.0 to 3.2.2

2024-02-09 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-19071:

Target Version/s: 3.4.0, 3.5.0  (was: 3.5.0)

> Update maven-surefire-plugin from 3.0.0 to 3.2.2  
> -
>
> Key: HADOOP-19071
> URL: https://issues.apache.org/jira/browse/HADOOP-19071
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, common
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HADOOP-19071) Update maven-surefire-plugin from 3.0.0 to 3.2.2

2024-02-09 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-19071:

Affects Version/s: 3.4.0

> Update maven-surefire-plugin from 3.0.0 to 3.2.2  
> -
>
> Key: HADOOP-19071
> URL: https://issues.apache.org/jira/browse/HADOOP-19071
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, common
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Commented] (HADOOP-19071) Update maven-surefire-plugin from 3.0.0 to 3.2.2

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816251#comment-17816251
 ] 

ASF GitHub Bot commented on HADOOP-19071:
-

slfan1989 commented on PR #6537:
URL: https://github.com/apache/hadoop/pull/6537#issuecomment-1936744038

   > +1 for the change on trunk/3.4; let's see what surprises surface...we only 
need to worry about build time issues, not production ones.
   
   I will submit a pull request to trunk/3.4 and hopefully everything will go 
smoothly.




> Update maven-surefire-plugin from 3.0.0 to 3.2.2  
> -
>
> Key: HADOOP-19071
> URL: https://issues.apache.org/jira/browse/HADOOP-19071
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, common
>Affects Versions: 3.5.0
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
>










Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]

2024-02-09 Thread via GitHub


adnanhemani commented on code in PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#discussion_r1484871939


##
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md:
##
@@ -614,6 +614,38 @@ If the following property is not set or set to `true`, the 
following exception w
 java.io.IOException: From option fs.s3a.aws.credentials.provider 
java.lang.ClassNotFoundException: Class CustomCredentialsProvider not found
 ```
 
+## S3 Authorization Using S3 Access Grants

Review Comment:
   Think I fixed these. Will recheck the updated Yetus run.







Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]

2024-02-09 Thread via GitHub


adnanhemani commented on code in PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#discussion_r1484857819


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+import org.junit.Assert;
+import org.junit.Test;
+
+import software.amazon.awssdk.awscore.AwsClient;
+import software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsIdentityProvider;
+import software.amazon.awssdk.services.s3.S3BaseClientBuilder;
+
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import static org.apache.hadoop.fs.s3a.Constants.AWS_S3_ACCESS_GRANTS_ENABLED;
+
+
+/**
+ * Test S3 Access Grants configurations.
+ */
+public class TestS3AccessGrantConfiguration extends AbstractHadoopTestBase {
+  /**
+   * This credential provider will be attached to any client
+   * that has been configured with the S3 Access Grants plugin.
+   * {@link software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsPlugin}.
+   */
+  public static final String S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS =
+      S3AccessGrantsIdentityProvider.class.getName();
+
+  @Test
+  public void testS3AccessGrantsEnabled() throws IOException, URISyntaxException {
+    // Feature is explicitly enabled
+    AwsClient s3AsyncClient = getAwsClient(createConfig(true), true);
+    Assert.assertEquals(
+        S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+        getCredentialProviderName(s3AsyncClient));
+
+    AwsClient s3Client = getAwsClient(createConfig(true), false);
+    Assert.assertEquals(
+        S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+        getCredentialProviderName(s3Client));
+  }
+
+  @Test
+  public void testS3AccessGrantsDisabled() throws IOException, URISyntaxException {
+    // Disabled by default
+    AwsClient s3AsyncDefaultClient = getAwsClient(new Configuration(), true);
+    Assert.assertNotEquals(
+        S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+        getCredentialProviderName(s3AsyncDefaultClient));
+
+    AwsClient s3DefaultClient = getAwsClient(new Configuration(), true);
+    Assert.assertNotEquals(
+        S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+        getCredentialProviderName(s3DefaultClient));
+
+    // Disabled if explicitly set
+    AwsClient s3AsyncExplicitlyDisabledClient = getAwsClient(createConfig(false), true);
+    Assert.assertNotEquals(
+        S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+        getCredentialProviderName(s3AsyncExplicitlyDisabledClient));
+
+    AwsClient s3ExplicitlyDisabledClient = getAwsClient(createConfig(false), true);
+    Assert.assertNotEquals(
+        S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+        getCredentialProviderName(s3ExplicitlyDisabledClient));
+  }
+
+  private Configuration createConfig(boolean s3agEnabled) {

Review Comment:
   > I think you'll need to do removeBaseAndBucketOverrides here before setting 
the value.
   
   I'm not sure about this, because I'm constructing a new Hadoop Configuration 
object directly rather than using the `createConfiguration` methods from 
S3ATestUtils. In the end, I don't think it matters: as long as we set the S3 
Access Grants properties, that's all that matters for the purposes of this 
test, no?
   
   > and is there a way to check for if the IAM fallback is set on the client?
   
   Unfortunately not :( I did a lot of digging, but in short: the plugins are 
"applied" to the client. When we apply the S3 Access Grants plugin on the S3 
clients, we get the following identity provider set as the credential provider 
for the client: `S3AccessGrantsIdentityProvider`. And in the case of the 
fallback, the fallback flag is only set on the `S3AccessGrantsIdentityProvider` 
class but as a 
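
The check discussed in this thread reduces to comparing the runtime class name 
of whatever credential provider the SDK resolved for the client. A 
stdlib-only sketch of that pattern follows; StandInProvider and StandInClient 
are hypothetical placeholders, since the AWS SDK types are not pulled in here.

```java
// Stdlib-only sketch of the class-name check discussed above.
// StandInProvider and StandInClient are hypothetical placeholders for the
// AWS SDK's S3AccessGrantsIdentityProvider and an S3 client; only the
// assertion pattern itself is illustrated.
public class CredentialProviderCheck {

    /** Placeholder for S3AccessGrantsIdentityProvider. */
    public static class StandInProvider { }

    /** Placeholder for a client that exposes its resolved credential provider. */
    public static class StandInClient {
        private final Object credentialsProvider;

        public StandInClient(Object credentialsProvider) {
            this.credentialsProvider = credentialsProvider;
        }

        public Object credentialsProvider() {
            return credentialsProvider;
        }
    }

    /** True iff the client's provider has the expected runtime class. */
    public static boolean usesProvider(StandInClient client, Class<?> expected) {
        return expected.getName()
            .equals(client.credentialsProvider().getClass().getName());
    }

    public static void main(String[] args) {
        StandInClient withPlugin = new StandInClient(new StandInProvider());
        StandInClient withoutPlugin = new StandInClient(new Object());
        if (!usesProvider(withPlugin, StandInProvider.class)) {
            throw new AssertionError("plugin client should use the provider");
        }
        if (usesProvider(withoutPlugin, StandInProvider.class)) {
            throw new AssertionError("plain client should not use the provider");
        }
    }
}
```

Since the fallback flag lives only as internal state of the provider class, a 
class-name comparison like this appears to be as much as a unit test can 
observe, matching the limitation described in the comment above.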

[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816232#comment-17816232
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

adnanhemani commented on code in PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#discussion_r1484852608


+getCredentialProviderName(s3DefaultClient));
+
+// Disabled if explicitly set
+AwsClient s3AsyncExplicitlyDisabledClient = 
getAwsClient(createConfig(false), true);
+Assert.assertNotEquals(
+S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+getCredentialProviderName(s3AsyncExplicitlyDisabledClient));
+
+AwsClient s3ExplicitlyDisabledClient = 
getAwsClient(createConfig(false), true);
+Assert.assertNotEquals(
+S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+getCredentialProviderName(s3ExplicitlyDisabledClient));
+}
+
+private Configuration createConfig(boolean s3agEnabled) {
+Configuration conf = new Configuration();
+conf.setBoolean(AWS_S3_ACCESS_GRANTS_ENABLED, s3agEnabled);
+return conf;
+}
+
+private String getCredentialProviderName(AwsClient awsClient) {
+return 
awsClient.serviceClientConfiguration().credentialsProvider().getClass().getName();
+}
+
+private <BuilderT extends S3BaseClientBuilder<BuilderT, ClientT>, ClientT extends AwsClient> 
AwsClient

Review Comment:
   Yup, changed.
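The generic signature in the hunk above lost its angle brackets in transit. A self-contained sketch of the shape such a builder-generic factory helper takes (the interface and class names below are illustrative stand-ins, not the exact patch code):

```java
public class GenericBuilderSketch {
    // Minimal stand-ins for the SDK's builder/client pair (hypothetical).
    interface SdkClient { }
    interface ClientBuilder<B extends ClientBuilder<B, C>, C extends SdkClient> {
        C build();
    }

    // Shape of a helper like getAwsClient(...): a method generic over both
    // the builder type and the client type it produces, with the builder's
    // self-referential bound tying the two together.
    static <B extends ClientBuilder<B, C>, C extends SdkClient> C buildClient(B builder) {
        return builder.build();
    }

    static class FakeClient implements SdkClient { }
    static class FakeBuilder implements ClientBuilder<FakeBuilder, FakeClient> {
        public FakeClient build() { return new FakeClient(); }
    }

    public static void main(String[] args) {
        SdkClient c = buildClient(new FakeBuilder());
        System.out.println(c instanceof FakeClient); // true
    }
}
```

The self-referential bound (`B extends ClientBuilder<B, C>`) matches the `S3BaseClientBuilder` pattern imported by the test, letting one helper serve both sync and async client builders.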





> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
> 

[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816231#comment-17816231
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

adnanhemani commented on code in PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#discussion_r1484851522


##
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md:
##
@@ -614,6 +614,38 @@ If the following property is not set or set to `true`, the 
following exception w
 java.io.IOException: From option fs.s3a.aws.credentials.provider 
java.lang.ClassNotFoundException: Class CustomCredentialsProvider not found
 ```
 
+## S3 Authorization Using S3 Access Grants
+
+[S3 Access Grants](https://aws.amazon.com/s3/features/access-grants/) can be 
used to grant accesses to S3 data using IAM Principals.
+In order to enable S3 Access Grants to work with S3A, we enable the 

Review Comment:
   Good call, done!
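For context on the documentation being discussed, a hedged sketch of the Hadoop configuration involved. The property key strings below are illustrative only — the authoritative names are the string values of `AWS_S3_ACCESS_GRANTS_ENABLED` and `AWS_S3_ACCESS_GRANTS_FALLBACK_TO_IAM_ENABLED` in S3A's `Constants` class:

```xml
<!-- Illustrative only: check org.apache.hadoop.fs.s3a.Constants for the real key strings. -->
<property>
  <name>fs.s3a.s3accessgrants.enabled</name>
  <value>true</value>
  <description>Attach the S3 Access Grants plugin to S3A's S3 clients.</description>
</property>
<property>
  <name>fs.s3a.s3accessgrants.fallback.to.iam</name>
  <value>false</value>
  <description>Whether to fall back to the client's IAM credentials when an
    S3 Access Grants lookup does not grant access.</description>
</property>
```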





> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>
> Add support for S3 Access Grants 
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816230#comment-17816230
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

adnanhemani commented on code in PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#discussion_r1484848907


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##
@@ -401,4 +411,19 @@ private static Region getS3RegionFromEndpoint(final String 
endpoint,
 return Region.of(AWS_S3_DEFAULT_REGION);
   }
 
+  private static <BuilderT extends S3BaseClientBuilder<BuilderT, ClientT>, 
ClientT> void
+  applyS3AccessGrantsConfigurations(BuilderT builder, Configuration conf) {
+if (!conf.getBoolean(AWS_S3_ACCESS_GRANTS_ENABLED, false)){
+  LOG_S3AG_ENABLED.debug("S3 Access Grants plugin is not enabled.");
+  return;
+}
+
+LOG_S3AG_ENABLED.info("S3 Access Grants plugin is enabled.");
+boolean isFallbackEnabled = 
conf.getBoolean(AWS_S3_ACCESS_GRANTS_FALLBACK_TO_IAM_ENABLED, false);
+S3AccessGrantsPlugin accessGrantsPlugin =
+
S3AccessGrantsPlugin.builder().enableFallback(isFallbackEnabled).build();
+builder.addPlugin(accessGrantsPlugin);
+LOG_S3AG_ENABLED.info("S3 Access Grants plugin is added to S3 client with 
fallback: {}", isFallbackEnabled);

Review Comment:
   Good catch, changed.



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##
@@ -401,4 +411,19 @@ private static Region getS3RegionFromEndpoint(final String 
endpoint,
 return Region.of(AWS_S3_DEFAULT_REGION);
   }
 
+  private static <BuilderT extends S3BaseClientBuilder<BuilderT, ClientT>, 
ClientT> void
+  applyS3AccessGrantsConfigurations(BuilderT builder, Configuration conf) {
+if (!conf.getBoolean(AWS_S3_ACCESS_GRANTS_ENABLED, false)){
+  LOG_S3AG_ENABLED.debug("S3 Access Grants plugin is not enabled.");
+  return;
+}
+
+LOG_S3AG_ENABLED.info("S3 Access Grants plugin is enabled.");
+boolean isFallbackEnabled = 
conf.getBoolean(AWS_S3_ACCESS_GRANTS_FALLBACK_TO_IAM_ENABLED, false);
+S3AccessGrantsPlugin accessGrantsPlugin =
+
S3AccessGrantsPlugin.builder().enableFallback(isFallbackEnabled).build();
+builder.addPlugin(accessGrantsPlugin);
+LOG_S3AG_ENABLED.info("S3 Access Grants plugin is added to S3 client with 
fallback: {}", isFallbackEnabled);

Review Comment:
   Good catch, changed.
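The hunk above gates plugin attachment on two booleans read from configuration. A minimal, self-contained sketch of that gate logic under stand-in types (the `Config` class and the key strings are hypothetical, not Hadoop's real `Configuration` or property names):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class S3agGateSketch {
    // Stand-in for Hadoop's Configuration (hypothetical, not the real class).
    static class Config {
        private final Map<String, Boolean> props = new HashMap<>();
        void setBoolean(String key, boolean value) { props.put(key, value); }
        boolean getBoolean(String key, boolean dflt) { return props.getOrDefault(key, dflt); }
    }

    static final String ENABLED_KEY = "access.grants.enabled";   // placeholder key
    static final String FALLBACK_KEY = "access.grants.fallback"; // placeholder key

    // Mirrors applyS3AccessGrantsConfigurations: return empty when the feature
    // is off (no plugin added), otherwise the fallback flag the plugin would
    // be built with; both flags default to false.
    static Optional<Boolean> applyAccessGrants(Config conf) {
        if (!conf.getBoolean(ENABLED_KEY, false)) {
            return Optional.empty(); // no plugin added to the builder
        }
        boolean fallback = conf.getBoolean(FALLBACK_KEY, false);
        return Optional.of(fallback);
    }

    public static void main(String[] args) {
        Config off = new Config();
        System.out.println(applyAccessGrants(off).isPresent()); // false: disabled by default
        Config on = new Config();
        on.setBoolean(ENABLED_KEY, true);
        System.out.println(applyAccessGrants(on).orElse(null)); // false: fallback defaults off
    }
}
```

The opt-in default matters for the tests earlier in this thread: a fresh configuration must leave the client without the Access Grants identity provider.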





> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>
> Add support for S3 Access Grants 
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.







[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816206#comment-17816206
 ] 

ASF GitHub Bot commented on HADOOP-19047:
-

steveloughran commented on code in PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#discussion_r1484762085


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/CommitConstants.java:
##
@@ -242,6 +242,13 @@ private CommitConstants() {
*/
   public static final int DEFAULT_COMMITTER_THREADS = 32;
 
+
+  public static final String 
FS_S3A_COMMITTER_MAGIC_TRACK_COMMITS_IN_MEMORY_ENABLED =

Review Comment:
   javadocs here and below, use {@value} for value insertion



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##
@@ -3906,6 +3908,21 @@ public void access(final Path f, final FsAction mode)
   @Retries.RetryTranslated
   public FileStatus getFileStatus(final Path f) throws IOException {
 Path path = qualify(f);
+if (isTrackMagicCommitsInMemoryEnabled(getConf()) && 
isMagicCommitPath(path)) {
+  // Some downstream apps might call getFileStatus for a magic path to get 
the file size.
+  // when commit data is stored in memory construct the dummy 
S3AFileStatus with correct
+  // file size fetched from the memory.
+  if 
(InMemoryMagicCommitTracker.getTaskAttemptIdToBytesWritten().containsKey(path)) 
{
+long len = 
InMemoryMagicCommitTracker.getTaskAttemptIdToBytesWritten().get(path);
+return new S3AFileStatus(len,

Review Comment:
   how about we add a special etag here, like "pending", declared in 
CommitterConstants. That way toString() on the status hints that it is pending 
so there's more diagnostics in list operations etc. And it could be used in 
tests. 
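The sentinel-etag idea suggested above can be sketched as follows. Everything here is hypothetical (the constant name, the stand-in status class, and the `"pending"` value are the reviewer's suggestion, not merged code):

```java
public class PendingEtagSketch {
    // Hypothetical sentinel, per the review suggestion: a recognizable etag
    // for magic-commit paths whose data is still only tracked in memory.
    static final String MAGIC_COMMITTER_PENDING_OBJECT_ETAG = "pending";

    // Stand-in for S3AFileStatus, reduced to the fields relevant here.
    static class FileStatusStandIn {
        final long length;
        final String etag;
        FileStatusStandIn(long length, String etag) { this.length = length; this.etag = etag; }
        @Override public String toString() {
            return "FileStatus{len=" + length + ", etag=" + etag + "}";
        }
    }

    // Build the synthetic status returned for an in-memory pending commit:
    // correct length from the tracker, sentinel etag for diagnostics.
    static FileStatusStandIn statusForPendingCommit(long bytesWritten) {
        return new FileStatusStandIn(bytesWritten, MAGIC_COMMITTER_PENDING_OBJECT_ETAG);
    }

    public static void main(String[] args) {
        // toString() now hints the object is pending, aiding list diagnostics.
        System.out.println(statusForPendingCommit(42));
    }
}
```

Tests could then assert on the sentinel rather than on internal tracker state, which is the benefit the reviewer is pointing at.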



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/InMemoryMagicCommitTracker.java:
##
@@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.commit.magic;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+import software.amazon.awssdk.services.s3.model.CompletedPart;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.WriteOperationHelper;
+import org.apache.hadoop.fs.s3a.commit.files.SinglePendingCommit;
+import org.apache.hadoop.fs.s3a.statistics.PutTrackerStatistics;
+import org.apache.hadoop.fs.statistics.IOStatistics;
+import org.apache.hadoop.fs.statistics.IOStatisticsSnapshot;
+import org.apache.hadoop.util.Preconditions;
+
+import static 
org.apache.hadoop.fs.s3a.commit.magic.MagicCommitTrackerUtils.extractTaskAttemptIdFromPath;
+
+/**
+ * InMemoryMagicCommitTracker stores the commit data in memory.
+ * The commit data and related data stores are flushed out from
+ * the memory when the task is committed or aborted.
+ */
+public class InMemoryMagicCommitTracker extends MagicCommitTracker {
+
+  // stores taskAttemptId to commit data mapping

Review Comment:
   make javadocs



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/InMemoryMagicCommitTracker.java:
##
@@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations 

[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816205#comment-17816205
 ] 

ASF GitHub Bot commented on HADOOP-19047:
-

steveloughran commented on code in PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#discussion_r1484756242


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java:
##
@@ -264,9 +326,14 @@ public void abortTask(TaskAttemptContext context) throws 
IOException {
 try (DurationInfo d = new DurationInfo(LOG,
 "Abort task %s", context.getTaskAttemptID());
 CommitContext commitContext = initiateTaskOperation(context)) {
-  getCommitOperations().abortAllSinglePendingCommits(attemptPath,
-  commitContext,
-  true);
+  if (isTrackMagicCommitsInMemoryEnabled(context.getConfiguration())) {
+List pendingCommits = 
loadPendingCommitsFromMemory(context);
+for (SinglePendingCommit singleCommit : pendingCommits) {
+  commitContext.abortSingleCommit(singleCommit);
+}
+  } else {
+getCommitOperations().abortAllSinglePendingCommits(attemptPath, 
commitContext, true);

Review Comment:
   looked at the hadoop stuff and yes, it does it in the same process too
   ```
   this is called from a task's process to clean 
   up a single task's output that can not yet been committed. This may be
   called multiple times for the same task, but for different task attempts.
   ```
   so nothing to worry about
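The branch being reviewed picks between two abort strategies. A self-contained sketch of that selection under stand-in types (the interface, the string-typed commits, and the `-1` sentinel are illustrative, not the real S3A API):

```java
import java.util.Arrays;
import java.util.List;

public class AbortTaskSketch {
    // Stand-in (hypothetical) for the commit context used during abort.
    interface CommitContext { void abortSingleCommit(String commit); }

    // Mirrors the reviewed abortTask branch: with in-memory tracking, abort
    // each pending commit held in memory; otherwise delegate to the bulk
    // abort, which would scan the task attempt path on S3.
    static int abortTask(boolean inMemoryTracking, List<String> pendingInMemory,
                         CommitContext ctx) {
        if (inMemoryTracking) {
            for (String commit : pendingInMemory) {
                ctx.abortSingleCommit(commit);
            }
            return pendingInMemory.size();
        }
        // bulk path, i.e. abortAllSinglePendingCommits(attemptPath, ctx, true)
        return -1; // sentinel meaning "delegated to the S3-scanning bulk abort"
    }

    public static void main(String[] args) {
        int aborted = abortTask(true, Arrays.asList("c1", "c2"), c -> { });
        System.out.println(aborted); // 2
    }
}
```

The point confirmed above is that abort runs in the same task process that wrote the commits, so the in-memory list is guaranteed to be visible to it.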
   





> Support InMemory Tracking Of S3A Magic Commits
> --
>
> Key: HADOOP-19047
> URL: https://issues.apache.org/jira/browse/HADOOP-19047
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> The following are the operations which happens within a Task when it uses S3A 
> Magic Committer. 
> *During closing of stream*
> 1. A 0-byte file with a same name of the original file is uploaded to S3 
> using PUT operation. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L152]
>  for more information. This is done so that the downstream application like 
> Spark could get the size of the file which is being written.
> 2. MultiPartUpload(MPU) metadata is uploaded to S3. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L176]
>  for more information.
> *During TaskCommit*
> 1. All the MPU metadata which the task wrote to S3 (There will be 'x' number 
> of metadata file in S3 if a single task writes to 'x' files) are read and 
> rewritten to S3 as a single metadata file. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L201]
>  for more information
> Since these operations happens with the Task JVM, We could optimize as well 
> as save cost by storing these information in memory when Task memory usage is 
> not a constraint. Hence the proposal here is to introduce a new MagicCommit 
> Tracker called "InMemoryMagicCommitTracker" which will store the 
> 1. Metadata of MPU in memory till the Task is committed
> 2. Store the size of the file which can be used by the downstream application 
> to get the file size before it is committed/visible to the output path.
> This optimization will save 2 PUT S3 calls, 1 LIST S3 call, and 1 GET S3 call 
> given a Task writes only 1 file.







[jira] [Commented] (HADOOP-14837) Handle S3A "glacier" data

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816203#comment-17816203
 ] 

ASF GitHub Bot commented on HADOOP-14837:
-

steveloughran commented on code in PR #6407:
URL: https://github.com/apache/hadoop/pull/6407#discussion_r1484732749


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ObjectStorageClassFilter.java:
##
@@ -27,19 +27,25 @@
 
 /**
  * 
- * {@link S3ObjectStorageClassFilter} will filter the S3 files based on the 
{@code fs.s3a.glacier.read.restored.objects} configuration set in {@link 
S3AFileSystem}
+ * {@link S3ObjectStorageClassFilter} will filter the S3 files based on the
+ * {@code fs.s3a.glacier.read.restored.objects} configuration set in {@link 
S3AFileSystem}
  * The config can have 3 values:
- * {@code READ_ALL}: Retrieval of Glacier files will fail with 
InvalidObjectStateException: The operation is not valid for the object's 
storage class.
- * {@code SKIP_ALL_GLACIER}: If this value is set then this will ignore any S3 
Objects which are tagged with Glacier storage classes and retrieve the others.
- * {@code READ_RESTORED_GLACIER_OBJECTS}: If this value is set then restored 
status of the Glacier object will be checked, if restored the objects would be 
read like normal S3 objects else they will be ignored as the objects would not 
have been retrieved from the S3 Glacier.
+ * {@code READ_ALL}: Retrieval of Glacier files will fail with 
InvalidObjectStateException:
+ * The operation is not valid for the object's storage class.
+ * {@code SKIP_ALL_GLACIER}: If this value is set then this will ignore any S3 
Objects which are
+ * tagged with Glacier storage classes and retrieve the others.
+ * {@code READ_RESTORED_GLACIER_OBJECTS}: If this value is set then restored 
status of the Glacier
+ * object will be checked, if restored the objects would be read like normal 
S3 objects
+ * else they will be ignored as the objects would not have been retrieved from 
the S3 Glacier.
  * 
  */
 public enum S3ObjectStorageClassFilter {

Review Comment:
   I like this design you know: enum based mapping to closures.
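
   The enum-to-closure pattern praised above can be sketched roughly as follows. This is a minimal, self-contained illustration, not the actual S3A code: the `StorageClassFilter` enum, its `accept` method, and the storage-class strings are hypothetical names chosen for the sketch.

```java
import java.util.function.Predicate;

/** Minimal sketch of the enum-to-closure pattern (illustrative names only). */
enum StorageClassFilter {
    // Accept every object, including Glacier-class ones.
    READ_ALL(storageClass -> true),
    // Skip objects stored in Glacier or Deep Archive.
    SKIP_ALL_GLACIER(storageClass ->
        !storageClass.equals("GLACIER") && !storageClass.equals("DEEP_ARCHIVE"));

    private final Predicate<String> filter;

    StorageClassFilter(Predicate<String> filter) {
        this.filter = filter;
    }

    /** Apply the closure bound to this enum constant. */
    boolean accept(String storageClass) {
        return filter.test(storageClass);
    }
}

public class FilterDemo {
    public static void main(String[] args) {
        System.out.println(StorageClassFilter.SKIP_ALL_GLACIER.accept("STANDARD")); // prints true
        System.out.println(StorageClassFilter.SKIP_ALL_GLACIER.accept("GLACIER"));  // prints false
    }
}
```

   Binding a closure to each constant keeps the policy choice and its behavior in one place, so callers only select a constant and invoke `accept`.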



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/list/ITestS3AReadRestoredGlacierObjects.java:
##
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.list;
+
+import static org.apache.hadoop.fs.s3a.Constants.READ_RESTORED_GLACIER_OBJECTS;
+import static org.apache.hadoop.fs.s3a.Constants.STORAGE_CLASS;
+import static org.apache.hadoop.fs.s3a.Constants.STORAGE_CLASS_DEEP_ARCHIVE;
+import static org.apache.hadoop.fs.s3a.Constants.STORAGE_CLASS_GLACIER;
+import static org.apache.hadoop.fs.s3a.S3ATestUtils.disableFilesystemCaching;
+import static 
org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides;
+import static 
org.apache.hadoop.fs.s3a.S3ATestUtils.skipIfStorageClassTestsDisabled;
+import static 
org.apache.hadoop.fs.statistics.StoreStatisticNames.OBJECT_LIST_REQUEST;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Collection;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.contract.s3a.S3AContract;
+import org.apache.hadoop.fs.s3a.AbstractS3ATestBase;
+import org.apache.hadoop.fs.s3a.S3ListRequest;
+import org.apache.hadoop.fs.s3a.S3ObjectStorageClassFilter;
+import org.apache.hadoop.fs.store.audit.AuditSpan;
+import org.apache.hadoop.test.LambdaTestUtils;
+import org.assertj.core.api.Assertions;
+import org.junit.Assume;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import software.amazon.awssdk.services.s3.S3Client;
+import software.amazon.awssdk.services.s3.model.RestoreObjectRequest;
+import software.amazon.awssdk.services.s3.model.S3Object;
+import software.amazon.awssdk.services.s3.model.Tier;
+
+/**
+ * Tests of various cases related to Glacier/Deep Archive Storage class.
+ */
+@RunWith(Parameterized.class)
+public class 

[jira] [Commented] (HADOOP-18938) S3A region logic to handle vpce and non standard endpoints

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816200#comment-17816200
 ] 

ASF GitHub Bot commented on HADOOP-18938:
-

hadoop-yetus commented on PR #6466:
URL: https://github.com/apache/hadoop/pull/6466#issuecomment-1936513805

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6466/5/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 14 new + 2 unchanged - 0 fixed 
= 16 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  32m 18s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  0s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 122m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6466/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6466 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2dd56b6a7570 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 97360ba71f24df4cfc2d44f2f05c1bee0129a968 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6466/5/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6466/5/console |
   | versions | git=2.25.1 maven=3.6.3 


Re: [PR] MAPREDUCE-7448. Inconsistent Behavior for FileOutputCommitter V1 to commit successfully many times [hadoop]

2024-02-09 Thread via GitHub


steveloughran commented on code in PR #6038:
URL: https://github.com/apache/hadoop/pull/6038#discussion_r1484731817


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java:
##
@@ -158,6 +158,11 @@ public FileOutputCommitter(Path outputPath,
 "output directory:" + skipCleanup + ", ignore cleanup failures: " +
 ignoreCleanupFailures);
 
+if (algorithmVersion == 1 && skipCleanup) {
+LOG.warn("Skipping cleanup when using FileOutputCommitter V1 can lead " +
+"to unexpected behavior; for example, committing several times may " +
+"be falsely allowed.");

Review Comment:
   ok, let's fail fast 
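
   Failing fast here would mean rejecting the invalid configuration at construction time instead of logging a warning. A hedged sketch of that idea follows; the `CommitterConfigCheck` class and `validate` method are illustrative names, not the actual FileOutputCommitter code.

```java
/** Illustrative sketch of failing fast on an invalid committer configuration. */
public class CommitterConfigCheck {

    /** Reject the v1 + skip-cleanup combination instead of merely warning. */
    static void validate(int algorithmVersion, boolean skipCleanup) {
        if (algorithmVersion == 1 && skipCleanup) {
            throw new IllegalArgumentException(
                "FileOutputCommitter algorithm v1 must not skip cleanup: "
                    + "committing several times may be falsely allowed");
        }
    }

    public static void main(String[] args) {
        validate(2, true);  // v2 with skipCleanup is accepted
        try {
            validate(1, true);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

   Throwing at construction surfaces the misconfiguration immediately, rather than allowing a job to run and discover the inconsistency at commit time.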



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19073) WASB: Fix connection leak in FolderRenamePending

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816196#comment-17816196
 ] 

ASF GitHub Bot commented on HADOOP-19073:
-

steveloughran commented on PR #6534:
URL: https://github.com/apache/hadoop/pull/6534#issuecomment-1936498628

   @xuzifu666 yes.
   1. Which Azure region did you run all the WASB tests against?
   2. What maven command-line arguments did you use?
   3. Did any tests fail?




> WASB: Fix connection leak in FolderRenamePending
> 
>
> Key: HADOOP-19073
> URL: https://issues.apache.org/jira/browse/HADOOP-19073
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
>
> Fix connection leak in FolderRenamePending when getting bytes



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[jira] [Commented] (HADOOP-18679) Add API for bulk/paged object deletion

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816171#comment-17816171
 ] 

ASF GitHub Bot commented on HADOOP-18679:
-

hadoop-yetus commented on PR #6494:
URL: https://github.com/apache/hadoop/pull/6494#issuecomment-1936382465

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  8s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  16m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 42s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   2m 33s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6494/3/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html)
 |  hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  38m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  17m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  16m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 32s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6494/3/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 1 new + 39 unchanged - 0 fixed = 40 total (was 
39)  |
   | +1 :green_heart: |  mvnsite  |   2m 30s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m  7s | 
[/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6494/3/artifact/out/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  
hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
 with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0)  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m  7s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m  9s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 259m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6494/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6494 |
   | Optional Tests | dupname 

Re: [PR] HDFS-17376. Distcp creates Factor 1 replication file on target if Source is EC. [hadoop]

2024-02-09 Thread via GitHub


sodonnel merged PR #6540:
URL: https://github.com/apache/hadoop/pull/6540


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18938) S3A region logic to handle vpce and non standard endpoints

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816152#comment-17816152
 ] 

ASF GitHub Bot commented on HADOOP-18938:
-

hadoop-yetus commented on PR #6466:
URL: https://github.com/apache/hadoop/pull/6466#issuecomment-1936281849

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6466/4/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 14 new + 2 unchanged - 0 fixed 
= 16 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  32m 16s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 57s |  |  hadoop-aws in the patch passed. 
 |
   | -1 :x: |  asflicense  |   0m 35s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6466/4/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 122m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6466/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6466 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux e5feb12daa84 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5580fa2f0d366ae6f9ba0379666f240be28dcb1e |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6466/4/testReport/ |
   | Max. process+thread count | 635 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 

Re: [PR] HADOOP-18938. AWS SDK v2: Fix endpoint region parsing for vpc endpoints. [hadoop]

2024-02-09 Thread via GitHub


hadoop-yetus commented on PR #6466:
URL: https://github.com/apache/hadoop/pull/6466#issuecomment-1936281849

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6466/4/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 14 new + 2 unchanged - 0 fixed 
= 16 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  32m 16s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 57s |  |  hadoop-aws in the patch passed. 
 |
   | -1 :x: |  asflicense  |   0m 35s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6466/4/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 122m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6466/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6466 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux e5feb12daa84 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5580fa2f0d366ae6f9ba0379666f240be28dcb1e |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6466/4/testReport/ |
   | Max. process+thread count | 635 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6466/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
 

Re: [PR] HDFS-17376. Distcp creates Factor 1 replication file on target if Source is EC. [hadoop]

2024-02-09 Thread via GitHub


hadoop-yetus commented on PR #6540:
URL: https://github.com/apache/hadoop/pull/6540#issuecomment-1936233134

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m  9s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 142m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6540/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6540 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2d8d5dde3f11 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e343c4c3a6e7585f574caa1f63a88e5d8275fa35 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6540/2/testReport/ |
   | Max. process+thread count | 699 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6540/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-18938) S3A region logic to handle vpce and non standard endpoints

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816110#comment-17816110
 ] 

ASF GitHub Bot commented on HADOOP-18938:
-

shintaroonuma commented on PR #6466:
URL: https://github.com/apache/hadoop/pull/6466#issuecomment-1936098571

   Thanks for the comments; rebased and updated the PR.




> S3A region logic to handle vpce and non standard endpoints 
> ---
>
> Key: HADOOP-18938
> URL: https://issues.apache.org/jira/browse/HADOOP-18938
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
>
> For non-standard endpoints such as VPCE, the region parsing added in 
> HADOOP-18908 doesn't work. This is expected, as that logic is only meant to 
> be used for standard endpoints. 
> If you are using a non-standard endpoint, check if a region is also provided; 
> otherwise fail fast. 
> Also update the documentation to explain the region and endpoint behaviour 
> with SDK V2. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)




Re: [PR] HADOOP-18938. AWS SDK v2: Fix endpoint region parsing for vpc endpoints. [hadoop]

2024-02-09 Thread via GitHub


shintaroonuma commented on PR #6466:
URL: https://github.com/apache/hadoop/pull/6466#issuecomment-1936098571

   Thanks for the comments; rebased and updated the PR.





[jira] [Commented] (HADOOP-18938) S3A region logic to handle vpce and non standard endpoints

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816108#comment-17816108
 ] 

ASF GitHub Bot commented on HADOOP-18938:
-

shintaroonuma commented on code in PR #6466:
URL: https://github.com/apache/hadoop/pull/6466#discussion_r148232


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##
@@ -361,7 +366,15 @@ private static URI getS3Endpoint(String endpoint, final 
Configuration conf) {
*/
   private static Region getS3RegionFromEndpoint(String endpoint) {

Review Comment:
   Added some unit tests on endpoint parsing.











[jira] [Commented] (HADOOP-18938) S3A region logic to handle vpce and non standard endpoints

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816109#comment-17816109
 ] 

ASF GitHub Bot commented on HADOOP-18938:
-

shintaroonuma commented on code in PR #6466:
URL: https://github.com/apache/hadoop/pull/6466#discussion_r148513


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##
@@ -82,6 +84,9 @@ public class DefaultS3ClientFactory extends Configured
 
   private static final String S3_SERVICE_NAME = "s3";
 
+  private static final Pattern VPC_ENDPOINT_PATTERN =
+      Pattern.compile("^(?:.+\\.)?([a-z0-9-]+)\\.vpce\\.amazonaws\\.(?:com|com\\.cn)$");

Review Comment:
   GovCloud is amazonaws.com as well.
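
   For context, here is a minimal, self-contained sketch of how a pattern like
   the one in the diff extracts the region component from a VPC endpoint
   hostname. The regex is copied from the diff; the class name
   `VpceRegionParser` and the helper `parseRegionFromVpceEndpoint` are
   hypothetical, not the actual names used in `DefaultS3ClientFactory`.

   ```java
   import java.util.Optional;
   import java.util.regex.Matcher;
   import java.util.regex.Pattern;

   public class VpceRegionParser {
       // Pattern from the PR diff: captures the region segment preceding
       // ".vpce.amazonaws.com" (or ".com.cn" for China partitions).
       private static final Pattern VPC_ENDPOINT_PATTERN =
           Pattern.compile("^(?:.+\\.)?([a-z0-9-]+)\\.vpce\\.amazonaws\\.(?:com|com\\.cn)$");

       // Hypothetical helper: returns the captured region, if the endpoint
       // matches the VPCE hostname shape.
       static Optional<String> parseRegionFromVpceEndpoint(String endpoint) {
           Matcher m = VPC_ENDPOINT_PATTERN.matcher(endpoint);
           return m.matches() ? Optional.of(m.group(1)) : Optional.empty();
       }

       public static void main(String[] args) {
           // VPCE hostname: the greedy prefix group swallows the
           // "vpce-id.s3." part, leaving the region for capture group 1.
           System.out.println(parseRegionFromVpceEndpoint(
               "vpce-0123abc.s3.us-west-2.vpce.amazonaws.com").orElse("none"));
           // Standard endpoint: no ".vpce." segment, so no match.
           System.out.println(parseRegionFromVpceEndpoint(
               "s3.eu-west-1.amazonaws.com").orElse("none"));
       }
   }
   ```

   As the review thread notes, GovCloud endpoints also end in amazonaws.com,
   so this shape covers them without a separate branch.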











Re: [PR] HADOOP-18938. AWS SDK v2: Fix endpoint region parsing for vpc endpoints. [hadoop]

2024-02-09 Thread via GitHub


shintaroonuma commented on code in PR #6466:
URL: https://github.com/apache/hadoop/pull/6466#discussion_r148513


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##
@@ -82,6 +84,9 @@ public class DefaultS3ClientFactory extends Configured
 
   private static final String S3_SERVICE_NAME = "s3";
 
+  private static final Pattern VPC_ENDPOINT_PATTERN =
+      Pattern.compile("^(?:.+\\.)?([a-z0-9-]+)\\.vpce\\.amazonaws\\.(?:com|com\\.cn)$");

Review Comment:
   GovCloud is amazonaws.com as well.






Re: [PR] HADOOP-18938. AWS SDK v2: Fix endpoint region parsing for vpc endpoints. [hadoop]

2024-02-09 Thread via GitHub


shintaroonuma commented on code in PR #6466:
URL: https://github.com/apache/hadoop/pull/6466#discussion_r148232


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##
@@ -361,7 +366,15 @@ private static URI getS3Endpoint(String endpoint, final 
Configuration conf) {
*/
   private static Region getS3RegionFromEndpoint(String endpoint) {

Review Comment:
   Added some unit tests on endpoint parsing.






Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]

2024-02-09 Thread via GitHub


ahmarsuhail commented on PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#issuecomment-1936082879

   @adnanhemani re test failures: just updating the core-site won't be enough 
for some of them; you'll also need the code changes in Steve's PR 
https://github.com/apache/hadoop/pull/6515. That should get merged soon, so 
you can then rebase and retest.





[jira] [Commented] (HADOOP-19057) S3 public test bucket landsat-pds unreadable -needs replacement

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816105#comment-17816105
 ] 

ASF GitHub Bot commented on HADOOP-19057:
-

ahmarsuhail commented on code in PR #6515:
URL: https://github.com/apache/hadoop/pull/6515#discussion_r1484407197


##
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/connecting.md:
##
@@ -289,9 +289,8 @@ for buckets in the central and EU/Ireland endpoints.
 
 ```xml
 <property>
-  <name>fs.s3a.bucket.landsat-pds.endpoint.region</name>
+  <name>fs.s3a.bucket.us2w-dataset.endpoint.region</name>

Review Comment:
   nit: typo, usw2-dataset (or let's just be clearer with `us-west-2-dataset`)





> S3 public test bucket landsat-pds unreadable -needs replacement
> ---
>
> Key: HADOOP-19057
> URL: https://issues.apache.org/jira/browse/HADOOP-19057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0, 3.2.4, 3.3.9, 3.3.6, 3.5.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>  Labels: pull-request-available
>
> The s3 test bucket used in hadoop-aws tests of S3 select and large file reads 
> is no longer publicly accessible
> {code}
> java.nio.file.AccessDeniedException: landsat-pds: getBucketMetadata() on 
> landsat-pds: software.amazon.awssdk.services.s3.model.S3Exception: null 
> (Service: S3, Status Code: 403, Request ID: 06QNYQ9GND5STQ2S, Extended 
> Request ID: 
> O+u2Y1MrCQuuSYGKRAWHj/5LcDLuaFS8owNuXXWSJ0zFXYfuCaTVLEP351S/umti558eKlUqV6U=):null
> {code}
> * Because HADOOP-18830 has cut s3 select, all we need in 3.4.1+ is a large 
> file for some reading tests
> * changing the default value disables s3 select tests on older releases
> * if fs.s3a.scale.test.csvfile is set to " " then other tests which need it 
> will be skipped
> Proposed
> * we locate a new large file under the (requester pays) s3a://usgs-landsat/ 
> bucket. All releases with HADOOP-18168 can use this
> * update 3.4.1 source to use this; document it
> * do something similar for 3.3.9 + maybe even cut s3 select there too.
> * document how to use it on older releases with requester-pays support
> * document how to completely disable it on older releases.
> h2. How to fix (most) landsat test failures on older releases
> add this to your auth-keys.xml file. Expect some failures in a few tests 
> with-hardcoded references to the bucket (assumed role delegation tokens)
> {code}
>   <property>
>     <name>fs.s3a.scale.test.csvfile</name>
>     <value>s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz</value>
>     <description>file used in scale tests</description>
>   </property>
>   <property>
>     <name>fs.s3a.bucket.noaa-cors-pds.endpoint.region</name>
>     <value>us-east-1</value>
>   </property>
>   <property>
>     <name>fs.s3a.bucket.noaa-isd-pds.multipart.purge</name>
>     <value>false</value>
>     <description>Don't try to purge uploads in the read-only bucket, as
>     it will only create log noise.</description>
>   </property>
>   <property>
>     <name>fs.s3a.bucket.noaa-isd-pds.probe</name>
>     <value>0</value>
>     <description>Let's postpone existence checks to the first IO operation
>     </description>
>   </property>
>   <property>
>     <name>fs.s3a.bucket.noaa-isd-pds.audit.add.referrer.header</name>
>     <value>false</value>
>     <description>Do not add the referrer header</description>
>   </property>
>   <property>
>     <name>fs.s3a.bucket.noaa-isd-pds.prefetch.block.size</name>
>     <value>128k</value>
>     <description>Use a small prefetch size so tests fetch multiple
>     blocks</description>
>   </property>
> {code}






Re: [PR] HADOOP-19057. S3A: Landsat bucket used in tests no longer accessible [hadoop]

2024-02-09 Thread via GitHub


ahmarsuhail commented on code in PR #6515:
URL: https://github.com/apache/hadoop/pull/6515#discussion_r1484407197


##
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/connecting.md:
##
@@ -289,9 +289,8 @@ for buckets in the central and EU/Ireland endpoints.
 
 ```xml
 <property>
-  <name>fs.s3a.bucket.landsat-pds.endpoint.region</name>
+  <name>fs.s3a.bucket.us2w-dataset.endpoint.region</name>

Review Comment:
   nit: typo, usw2-dataset (or let's just be clearer with `us-west-2-dataset`)






[jira] [Commented] (HADOOP-19073) WASB: Fix connection leak in FolderRenamePending

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816093#comment-17816093
 ] 

ASF GitHub Bot commented on HADOOP-19073:
-

xuzifu666 commented on PR #6534:
URL: https://github.com/apache/hadoop/pull/6534#issuecomment-1936056092

   > moved to hadoop; azure component. 
   > 
   > now, the bad news: as it goes near a cloud store, you *have* to run the 
entire hadoop-azure tests for the wasb component, and tell us which region you 
tested against.
   > 
   > https://hadoop.apache.org/docs/stable/hadoop-azure/testing_azure.html
   > 
   > sorry, but yetus doesn't have any credentials. It doesn't take long and is 
designed to clean up afterwards
   
   OK, is there anything else for me to do on the pull request? @steveloughran 




> WASB: Fix connection leak in FolderRenamePending
> 
>
> Key: HADOOP-19073
> URL: https://issues.apache.org/jira/browse/HADOOP-19073
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
>
> Fix the connection leak in FolderRenamePending when getting bytes.
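>
> The leak described here is the classic pattern of an input stream (and its
> underlying connection) not being closed on every code path while reading
> bytes. The sketch below is illustrative only: the real fix lives in
> hadoop-azure's FolderRenamePending, and the class `StreamReadSketch` and
> helper `readAllBytes` are hypothetical names showing the general
> try-with-resources remedy.

{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamReadSketch {
    // Reads a stream fully; try-with-resources guarantees the stream is
    // closed on all paths, including when read() throws mid-way, so the
    // underlying connection is always released.
    static byte[] readAllBytes(InputStream in) throws IOException {
        try (InputStream stream = in) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = stream.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = readAllBytes(
            new java.io.ByteArrayInputStream("hello".getBytes()));
        System.out.println(data.length); // 5
    }
}
{code}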






Re: [PR] HADOOP-19073 WASB: Fix connection leak in FolderRenamePending [hadoop]

2024-02-09 Thread via GitHub


xuzifu666 commented on PR #6534:
URL: https://github.com/apache/hadoop/pull/6534#issuecomment-1936056092

   > moved to hadoop; azure component. 
   > 
   > now, the bad news: as it goes near a cloud store, you *have* to run the 
entire hadoop-azure tests for the wasb component, and tell us which region you 
tested against.
   > 
   > https://hadoop.apache.org/docs/stable/hadoop-azure/testing_azure.html
   > 
   > sorry, but yetus doesn't have any credentials. It doesn't take long and is 
designed to clean up afterwards
   
   OK,anything else me to do for the pull request?@steveloughran 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19073) WASB: Fix connection leak in FolderRenamePending

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816088#comment-17816088
 ] 

ASF GitHub Bot commented on HADOOP-19073:
-

steveloughran commented on PR #6534:
URL: https://github.com/apache/hadoop/pull/6534#issuecomment-1936045642

   moved to hadoop; azure component. 
   
   now, the bad news: as it goes near a cloud store, you *have* to run the 
entire hadoop-azure tests for the wasb component, and tell us which region you 
tested against.
   
   https://hadoop.apache.org/docs/stable/hadoop-azure/testing_azure.html
   
   sorry, but yetus doesn't have any credentials. It doesn't take long and is 
designed to clean up afterwards




> WASB: Fix connection leak in FolderRenamePending
> 
>
> Key: HADOOP-19073
> URL: https://issues.apache.org/jira/browse/HADOOP-19073
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
>
> Fix connection leak in FolderRenamePending in getting bytes  





[jira] [Updated] (HADOOP-19073) WASB: Fix connection leak in FolderRenamePending

2024-02-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19073:

Summary: WASB: Fix connection leak in FolderRenamePending  (was: Fix 
connection leak in FolderRenamePending)

> WASB: Fix connection leak in FolderRenamePending
> 
>
> Key: HADOOP-19073
> URL: https://issues.apache.org/jira/browse/HADOOP-19073
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
>
> Fix connection leak in FolderRenamePending in getting bytes  






[jira] [Assigned] (HADOOP-19073) Fix connection leak in FolderRenamePending

2024-02-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-19073:
---

  Key: HADOOP-19073  (was: HDFS-17373)
Affects Version/s: 3.3.6
   (was: 3.3.6)
 Assignee: (was: xy)
   Issue Type: Bug  (was: Improvement)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Fix connection leak in FolderRenamePending
> --
>
> Key: HADOOP-19073
> URL: https://issues.apache.org/jira/browse/HADOOP-19073
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.6
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
>
> Fix connection leak in FolderRenamePending in getting bytes  






[jira] [Commented] (HADOOP-18980) S3A credential provider remapping: make extensible

2024-02-09 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816085#comment-17816085
 ] 

Steve Loughran commented on HADOOP-18980:
-

As commented in the backport PR, there are some tests of the k=v splitting we 
still need, and maybe some new policy decisions:


# duplicate entries key=val1, key=val2: should the parser fail or just return 
the latest (as is done today)?
# empty entries: ,,
# =val
# key=
The last two should fail; I see no problem with the others passing.
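For illustration, that splitting policy could be sketched like this (a hypothetical parser, not Hadoop's actual implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Hypothetical sketch of the k=v splitting policy discussed above:
 * duplicate keys keep the latest value, empty entries are skipped,
 * and "=val" / "key=" entries are rejected. Not the actual Hadoop code.
 */
public final class KeyValueListParser {

  private KeyValueListParser() {
  }

  public static Map<String, String> parse(String list) {
    Map<String, String> map = new LinkedHashMap<>();
    for (String entry : list.split(",")) {
      String trimmed = entry.trim();
      if (trimmed.isEmpty()) {
        continue;  // tolerate ",," and trailing commas
      }
      int eq = trimmed.indexOf('=');
      if (eq <= 0 || eq == trimmed.length() - 1) {
        // "=val", "key=" and entries with no '=' at all are malformed
        throw new IllegalArgumentException("Malformed entry: " + trimmed);
      }
      // a later duplicate overwrites the earlier one ("return the latest")
      map.put(trimmed.substring(0, eq).trim(),
          trimmed.substring(eq + 1).trim());
    }
    return map;
  }

  public static void main(String[] args) {
    System.out.println(parse("a=1, ,b=2,a=3")); // prints {a=3, b=2}
  }
}
```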

> S3A credential provider remapping: make extensible
> --
>
> Key: HADOOP-18980
> URL: https://issues.apache.org/jira/browse/HADOOP-18980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.5.0, 3.4.1
>
>
> s3afs will now remap the common com.amazonaws credential providers to 
> equivalents in the v2 sdk or in hadoop-aws
> We could do the same for third party credential providers by taking a 
> key=value list in a configuration property and adding to the map. 






[jira] [Updated] (HADOOP-19073) Fix connection leak in FolderRenamePending

2024-02-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19073:

Component/s: fs/azure

> Fix connection leak in FolderRenamePending
> --
>
> Key: HADOOP-19073
> URL: https://issues.apache.org/jira/browse/HADOOP-19073
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
>
> Fix connection leak in FolderRenamePending in getting bytes  






[jira] [Updated] (HADOOP-18980) S3A credential provider remapping: make extensible

2024-02-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18980:

Fix Version/s: 3.4.1

> S3A credential provider remapping: make extensible
> --
>
> Key: HADOOP-18980
> URL: https://issues.apache.org/jira/browse/HADOOP-18980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.5.0, 3.4.1
>
>
> s3afs will now remap the common com.amazonaws credential providers to 
> equivalents in the v2 sdk or in hadoop-aws
> We could do the same for third party credential providers by taking a 
> key=value list in a configuration property and adding to the map. 






Re: [PR] HADOOP-18980. S3A credential provider remapping: make extensible (#6406) [hadoop]

2024-02-09 Thread via GitHub


steveloughran merged PR #6525:
URL: https://github.com/apache/hadoop/pull/6525





Re: [PR] HADOOP-18843. Guava version 32.0.1 bump to fix CVE-2023-2976 [hadoop-thirdparty]

2024-02-09 Thread via GitHub


steveloughran commented on PR #23:
URL: https://github.com/apache/hadoop-thirdparty/pull/23#issuecomment-1936013649

   new release is out; 3.4.0 RC2 will ship it!





[jira] [Commented] (HADOOP-19059) S3A: update AWS SDK to 2.23.19 to support S3 Access Grants

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816079#comment-17816079
 ] 

ASF GitHub Bot commented on HADOOP-19059:
-

steveloughran commented on PR #6506:
URL: https://github.com/apache/hadoop/pull/6506#issuecomment-1936011003

   this has been obsoleted by #6538 which puts it up to 2.23.19, though it 
looks like there are more to come. Can I close it?




> S3A: update AWS SDK to 2.23.19 to support S3 Access Grants
> --
>
> Key: HADOOP-19059
> URL: https://issues.apache.org/jira/browse/HADOOP-19059
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.5.0, 3.4.1
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In order to support S3 Access 
> Grants(https://aws.amazon.com/s3/features/access-grants/) in S3A, we need to 
> update AWS SDK in hadooop package.






[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816078#comment-17816078
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

steveloughran commented on PR #6507:
URL: https://github.com/apache/hadoop/pull/6507#issuecomment-1936006860

   @adnanhemani thanks; without that change we'd have problems with the PR, as 
in "you get to support it all through reflection" the way we have to do with 
wildfly/openssl binding (NetworkBinding) and more. 




> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>
> Add support for S3 Access Grants 
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.






[jira] [Commented] (HADOOP-19057) S3 public test bucket landsat-pds unreadable -needs replacement

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816054#comment-17816054
 ] 

ASF GitHub Bot commented on HADOOP-19057:
-

steveloughran commented on PR #6515:
URL: https://github.com/apache/hadoop/pull/6515#issuecomment-1935903530

   need urgent reviews/tests of this from anyone who can, just to fix the 
widespread test failures
   
   @ahmarsuhail @mukund-thakur @HarshitGupta11 @virajjasani @sunchao 




> S3 public test bucket landsat-pds unreadable -needs replacement
> ---
>
> Key: HADOOP-19057
> URL: https://issues.apache.org/jira/browse/HADOOP-19057
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0, 3.2.4, 3.3.9, 3.3.6, 3.5.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>  Labels: pull-request-available
>
> The s3 test bucket used in hadoop-aws tests of S3 select and large file reads 
> is no longer publicly accessible
> {code}
> java.nio.file.AccessDeniedException: landsat-pds: getBucketMetadata() on 
> landsat-pds: software.amazon.awssdk.services.s3.model.S3Exception: null 
> (Service: S3, Status Code: 403, Request ID: 06QNYQ9GND5STQ2S, Extended 
> Request ID: 
> O+u2Y1MrCQuuSYGKRAWHj/5LcDLuaFS8owNuXXWSJ0zFXYfuCaTVLEP351S/umti558eKlUqV6U=):null
> {code}
> * Because HADOOP-18830 has cut s3 select, all we need in 3.4.1+ is a large 
> file for some reading tests
> * changing the default value disables s3 select tests on older releases
> * if fs.s3a.scale.test.csvfile is set to " " then other tests which need it 
> will be skipped
> Proposed
> * we locate a new large file under the (requester pays) s3a://usgs-landsat/ 
> bucket . All releases with HADOOP-18168 can use this
> * update 3.4.1 source to use this; document it
> * do something similar for 3.3.9 + maybe even cut s3 select there too.
> * document how to use it on older releases with requester-pays support
> * document how to completely disable it on older releases.
> h2. How to fix (most) landsat test failures on older releases
> add this to your auth-keys.xml file. Expect some failures in a few tests 
> with-hardcoded references to the bucket (assumed role delegation tokens)
> {code}
>   <property>
>     <name>fs.s3a.scale.test.csvfile</name>
>     <value>s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz</value>
>     <description>file used in scale tests</description>
>   </property>
>   <property>
>     <name>fs.s3a.bucket.noaa-cors-pds.endpoint.region</name>
>     <value>us-east-1</value>
>   </property>
>   <property>
>     <name>fs.s3a.bucket.noaa-isd-pds.multipart.purge</name>
>     <value>false</value>
>     <description>Don't try to purge uploads in the read-only bucket, as
>     it will only create log noise.</description>
>   </property>
>   <property>
>     <name>fs.s3a.bucket.noaa-isd-pds.probe</name>
>     <value>0</value>
>     <description>Let's postpone existence checks to the first IO 
> operation</description>
>   </property>
>   <property>
>     <name>fs.s3a.bucket.noaa-isd-pds.audit.add.referrer.header</name>
>     <value>false</value>
>     <description>Do not add the referrer header</description>
>   </property>
>   <property>
>     <name>fs.s3a.bucket.noaa-isd-pds.prefetch.block.size</name>
>     <value>128k</value>
>     <description>Use a small prefetch size so tests fetch multiple 
>     blocks</description>
>   </property>
> {code}






[jira] [Commented] (HADOOP-19069) Use hadoop-thirdparty 1.2.0

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816053#comment-17816053
 ] 

ASF GitHub Bot commented on HADOOP-19069:
-

steveloughran merged PR #6541:
URL: https://github.com/apache/hadoop/pull/6541




> Use hadoop-thirdparty 1.2.0
> ---
>
> Key: HADOOP-19069
> URL: https://issues.apache.org/jira/browse/HADOOP-19069
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: hadoop-thirdparty
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.5.0
>
>







[jira] [Updated] (HADOOP-19069) Use hadoop-thirdparty 1.2.0

2024-02-09 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19069:

Fix Version/s: 3.3.9

> Use hadoop-thirdparty 1.2.0
> ---
>
> Key: HADOOP-19069
> URL: https://issues.apache.org/jira/browse/HADOOP-19069
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: hadoop-thirdparty
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9, 3.5.0
>
>






[jira] [Commented] (HADOOP-19069) Use hadoop-thirdparty 1.2.0

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816052#comment-17816052
 ] 

ASF GitHub Bot commented on HADOOP-19069:
-

steveloughran commented on PR #6541:
URL: https://github.com/apache/hadoop/pull/6541#issuecomment-1935898164

   reviewing tests, only hadoop-common tests were executed, no failures.
   
   going to +1 this.




> Use hadoop-thirdparty 1.2.0
> ---
>
> Key: HADOOP-19069
> URL: https://issues.apache.org/jira/browse/HADOOP-19069
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: hadoop-thirdparty
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.5.0
>
>






[jira] [Commented] (HADOOP-19069) Use hadoop-thirdparty 1.2.0

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816051#comment-17816051
 ] 

ASF GitHub Bot commented on HADOOP-19069:
-

steveloughran commented on PR #6541:
URL: https://github.com/apache/hadoop/pull/6541#issuecomment-1935893458

   Merging this; all the warnings are just deprecation warnings in generated 
code, so pretty meaningless




> Use hadoop-thirdparty 1.2.0
> ---
>
> Key: HADOOP-19069
> URL: https://issues.apache.org/jira/browse/HADOOP-19069
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: hadoop-thirdparty
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.5.0
>
>






[jira] [Commented] (HADOOP-19069) Use hadoop-thirdparty 1.2.0

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816050#comment-17816050
 ] 

ASF GitHub Bot commented on HADOOP-19069:
-

steveloughran commented on PR #6541:
URL: https://github.com/apache/hadoop/pull/6541#issuecomment-1935888966

   There are a lot of deprecation warnings here, mostly about PARSER
   ```
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:1300:90:[deprecation]
 PARSER in CipherOptionProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:9531:90:[deprecation]
 PARSER in DatanodeInfoProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:11986:90:[deprecation]
 PARSER in DatanodeInfoProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DataTransferProtos.java:16419:82:[deprecation]
 PARSER in TokenProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/ClientNamenodeProtocolProtos.java:13582:96:[deprecation]
 PARSER in BlockStoragePolicyProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/ClientNamenodeProtocolProtos.java:19109:90:[deprecation]
 PARSER in DatanodeInfoProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/ClientNamenodeProtocolProtos.java:21826:90:[deprecation]
 PARSER in DatanodeInfoProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/ClientNamenodeProtocolProtos.java:21839:90:[deprecation]
 PARSER in DatanodeInfoProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/ClientNamenodeProtocolProtos.java:25642:90:[deprecation]
 PARSER in LocatedBlockProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/ClientNamenodeProtocolProtos.java:37401:101:[deprecation]
 PARSER in BatchedDirectoryListingProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/ClientNamenodeProtocolProtos.java:50305:90:[deprecation]
 PARSER in DatanodeInfoProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/ClientNamenodeProtocolProtos.java:51727:91:[deprecation]
 PARSER in StorageReportProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/ClientNamenodeProtocolProtos.java:56123:90:[deprecation]
 PARSER in DatanodeInfoProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/ClientNamenodeProtocolProtos.java:100318:88:[deprecation]
 PARSER in DatanodeIDProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/HdfsProtos.java:21236:82:[deprecation]
 PARSER in TokenProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/ErasureCodingProtos.java:2062:97:[deprecation]
 PARSER in ErasureCodingPolicyProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/ErasureCodingProtos.java:5328:97:[deprecation]
 PARSER in ErasureCodingPolicyProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/ErasureCodingProtos.java:6124:108:[deprecation]
 PARSER in AddErasureCodingPolicyResponseProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/InotifyProtos.java:8418:84:[deprecation]
 PARSER in XAttrProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:22849:85:[deprecation]
 PARSER in BlockProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:30265:97:[deprecation]
 PARSER in ErasureCodingPolicyProto has been deprecated
   

 PARSER in AddErasureCodingPolicyResponseProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs-client/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/InotifyProtos.java:8418:84:[deprecation]
 PARSER in XAttrProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:22849:85:[deprecation]
 PARSER in BlockProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:30265:97:[deprecation]
 PARSER in ErasureCodingPolicyProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/EditLogProtos.java:1566:84:[deprecation]
 PARSER in XAttrProto has been deprecated
   
hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/DatanodeProtocolProtos.java:5686:83:[deprecation]
 PARSER in BlockProto has been 

[jira] [Commented] (HADOOP-18679) Add API for bulk/paged object deletion

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816046#comment-17816046
 ] 

ASF GitHub Bot commented on HADOOP-18679:
-

steveloughran commented on PR #6494:
URL: https://github.com/apache/hadoop/pull/6494#issuecomment-1935875347

   +add a FileUtils method to assist deletion here, with
   `FileUtils.bulkDeletePageSize(path) -> int` and
   `FileUtils.bulkDelete(path, List) -> List`; each will create a bulk delete
   object, execute the operation/probe and then close.
   
   why so?
   
   Makes reflection binding straightforward: no new types; just two methods.
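
   The reflection-binding argument can be sketched in plain Java: two static
   methods whose signatures use only JDK types are resolvable with
   `Class.getMethod`, so a caller (e.g. iceberg or hbase) needs no new
   interfaces on its classpath. The class name, method bodies, and page-size
   value below are hypothetical stand-ins for the proposed `FileUtils`
   methods, not a real Hadoop API.

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

/**
 * Sketch of why two static methods are easy to bind via reflection:
 * both are resolvable with Class.getMethod using standard JDK types.
 * Stand-in for the proposed FileUtils methods; not a real Hadoop API.
 */
public class BulkDeleteBindingDemo {

  /** Stand-in for the proposed FileUtils.bulkDeletePageSize(path). */
  public static int bulkDeletePageSize(String path) {
    return 250; // hypothetical page size for illustration only
  }

  /** Stand-in for the proposed FileUtils.bulkDelete(path, paths);
   *  an empty returned list means no failed deletions. */
  public static List<String> bulkDelete(String base, List<String> paths) {
    return Collections.emptyList();
  }

  public static void main(String[] args) throws Exception {
    Class<?> c = BulkDeleteBindingDemo.class;
    // Reflection binding: no new types, just two method lookups.
    Method pageSize = c.getMethod("bulkDeletePageSize", String.class);
    Method delete = c.getMethod("bulkDelete", String.class, List.class);
    int n = (Integer) pageSize.invoke(null, "s3a://bucket/dir");
    List<?> failures = (List<?>) delete.invoke(null, "s3a://bucket/dir",
        Arrays.asList("s3a://bucket/dir/a", "s3a://bucket/dir/b"));
    System.out.println(n + " failures=" + failures.size());
  }
}
```

   Each invocation would create a bulk delete object, execute the
   operation/probe, and close it, as described in the comment above.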




> Add API for bulk/paged object deletion
> --
>
> Key: HADOOP-18679
> URL: https://issues.apache.org/jira/browse/HADOOP-18679
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> iceberg and hbase could benefit from being able to give a list of individual 
> files to delete; files which may be scattered around the bucket, for better 
> read performance. 
> Add some new optional interface for an object store which allows a caller to 
> submit a list of paths to files to delete, where
> the expectation is
> * if a path is a file: delete
> * if a path is a dir, outcome undefined
> For s3 that'd let us build these into DeleteRequest objects, and submit, 
> without any probes first.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18679. Add API for bulk/paged object deletion [hadoop]

2024-02-09 Thread via GitHub


steveloughran commented on PR #6494:
URL: https://github.com/apache/hadoop/pull/6494#issuecomment-1935875347

   +add a FileUtils method to assist deletion here, with
   `FileUtils.bulkDeletePageSize(path) -> int` and
   `FileUtils.bulkDelete(path, List) -> List`; each will create a bulk delete
   object, execute the operation/probe and then close.
   
   why so?
   
   Makes reflection binding straightforward: no new types; just two methods.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816031#comment-17816031
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

ahmarsuhail commented on code in PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#discussion_r1484099683


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##
@@ -401,4 +411,19 @@ private static Region getS3RegionFromEndpoint(final String 
endpoint,
 return Region.of(AWS_S3_DEFAULT_REGION);
   }
 
+  private static , 
ClientT> void
+  applyS3AccessGrantsConfigurations(BuilderT builder, Configuration conf) {
+if (!conf.getBoolean(AWS_S3_ACCESS_GRANTS_ENABLED, false)){
+  LOG_S3AG_ENABLED.debug("S3 Access Grants plugin is not enabled.");
+  return;
+}
+
+LOG_S3AG_ENABLED.info("S3 Access Grants plugin is enabled.");
+boolean isFallbackEnabled = 
conf.getBoolean(AWS_S3_ACCESS_GRANTS_FALLBACK_TO_IAM_ENABLED, false);
+S3AccessGrantsPlugin accessGrantsPlugin =
+
S3AccessGrantsPlugin.builder().enableFallback(isFallbackEnabled).build();
+builder.addPlugin(accessGrantsPlugin);
+LOG_S3AG_ENABLED.info("S3 Access Grants plugin is added to S3 client with 
fallback: {}", isFallbackEnabled);

Review Comment:
   this won't log, because you have already used the log-once on line 421. Cut
   the log on 421, and just keep this one.
   
   Update text to "S3 Access Grants plugin is enabled with IAM fallback set to
   {}"
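
   For context, assuming `LOG_S3AG_ENABLED` is a log-exactly-once wrapper (in
   the style of Hadoop's `LogExactlyOnce`), a minimal self-contained sketch
   shows why only the first call on such a wrapper is written and any later
   call is silently dropped; this is an illustration, not the Hadoop class
   itself.

```java
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * Minimal sketch of a log-exactly-once wrapper: the first info() call
 * on a given wrapper instance is written, every later call is a no-op.
 * Illustration only; not Hadoop's LogExactlyOnce.
 */
public class LogOnceDemo {
  static final class LogExactlyOnceSketch {
    private final AtomicBoolean logged = new AtomicBoolean(false);
    int emitted = 0; // counts messages actually written

    void info(String format, Object... args) {
      // compareAndSet succeeds exactly once per wrapper instance
      if (logged.compareAndSet(false, true)) {
        emitted++;
        System.out.println(String.format(format.replace("{}", "%s"), args));
      }
    }
  }

  public static void main(String[] args) {
    LogExactlyOnceSketch log = new LogExactlyOnceSketch();
    log.info("S3 Access Grants plugin is enabled.");      // written
    log.info("plugin added with fallback: {}", true);     // dropped
    System.out.println("emitted=" + log.emitted);         // emitted=1
  }
}
```

   Hence the review's suggestion: keep a single log line carrying both facts,
   rather than two calls on the same once-only wrapper.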



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+import org.junit.Assert;
+import org.junit.Test;
+
+import software.amazon.awssdk.awscore.AwsClient;
+import 
software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsIdentityProvider;
+import software.amazon.awssdk.services.s3.S3BaseClientBuilder;
+
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import static org.apache.hadoop.fs.s3a.Constants.AWS_S3_ACCESS_GRANTS_ENABLED;
+
+
+/**
+ * Test S3 Access Grants configurations.
+ */
+public class TestS3AccessGrantConfiguration extends AbstractHadoopTestBase {
+/**
+ * This credential provider will be attached to any client
+ * that has been configured with the S3 Access Grants plugin.
+ * {@link 
software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsPlugin}.
+ */
+public static final String 
S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS =
+S3AccessGrantsIdentityProvider.class.getName();
+
+@Test
+public void testS3AccessGrantsEnabled() throws IOException, 
URISyntaxException {
+// Feature is explicitly enabled
+AwsClient s3AsyncClient = getAwsClient(createConfig(true), true);
+Assert.assertEquals(
+S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+getCredentialProviderName(s3AsyncClient));
+
+AwsClient s3Client = getAwsClient(createConfig(true), false);
+Assert.assertEquals(
+S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+getCredentialProviderName(s3Client));
+}
+
+@Test
+public void testS3AccessGrantsDisabled() throws IOException, 
URISyntaxException {
+// Disabled by default
+AwsClient s3AsyncDefaultClient = getAwsClient(new Configuration(), 
true);
+Assert.assertNotEquals(
+S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+getCredentialProviderName(s3AsyncDefaultClient));
+
+AwsClient s3DefaultClient = getAwsClient(new Configuration(), true);
+Assert.assertNotEquals(
+S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+getCredentialProviderName(s3DefaultClient));
+
+// Disabled if explicitly set
+AwsClient s3AsyncExplicitlyDisabledClient = 

Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]

2024-02-09 Thread via GitHub


ahmarsuhail commented on code in PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#discussion_r1484099683


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##
@@ -401,4 +411,19 @@ private static Region getS3RegionFromEndpoint(final String 
endpoint,
 return Region.of(AWS_S3_DEFAULT_REGION);
   }
 
+  private static , 
ClientT> void
+  applyS3AccessGrantsConfigurations(BuilderT builder, Configuration conf) {
+if (!conf.getBoolean(AWS_S3_ACCESS_GRANTS_ENABLED, false)){
+  LOG_S3AG_ENABLED.debug("S3 Access Grants plugin is not enabled.");
+  return;
+}
+
+LOG_S3AG_ENABLED.info("S3 Access Grants plugin is enabled.");
+boolean isFallbackEnabled = 
conf.getBoolean(AWS_S3_ACCESS_GRANTS_FALLBACK_TO_IAM_ENABLED, false);
+S3AccessGrantsPlugin accessGrantsPlugin =
+
S3AccessGrantsPlugin.builder().enableFallback(isFallbackEnabled).build();
+builder.addPlugin(accessGrantsPlugin);
+LOG_S3AG_ENABLED.info("S3 Access Grants plugin is added to S3 client with 
fallback: {}", isFallbackEnabled);

Review Comment:
   this won't log, because you have already used the log-once on line 421. Cut
   the log on 421, and just keep this one.
   
   Update text to "S3 Access Grants plugin is enabled with IAM fallback set to
   {}"



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+import org.junit.Assert;
+import org.junit.Test;
+
+import software.amazon.awssdk.awscore.AwsClient;
+import 
software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsIdentityProvider;
+import software.amazon.awssdk.services.s3.S3BaseClientBuilder;
+
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import static org.apache.hadoop.fs.s3a.Constants.AWS_S3_ACCESS_GRANTS_ENABLED;
+
+
+/**
+ * Test S3 Access Grants configurations.
+ */
+public class TestS3AccessGrantConfiguration extends AbstractHadoopTestBase {
+/**
+ * This credential provider will be attached to any client
+ * that has been configured with the S3 Access Grants plugin.
+ * {@link 
software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsPlugin}.
+ */
+public static final String 
S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS =
+S3AccessGrantsIdentityProvider.class.getName();
+
+@Test
+public void testS3AccessGrantsEnabled() throws IOException, 
URISyntaxException {
+// Feature is explicitly enabled
+AwsClient s3AsyncClient = getAwsClient(createConfig(true), true);
+Assert.assertEquals(
+S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+getCredentialProviderName(s3AsyncClient));
+
+AwsClient s3Client = getAwsClient(createConfig(true), false);
+Assert.assertEquals(
+S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+getCredentialProviderName(s3Client));
+}
+
+@Test
+public void testS3AccessGrantsDisabled() throws IOException, 
URISyntaxException {
+// Disabled by default
+AwsClient s3AsyncDefaultClient = getAwsClient(new Configuration(), 
true);
+Assert.assertNotEquals(
+S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+getCredentialProviderName(s3AsyncDefaultClient));
+
+AwsClient s3DefaultClient = getAwsClient(new Configuration(), true);
+Assert.assertNotEquals(
+S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+getCredentialProviderName(s3DefaultClient));
+
+// Disabled if explicitly set
+AwsClient s3AsyncExplicitlyDisabledClient = 
getAwsClient(createConfig(false), true);
+Assert.assertNotEquals(
+S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS,
+getCredentialProviderName(s3AsyncExplicitlyDisabledClient));
+
+AwsClient s3ExplicitlyDisabledClient 

Re: [PR] HDFS-17376. Distcp creates Factor 1 replication file on target if Source is EC. [hadoop]

2024-02-09 Thread via GitHub


sodonnel commented on PR #6540:
URL: https://github.com/apache/hadoop/pull/6540#issuecomment-1935695635

   @sadanand48 The change LGTM, but there are two checkstyle warnings in the 
test changes - could you fix them please?





[jira] [Commented] (HADOOP-18342) Upgrade to Avro 1.11.1

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815998#comment-17815998
 ] 

ASF GitHub Bot commented on HADOOP-18342:
-

hadoop-yetus commented on PR #4854:
URL: https://github.com/apache/hadoop/pull/4854#issuecomment-1935665039

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  7s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |   0m 39s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/branch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |   0m 12s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  root in trunk failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. 
 |
   | -0 :warning: |  checkstyle  |   0m 21s | 
[/buildtool-branch-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/buildtool-branch-checkstyle-root.txt)
 |  The patch fails to run checkstyle in root  |
   | -1 :x: |  mvnsite  |   0m 23s | 
[/branch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-mvnsite-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-javadoc-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-javadoc-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  root in trunk failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. 
 |
   | -1 :x: |  spotbugs  |   0m 20s | 
[/branch-spotbugs-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-spotbugs-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  spotbugs  |   0m 22s | 
[/branch-spotbugs-hadoop-client-modules_hadoop-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-spotbugs-hadoop-client-modules_hadoop-client.txt)
 |  hadoop-client in trunk failed.  |
   | -1 :x: |  spotbugs  |   0m 23s | 
[/branch-spotbugs-hadoop-client-modules_hadoop-client-minicluster.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-spotbugs-hadoop-client-modules_hadoop-client-minicluster.txt)
 |  hadoop-client-minicluster in trunk failed.  |
   | -1 :x: |  spotbugs  |   0m 22s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | -1 :x: |  spotbugs  |   2m 58s | 
[/branch-spotbugs-hadoop-mapreduce-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-spotbugs-hadoop-mapreduce-project.txt)
 |  hadoop-mapreduce-project in trunk failed.  |
   | -1 :x: |  spotbugs  |   0m 23s | 
[/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client.txt)
 |  hadoop-mapreduce-client in trunk failed.  |
 

Re: [PR] HADOOP-18342: shaded avro jar [hadoop]

2024-02-09 Thread via GitHub


hadoop-yetus commented on PR #4854:
URL: https://github.com/apache/hadoop/pull/4854#issuecomment-1935665039

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  7s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |   0m 39s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/branch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |   0m 12s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  root in trunk failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. 
 |
   | -0 :warning: |  checkstyle  |   0m 21s | 
[/buildtool-branch-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/buildtool-branch-checkstyle-root.txt)
 |  The patch fails to run checkstyle in root  |
   | -1 :x: |  mvnsite  |   0m 23s | 
[/branch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-mvnsite-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-javadoc-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-javadoc-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  root in trunk failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. 
 |
   | -1 :x: |  spotbugs  |   0m 20s | 
[/branch-spotbugs-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-spotbugs-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  spotbugs  |   0m 22s | 
[/branch-spotbugs-hadoop-client-modules_hadoop-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-spotbugs-hadoop-client-modules_hadoop-client.txt)
 |  hadoop-client in trunk failed.  |
   | -1 :x: |  spotbugs  |   0m 23s | 
[/branch-spotbugs-hadoop-client-modules_hadoop-client-minicluster.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-spotbugs-hadoop-client-modules_hadoop-client-minicluster.txt)
 |  hadoop-client-minicluster in trunk failed.  |
   | -1 :x: |  spotbugs  |   0m 22s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | -1 :x: |  spotbugs  |   2m 58s | 
[/branch-spotbugs-hadoop-mapreduce-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-spotbugs-hadoop-mapreduce-project.txt)
 |  hadoop-mapreduce-project in trunk failed.  |
   | -1 :x: |  spotbugs  |   0m 23s | 
[/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4854/5/artifact/out/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client.txt)
 |  hadoop-mapreduce-client in trunk failed.  |
   | -1 :x: |  spotbugs  |   0m 22s | 

Re: [PR] [NOT FOR MERGE] test with shaded protobuf-java 3.21 (snapshot) [hadoop]

2024-02-09 Thread via GitHub


pjfanning closed pull request #6350: [NOT FOR MERGE] test with shaded 
protobuf-java 3.21 (snapshot)
URL: https://github.com/apache/hadoop/pull/6350





[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-02-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815984#comment-17815984
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

hadoop-yetus commented on PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#issuecomment-1935575917

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m  4s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/1/artifact/out/blanks-eol.txt)
 |  The patch has 7 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 36 new + 2 unchanged - 0 fixed 
= 38 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  0s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 143m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6544 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 07da37719acc 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 33b3350e7bbc1fd328ea085991bb51c3af36f3ca |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 

Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]

2024-02-09 Thread via GitHub


hadoop-yetus commented on PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#issuecomment-1935575917

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m  4s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/1/artifact/out/blanks-eol.txt) |  The patch has 7 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 20s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) |  hadoop-tools/hadoop-aws: The patch generated 36 new + 2 unchanged - 0 fixed = 38 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m  8s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  0s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 143m  8s |  |  |
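
   [Editor's note, not part of the Yetus report: the `-1 blanks` row above suggests re-applying the change with `git apply --whitespace=fix` to strip the 7 trailing blanks. A minimal throwaway-repo sketch of what that flag does; the file and patch names are hypothetical, not from this PR.]

   ```shell
   # Demonstrate `git apply --whitespace=fix` in a scratch repository.
   set -e
   repo=$(mktemp -d)
   cd "$repo"
   git init -q
   printf 'clean line\n' > file.txt
   git add file.txt
   git -c user.name=demo -c user.email=demo@example.com commit -qm init
   # Introduce a change whose added line ends in a trailing blank.
   printf 'clean line\ndirty line \n' > file.txt
   git diff > change.patch
   git checkout -q -- file.txt
   # --whitespace=fix warns about the end-of-line blank on stderr, but
   # applies the patch with the trailing blank stripped.
   git apply --whitespace=fix change.patch
   tail -n 1 file.txt   # the added line, now without the trailing blank
   ```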
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6544 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint |
   | uname | Linux 07da37719acc 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 33b3350e7bbc1fd328ea085991bb51c3af36f3ca |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/1/testReport/ |
   | Max. process+thread count | 529 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/1/console