[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836430#comment-17836430
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

hadoop-yetus commented on PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#issuecomment-2051090442

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 13 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  35m 14s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/22/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 28 new + 17 unchanged - 0 
fixed = 45 total (was 17)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m  7s | 
[/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/22/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  34m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 29s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 130m 35s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Inconsistent synchronization of org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.fileStatusInformationPresent; locked 66% of time. Unsynchronized access at AbfsInputStream.java:[line 691] |
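   
   For readers unfamiliar with this SpotBugs category, here is a minimal, self-contained sketch of the pattern it flags (hypothetical class and method names; not the actual AbfsInputStream code): a field guarded by the instance lock on most paths but touched without it on at least one path, which is what "locked 66% of time" plus an unsynchronized access means.
   
   ```java
   // Hypothetical illustration of "inconsistent synchronization".
   public class StreamStateExample {
   
     // Analogue of the fileStatusInformationPresent field in the warning.
     private boolean fileStatusInformationPresent;
   
     // Synchronized write: counts toward the "locked 66% of time".
     public synchronized void markFileStatusPresent() {
       fileStatusInformationPresent = true;
     }
   
     // Synchronized read: also a locked access.
     public synchronized boolean isPresentLocked() {
       return fileStatusInformationPresent;
     }
   
     // Unsynchronized read: the kind of access SpotBugs reports here
     // (the analogue of AbfsInputStream.java:[line 691]).
     public boolean isPresentUnlocked() {
       return fileStatusInformationPresent;
     }
   }
   ```
   
   The usual remedies are to synchronize the remaining access as well, or to make the field volatile when each access is an independent read or write.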
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/22/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6699 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5295c9c60349 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 074fb392fa2c88e6277c4edd4728159529f6ea60 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |

[jira] [Commented] (HADOOP-19129) ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836429#comment-17836429
 ] 

ASF GitHub Bot commented on HADOOP-19129:
-

hadoop-yetus commented on PR #6676:
URL: https://github.com/apache/hadoop/pull/6676#issuecomment-2051088177

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  11m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 53s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  
hadoop-tools/hadoop-azure: The patch generated 0 new + 4 unchanged - 3 fixed = 
4 total (was 7)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 32s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 139m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6676/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6676 |
   | Optional Tests | dupname asflicense mvnsite unit codespell detsecrets 
shellcheck shelldocs compile javac javadoc mvninstall shadedclient spotbugs 
checkstyle markdownlint |
   | uname | Linux a9a7e15ecad3 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 
20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / abb6c113befda6fb2e94bd91b4aaec63ab69520c |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6676/10/testReport/ |
   | Max. process+thread count | 553 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6676/10/console |

[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836423#comment-17836423
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

hadoop-yetus commented on PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#issuecomment-2051059398

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 13 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  39m  4s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/21/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 28 new + 17 unchanged - 0 
fixed = 45 total (was 17)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m  6s | 
[/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/21/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  39m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 24s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 145m 49s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Inconsistent synchronization of org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.fileStatusInformationPresent; locked 66% of time. Unsynchronized access at AbfsInputStream.java:[line 691] |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/21/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6699 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 3ee62d6ee49b 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 38f5592209338d99bedf8497d826c8cc0eb6b16c |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |

Re: [PR] HDFS-17383:Datanode current block token should come from active NameNode in HA mode [hadoop]

2024-04-11 Thread via GitHub


hadoop-yetus commented on PR #6562:
URL: https://github.com/apache/hadoop/pull/6562#issuecomment-205102

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  17m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  7s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/7/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 455 unchanged 
- 0 fixed = 458 total (was 455)  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 251m 37s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 427m 25s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6562 |
   | JIRA Issue | HDFS-17383 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0cf9a0341f16 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6b1ed1a1cc4aed7414936f60ce07b58214e927d8 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/7/testReport/ |
   | Max. process+thread count | 3049 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/7/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Commented] (HADOOP-19129) ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836410#comment-17836410
 ] 

ASF GitHub Bot commented on HADOOP-19129:
-

anujmodi2021 commented on code in PR #6676:
URL: https://github.com/apache/hadoop/pull/6676#discussion_r1562021850


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java:
##
@@ -102,11 +103,31 @@ public void testListPathWithValidListMaxResultsValues()
   setListMaxResults(listMaxResults);
   int expectedListResultsSize =
   listMaxResults > fileCount ? fileCount : listMaxResults;
-  Assertions.assertThat(listPath(directory.toString())).describedAs(
-  "AbfsClient.listPath result should contain %d items when "
-  + "listMaxResults is %d and directory contains %d items",
-  expectedListResultsSize, listMaxResults, fileCount)
-  .hasSize(expectedListResultsSize);
+
+  AbfsRestOperation op = getFileSystem().getAbfsClient().listPath(
+  directory.toString(), false, getListMaxResults(), null,
+  getTestTracingContext(getFileSystem(), true));
+
+  List list = 
op.getResult().getListResultSchema().paths();
+  String continuationToken = 
op.getResult().getResponseHeader(HttpHeaderConfigurations.X_MS_CONTINUATION);
+
+  if (continuationToken == null) {
+// Listing is complete and number of objects should be same as expected
+Assertions.assertThat(list)
+.describedAs("AbfsClient.listPath() should return %d items"
++ " when listMaxResults is %d, directory contains %d items and 
"
++ "listing is complete",
+expectedListResultsSize, listMaxResults, fileCount)
+.hasSize(expectedListResultsSize);
+  } else {
+// Listing is incomplete and number of objects can be lesser than 
expected

Review Comment:
   Fixed
   



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java:
##
@@ -102,11 +103,31 @@ public void testListPathWithValidListMaxResultsValues()
   setListMaxResults(listMaxResults);
   int expectedListResultsSize =
   listMaxResults > fileCount ? fileCount : listMaxResults;
-  Assertions.assertThat(listPath(directory.toString())).describedAs(
-  "AbfsClient.listPath result should contain %d items when "
-  + "listMaxResults is %d and directory contains %d items",
-  expectedListResultsSize, listMaxResults, fileCount)
-  .hasSize(expectedListResultsSize);
+
+  AbfsRestOperation op = getFileSystem().getAbfsClient().listPath(
+  directory.toString(), false, getListMaxResults(), null,
+  getTestTracingContext(getFileSystem(), true));
+
+  List list = 
op.getResult().getListResultSchema().paths();
+  String continuationToken = 
op.getResult().getResponseHeader(HttpHeaderConfigurations.X_MS_CONTINUATION);
+
+  if (continuationToken == null) {
+// Listing is complete and number of objects should be same as expected
+Assertions.assertThat(list)
+.describedAs("AbfsClient.listPath() should return %d items"
++ " when listMaxResults is %d, directory contains %d items and 
"
++ "listing is complete",
+expectedListResultsSize, listMaxResults, fileCount)
+.hasSize(expectedListResultsSize);
+  } else {
+// Listing is incomplete and number of objects can be lesser than 
expected
+Assertions.assertThat(list)
+.describedAs("AbfsClient.listPath() should return %d items"
++ " or lesser when listMaxResults is %d,  directory contains"

Review Comment:
   Fixed
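
   The branch on the x-ms-continuation header above reflects the paging contract that listPath exercises: a page may hold fewer than maxListResults entries, and the caller follows the token until it is absent. A hedged, self-contained sketch of that loop with hypothetical stand-in types (not the hadoop-azure classes):

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class PagedListingSketch {

     // Hypothetical one-page result: entries plus the token for the next page
     // (ABFS surfaces the token via the x-ms-continuation response header).
     record ListPage(List<String> entries, String continuationToken) {}

     // Hypothetical single-page fetch, analogous to AbfsClient.listPath(...).
     interface Lister {
       ListPage listPage(String path, int maxResults, String continuationToken);
     }

     // Drain all pages: keep calling while the server returns a token. Any one
     // page may hold fewer than maxResults entries (e.g. after server-side
     // partition splits), which is why the test above asserts only an upper
     // bound per page when a continuation token is present.
     static List<String> listAll(Lister lister, String path, int maxResults) {
       List<String> all = new ArrayList<>();
       String token = null;
       do {
         ListPage page = lister.listPage(path, maxResults, token);
         all.addAll(page.entries());
         token = page.continuationToken();
       } while (token != null);
       return all;
     }
   }
   ```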



##
hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md:
##
@@ -634,6 +631,8 @@ kept out of the source tree then referenced through an 
XInclude element:
New files created in folder accountSettings is listed in .gitignore to
prevent accidental cred leaks.
 
+You are all set to run the test srcipt.

Review Comment:
   Fixed





Re: [PR] HDFS-17383:Datanode current block token should come from active NameNode in HA mode [hadoop]

2024-04-11 Thread via GitHub


hadoop-yetus commented on PR #6562:
URL: https://github.com/apache/hadoop/pull/6562#issuecomment-2050991527

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  4s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/8/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 456 unchanged 
- 0 fixed = 459 total (was 456)  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 229m  3s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 372m  7s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6562 |
   | JIRA Issue | HDFS-17383 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux f23da8b11928 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6b1ed1a1cc4aed7414936f60ce07b58214e927d8 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/8/testReport/ |
   | Max. process+thread count | 4448 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/8/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Commented] (HADOOP-19129) ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836409#comment-17836409
 ] 

ASF GitHub Bot commented on HADOOP-19129:
-

anujmodi2021 commented on code in PR #6676:
URL: https://github.com/apache/hadoop/pull/6676#discussion_r1562021413


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsStreamStatistics.java:
##
@@ -136,10 +136,10 @@ public void testAbfsStreamOps() throws Exception {
   }
 
   if 
(fs.getAbfsStore().isAppendBlobKey(fs.makeQualified(largeOperationsFile).toString()))
 {
-// for appendblob data is already flushed, so there is more data to 
read.
+// for appendblob data is already flushed, so there might be more data 
to read.
 assertTrue(String.format("The actual value of %d was not equal to the "
   + "expected value", statistics.getReadOps()),
-  statistics.getReadOps() == (largeValue + 3) || 
statistics.getReadOps() == (largeValue + 4));
+  statistics.getReadOps() >= largeValue  || statistics.getReadOps() <= 
(largeValue + 4));

Review Comment:
   For append blobs, data is available to read as soon as it is appended. So when data has been appended but not yet flushed, it can still be read, which can lead to an additional read call for append blobs.
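
   Expressed as a closed range, the bound described here is largeValue <= readOps <= largeValue + 4. A hedged sketch using AssertJ (which the surrounding tests already use via Assertions.assertThat): note that joining the two comparisons with || as in the quoted diff holds for every value, while isBetween captures the intended range.

   ```java
   import static org.assertj.core.api.Assertions.assertThat;

   class ReadOpsBoundSketch {
     // Hypothetical helper; readOps and largeValue mirror the quoted test.
     static void assertReadOpsInRange(long readOps, long largeValue) {
       assertThat(readOps)
           .describedAs("read ops for an append blob, where reading data that"
               + " is appended but not yet flushed may add one extra call")
           .isBetween(largeValue, largeValue + 4);  // inclusive on both ends
     }
   }
   ```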
   
   
   



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java:
##
@@ -532,4 +528,36 @@ protected long assertAbfsStatistics(AbfsStatistic 
statistic,
 (long) metricMap.get(statistic.getStatName()));
 return expectedValue;
   }
+
+  protected void assumeValidTestConfigPresent(final Configuration conf, final 
String key) {
+String configuredValue = conf.get(key);

Review Comment:
   Taken
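
   For context, a sketch of how a guard like this typically completes; only the signature and first line come from the diff above, and the Assume-based body is an assumption:

   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.junit.Assume;

   public class ConfigGuardSketch {

     // Signature from the quoted diff; the body below is a guess at the
     // conventional shape: skip (not fail) the test when the config is absent.
     protected void assumeValidTestConfigPresent(final Configuration conf,
         final String key) {
       String configuredValue = conf.get(key);
       Assume.assumeTrue("Skipping test: no valid config found for " + key,
           configuredValue != null && !configuredValue.isEmpty());
     }
   }
   ```

   Called at the top of a test, Assume.assumeTrue(...) throws AssumptionViolatedException, which JUnit reports as a skipped test rather than a failure, matching the JIRA description's intent to skip these tests when the required config is not present.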





> ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite
> 
>
> Key: HADOOP-19129
> URL: https://issues.apache.org/jira/browse/HADOOP-19129
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Test Script used by ABFS to validate changes has the following two issues:
>  # When there are a lot of test failures or when the error message of any failing
> test becomes very large, the regex used today to filter test results does not
> work as expected and fails to report all the failing tests.
> To resolve this, we have come up with a new regex that will only target
> single-line test names for reporting them into aggregated test results.
>  # While running the test suite for different combinations of Auth type and
> account type, we add the combination specific configs first and then include
> the account specific configs in the core-site.xml file. This will override the
> combination specific configs like auth type if the same config is present in
> the account specific config file. To avoid this, we will first include the
> account specific configs and then add the combination specific configs.
> Due to the above bug in the test script, some test failures in ABFS were not
> getting our attention. This PR also targets to resolve them. Following are the
> tests fixed:
>  # ITestAzureBlobFileSystemAppend.testCloseOfDataBlockOnAppendComplete(): It
> was failing only when append blobs were enabled. In case of append blobs we
> were not closing the active block on outputstream.close(), due to which
> block.close() was not getting called and assertions around it were failing.
> Fixed by updating the production code to close the active block on flush.
>  # ITestAzureBlobFileSystemAuthorization: Tests in this class work with an
> existing remote filesystem instead of creating a new file system instance.
> For this they require a file system configured in account settings using the
> following config: "fs.contract.test.fs.abfs". Tests were failing with an NPE
> when this config was not present. Updated code to skip these tests if the
> required config is not present.
>  # ITestAbfsClient.testListPathWithValueGreaterThanServerMaximum(): Test was
> failing intermittently only for HNS enabled accounts. Test wants to assert
> that client.listPath() does not return more objects than what is configured
> in maxListResults. The assertion should be that the number of objects returned
> can be less than expected, as the server might return even fewer due to
> partition splits, along with a continuation token.
>  # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsTrue(): Fails
> when the "fs.azure.test.namespace.enabled" config is missing. Ignore the test
> if the config is missing.
>  # ITe

Re: [PR] HDFS-17461. Fix spotbugs in PeerCache#getInternal [hadoop]

2024-04-11 Thread via GitHub


hadoop-yetus commented on PR #6721:
URL: https://github.com/apache/hadoop/pull/6721#issuecomment-2050962070

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   2m 51s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6721/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  37m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 50s |  |  
hadoop-hdfs-project/hadoop-hdfs-client generated 0 new + 0 unchanged - 1 fixed 
= 0 total (was 1)  |
   | +1 :green_heart: |  shadedclient  |  39m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 25s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 144m 59s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6721/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6721 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 05f56d1e9843 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c5e1b9d29fe1f62c1dd40d55c8b4be8c7b77f943 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6721/2/testReport/ |
   | Max. process+thread count | 555 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 
https://ci-hadoop.apache.org/job/ha

[jira] [Commented] (HADOOP-19129) ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836404#comment-17836404
 ] 

ASF GitHub Bot commented on HADOOP-19129:
-

anujmodi2021 commented on code in PR #6676:
URL: https://github.com/apache/hadoop/pull/6676#discussion_r1561990652


##
hadoop-tools/hadoop-azure/src/site/markdown/index.md:
##
@@ -18,8 +18,8 @@
 
 See also:
 
-* [ABFS](./abfs.html)
-* [Testing](./testing_azure.html)
+* [ABFS](./abfs.md)

Review Comment:
   Reverting this change.
   But I observed that these links are not working on GitHub.
   
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/site/markdown/index.md
   
   They are returning 404.
   





> ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite
> 
>
> Key: HADOOP-19129
> URL: https://issues.apache.org/jira/browse/HADOOP-19129
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> The test script used by ABFS to validate changes has the following two issues:
>  # When there are a lot of test failures, or when the error message of any 
> failing test becomes very large, the regex used today to filter test results 
> does not work as expected and fails to report all the failing tests.
> To resolve this, we have come up with a new regex that only targets 
> single-line test names when reporting them into the aggregated test results.
>  # While running the test suite for different combinations of auth type and 
> account type, we add the combination-specific configs first and then include 
> the account-specific configs in the core-site.xml file. This overrides the 
> combination-specific configs, such as the auth type, whenever the same config 
> is present in the account-specific config file. To avoid this, we will first 
> include the account-specific configs and then add the combination-specific 
> configs (see the Configuration sketch at the end of this message).
> Due to the above bug in the test script, some test failures in ABFS were not 
> getting our attention. This PR also resolves them. The following tests were 
> fixed:
>  # ITestAzureBlobFileSystemAppend.testCloseOfDataBlockOnAppendComplete(): It 
> was failing only when append blobs were enabled. In the case of append blobs, 
> we were not closing the active block on outputStream.close(), due to which 
> block.close() was not getting called and assertions around it were failing. 
> Fixed by updating the production code to close the active block on flush.
>  # ITestAzureBlobFileSystemAuthorization: Tests in this class work with an 
> existing remote filesystem instead of creating a new file system instance. 
> For this they require a file system configured in the account settings using 
> the following config: "fs.contract.test.fs.abfs". Tests were failing with an 
> NPE when this config was not present. Updated the code to skip these tests if 
> the required config is not present.
>  # ITestAbfsClient.testListPathWithValueGreaterThanServerMaximum(): Test was 
> failing intermittently, only for HNS-enabled accounts. The test wants to 
> assert that client.listPath() does not return more objects than what is 
> configured in maxListResults. The assertion should be that the number of 
> objects returned can be less than expected, as the server might return even 
> fewer due to partition splits, along with a continuation token.
>  # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsTrue(): 
> Fails when the "fs.azure.test.namespace.enabled" config is missing. Ignore 
> the test if the config is missing.
>  # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsFalse(): 
> Fails when the "fs.azure.test.namespace.enabled" config is missing. Ignore 
> the test if the config is missing.
>  # ITestGetNameSpaceEnabled.testNonXNSAccount(): Fails when the 
> "fs.azure.test.namespace.enabled" config is missing. Ignore the test if the 
> config is missing.
>  # ITestAbfsStreamStatistics.testAbfsStreamOps: Fails when 
> "fs.azure.test.appendblob.enabled" is set to true. The test wanted to assert 
> that the number of read operations can be higher for append blobs than for a 
> normal blob because of the automatic flush. It can also be the same as for a 
> normal blob.
>  # ITestAzureBlobFileSystemCheckAccess.testCheckAccessForAccountWithoutNS: 
> Fails for an FNS account only when the following config is present: 
> "fs.azure.account.hns.enabled". The failure is because the test wants to 
> assert that when the driver does not know whether the account is HNS enabled, 
> it makes a server call and fails. But the above config lets the driver know 
> the account type, skipping the HEAD call. Remove these configs from the 
> test-specific configurations and not 
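
For reference on item 2 above: Hadoop's Configuration gives precedence to resources added later (unless a property is marked final), which is why the account-specific file must be loaded before the combination-specific overrides. A minimal sketch of that ordering; the resource file names here are hypothetical:

import org.apache.hadoop.conf.Configuration;

public class TestConfigOrdering {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // Load the account-specific settings first (hypothetical file name).
    conf.addResource("abfs-account-settings.xml");
    // Load the combination-specific settings second (hypothetical file name);
    // later resources win for duplicate keys, so e.g. the auth type set here
    // is the one that takes effect.
    conf.addResource("abfs-combination-settings.xml");
    System.out.println(conf.get("fs.azure.account.auth.type"));
  }
}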

Re: [PR] HADOOP-19129: [ABFS] Test Fixes and Test Script Bug Fixes [hadoop]

2024-04-11 Thread via GitHub


anujmodi2021 commented on code in PR #6676:
URL: https://github.com/apache/hadoop/pull/6676#discussion_r1561990652


##
hadoop-tools/hadoop-azure/src/site/markdown/index.md:
##
@@ -18,8 +18,8 @@
 
 See also:
 
-* [ABFS](./abfs.html)
-* [Testing](./testing_azure.html)
+* [ABFS](./abfs.md)

Review Comment:
   Reverting this change.
   But I observed that these links are not working on GitHub.
   
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/site/markdown/index.md
   
   They are returning 404.
   






Re: [PR] HDFS-17397. Choose another DN as soon as possible, when encountering network issues [hadoop]

2024-04-11 Thread via GitHub


tangphucnhan commented on PR #6591:
URL: https://github.com/apache/hadoop/pull/6591#issuecomment-2050879234

   Thanks!
   
   On Thu, Mar 28, 2024 at 4:17 PM Apache Hadoop Yetus Account <
   ***@***.***> wrote:
   
   > 💔 *-1 overall*
   > Vote Subsystem Runtime Logfile Comment
   > +0 🆗 reexec 0m 31s Docker mode activated.
   > _ Prechecks _
   > +1 💚 dupname 0m 0s No case conflicting files found.
   > +0 🆗 codespell 0m 1s codespell was not available.
   > +0 🆗 detsecrets 0m 1s detect-secrets was not available.
   > +1 đź’š @author  0m 0s The patch does not
   > contain any @author  tags.
   > -1 ❌ test4tests 0m 0s The patch doesn't appear to include any new or
   > modified tests. Please justify why no new tests are needed for this patch.
   > Also please list what manual steps were performed to verify this patch.
   > _ trunk Compile Tests _
   > +1 💚 mvninstall 44m 30s trunk passed
   > +1 💚 compile 1m 1s trunk passed with JDK
   > Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
   > +1 💚 compile 0m 57s trunk passed with JDK Private
   > Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
   > +1 💚 checkstyle 0m 34s trunk passed
   > +1 💚 mvnsite 0m 59s trunk passed
   > +1 💚 javadoc 0m 50s trunk passed with JDK
   > Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
   > +1 💚 javadoc 0m 44s trunk passed with JDK Private
   > Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
   > -1 ❌ spotbugs 2m 38s
   > /branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
   > 

 hadoop-hdfs-project/hadoop-hdfs-client
   > in trunk has 1 extant spotbugs warnings.
   > +1 💚 shadedclient 34m 49s branch has no errors when building and testing
   > our client artifacts.
   > _ Patch Compile Tests _
   > +1 💚 mvninstall 0m 49s the patch passed
   > +1 💚 compile 0m 53s the patch passed with JDK
   > Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
   > +1 💚 javac 0m 53s the patch passed
   > +1 💚 compile 0m 45s the patch passed with JDK Private
   > Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
   > +1 💚 javac 0m 45s the patch passed
   > +1 💚 blanks 0m 0s The patch has no blanks issues.
   > +1 💚 checkstyle 0m 21s the patch passed
   > +1 💚 mvnsite 0m 47s the patch passed
   > +1 💚 javadoc 0m 36s the patch passed with JDK
   > Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
   > +1 💚 javadoc 0m 35s the patch passed with JDK Private
   > Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
   > +1 💚 spotbugs 2m 34s the patch passed
   > +1 💚 shadedclient 34m 38s patch has no errors when building and testing
   > our client artifacts.
   > _ Other Tests _
   > +1 💚 unit 2m 25s hadoop-hdfs-client in the patch passed.
   > +1 💚 asflicense 0m 37s The patch does not generate ASF License warnings.
   > 135m 1s
   > Subsystem Report/Notes
   > Docker ClientAPI=1.45 ServerAPI=1.45 base:
   > 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/12/artifact/out/Dockerfile
   > GITHUB PR #6591 
   > Optional Tests dupname asflicense compile javac javadoc mvninstall
   > mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
   > uname Linux 8e980caff1e4 5.15.0-94-generic #104
   > -Ubuntu SMP Tue Jan 9 15:25:40
   > UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
   > Build tool maven
   > Personality dev-support/bin/hadoop.sh
   > git revision trunk / 73d6c12
   > 

   > Default Java Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
   > Multi-JDK versions 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
   > /usr/lib/jvm/java-8-openjdk-amd64:Private
   > Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
   > Test Results
   > 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/12/testReport/
   > Max. process+thread count 552 (vs. ulimit of 5500)
   > modules C: hadoop-hdfs-project/hadoop-hdfs-client U:
   > hadoop-hdfs-project/hadoop-hdfs-client
   > Console output
   > https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/12/console
   > versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
   > Powered by Apache Yetus 0.14.0 https://yetus.apache.org
   >
   > This message was automatically generated.
   >
   > —
   > Reply to this email directly, view it on GitHub
   > , or
   > unsubscribe
   > 

   > .
   > You are receiving this because you are subscribed to this thread.Message
   > ID: ***@***.***>
   >
   



Re: [PR] HDFS-17424 [FGL] DelegationTokenSecretManager supports fine-grained lock [hadoop]

2024-04-11 Thread via GitHub


ferhui commented on code in PR #6696:
URL: https://github.com/apache/hadoop/pull/6696#discussion_r1561943317


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenSecretManager.java:
##
@@ -401,7 +402,10 @@ protected void logExpireToken(final 
DelegationTokenIdentifier dtId)
   // closes the edit log files. Doing this inside the
   // fsn lock will prevent being interrupted when stopping
   // the secret manager.
-  namesystem.readLockInterruptibly();
+  // TODO: delegation token is a very independent system, so
+  // it's proper to use a separate r/w lock instead of the fs lock
+  // for getting/renewing/expiring/canceling token or updating master key.

Review Comment:
   @yuanboliu @ZanderXu modify the comments or keep them there?
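
For what the TODO is pointing at, a minimal sketch of a lock private to the secret manager; the class and names are illustrative assumptions, not from the patch:

import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative only: a read/write lock owned by the secret manager, so
// getting/renewing/expiring/canceling tokens and master-key updates stop
// contending on the FSNamesystem lock.
public class TokenLockSketch {
  private final ReentrantReadWriteLock tokenLock = new ReentrantReadWriteLock();

  void runUnderTokenReadLock(Runnable writeEditLogEntry)
      throws InterruptedException {
    tokenLock.readLock().lockInterruptibly();  // interruptible, like the fsn lock
    try {
      writeEditLogEntry.run();  // e.g. the edit-log write for an expired token
    } finally {
      tokenLock.readLock().unlock();
    }
  }
}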






Re: [PR] HDFS-17461. Fix spotbugs in PeerCache#getInternal [hadoop]

2024-04-11 Thread via GitHub


haiyang1987 commented on code in PR #6721:
URL: https://github.com/apache/hadoop/pull/6721#discussion_r1561931239


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/PeerCache.java:
##
@@ -155,7 +155,7 @@ public Peer get(DatanodeID dnId, boolean isDomain) {
 
   private synchronized Peer getInternal(DatanodeID dnId, boolean isDomain) {
 List<Value> sockStreamList = multimap.get(new Key(dnId, isDomain));
-if (sockStreamList == null) {
+if (sockStreamList.isEmpty()) {
   return null;

Review Comment:
   Thanks @ayushtkn for your comment.
   Updated the PR; please help review it again, thanks~






Re: [PR] HDFS-17461. Fix spotbugs in PeerCache#getInternal [hadoop]

2024-04-11 Thread via GitHub


ayushtkn commented on code in PR #6721:
URL: https://github.com/apache/hadoop/pull/6721#discussion_r1561813524


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/PeerCache.java:
##
@@ -155,7 +155,7 @@ public Peer get(DatanodeID dnId, boolean isDomain) {
 
   private synchronized Peer getInternal(DatanodeID dnId, boolean isDomain) {
 List<Value> sockStreamList = multimap.get(new Key(dnId, isDomain));
-if (sockStreamList == null) {
+if (sockStreamList.isEmpty()) {
   return null;

Review Comment:
   We can just drop this if check itself; the logic below can safely handle an 
empty list.
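
Concretely: the Guava multimap used by PeerCache returns an empty view (never null) for an absent key, so once the guard is gone the method just falls through. A sketch of the resulting shape, with the staleness handling elided and PeerCache's private Key/Value types assumed; this is illustrative, not the patch:

import java.util.Iterator;
import java.util.List;

// Sketch as it would sit inside PeerCache: an absent key yields an empty
// list, the loop body never runs, and the method returns null as before.
private synchronized Peer getInternal(DatanodeID dnId, boolean isDomain) {
  List<Value> sockStreamList = multimap.get(new Key(dnId, isDomain));
  Iterator<Value> iter = sockStreamList.iterator();
  while (iter.hasNext()) {
    Value candidate = iter.next();
    iter.remove();
    // expiry/staleness checks elided; return the peer if still usable
    return candidate.getPeer();
  }
  return null;  // empty list: same outcome the old guard produced
}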






Re: [PR] MAPREDUCE-7474. Improve Manifest committer resilience [hadoop]

2024-04-11 Thread via GitHub


hadoop-yetus commented on PR #6716:
URL: https://github.com/apache/hadoop/pull/6716#issuecomment-2050651681

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 46s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 53s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  16m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  16m  8s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6716/3/artifact/out/blanks-eol.txt)
 |  The patch has 3 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   4m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   7m 47s |  |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 32s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 228m 34s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6716/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6716 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 5470963506ac 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3eb8b8742375fbdf056f9241b9f3c847cde5034e |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6716/3/testReport/ |
   | Max. process+thread count | 1367 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hado

Re: [PR] HADOOP-18235. vulnerability: we may leak sensitive information in LocalKeyStoreProvider [hadoop]

2024-04-11 Thread via GitHub


hadoop-yetus commented on PR #4998:
URL: https://github.com/apache/hadoop/pull/4998#issuecomment-2050626727

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   6m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   8m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   8m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m 40s | 
[/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/1/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html)
 |  hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  21m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m  3s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 146m 57s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-common-project/hadoop-common |
   |  |  Exceptional return value of java.io.File.createNewFile() ignored in 
org.apache.hadoop.security.alias.LocalKeyStoreProvider.flush()  At 
LocalKeyStoreProvider.java:ignored in 
org.apache.hadoop.security.alias.LocalKeyStoreProvider.flush()  At 
LocalKeyStoreProvider.java:[line 147] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4998 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8a78c1cdd11e 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5d045909b32ff03a576e18822b4235a5c6dc07bf |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
h
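
On the new spotbugs finding above (the ignored return value of java.io.File.createNewFile() in LocalKeyStoreProvider.flush()): the usual way to clear RV_RETURN_VALUE_IGNORED is to act on the boolean result. A minimal sketch of the pattern, not the actual fix in the PR:

import java.io.File;
import java.io.IOException;

// Act on the boolean instead of discarding it: createNewFile() returns
// false when the file already exists, which is fine here; a false result
// with no file present is an error worth surfacing.
static void ensureFileExists(File file) throws IOException {
  if (!file.createNewFile() && !file.exists()) {
    throw new IOException("Could not create " + file);
  }
}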

[jira] [Commented] (HADOOP-18235) vulnerability: we may leak sensitive information in LocalKeyStoreProvider

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836365#comment-17836365
 ] 

ASF GitHub Bot commented on HADOOP-18235:
-

hadoop-yetus commented on PR #4998:
URL: https://github.com/apache/hadoop/pull/4998#issuecomment-2050626727

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   6m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   8m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   8m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m 40s | 
[/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/1/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html)
 |  hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  21m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m  3s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 146m 57s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-common-project/hadoop-common |
   |  |  Exceptional return value of java.io.File.createNewFile() ignored in 
org.apache.hadoop.security.alias.LocalKeyStoreProvider.flush()  At 
LocalKeyStoreProvider.java:ignored in 
org.apache.hadoop.security.alias.LocalKeyStoreProvider.flush()  At 
LocalKeyStoreProvider.java:[line 147] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4998 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8a78c1cdd11e 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5d045909b32ff03a576e18822b4235a5c6dc07bf |
   | Default Java | Private Build

Re: [PR] MAPREDUCE-7474. Improve Manifest committer resilience [hadoop]

2024-04-11 Thread via GitHub


hadoop-yetus commented on PR #6716:
URL: https://github.com/apache/hadoop/pull/6716#issuecomment-2050579864

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   6m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 30s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   8m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   2m  8s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   8m 10s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6716/4/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   2m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   5m 46s |  |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   2m  4s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 145m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6716/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6716 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 8e0cc7ab2749 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0ed84a98c5e630590f376a445eb28c27c2e8e780 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6716/4/testReport/ |
   | Max. process+thread count | 1620 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hado

[jira] [Commented] (HADOOP-19146) noaa-cors-pds bucket access with global endpoint fails

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836358#comment-17836358
 ] 

ASF GitHub Bot commented on HADOOP-19146:
-

hadoop-yetus commented on PR #6723:
URL: https://github.com/apache/hadoop/pull/6723#issuecomment-2050515997

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   7m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 13s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  93m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6723/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6723 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux bd88072bceb4 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f3b15ae1853cd1fb3abb5207c444e4062c1d6a4e |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6723/1/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6723/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> noaa-cors-pds bucket access with glob

Re: [PR] HADOOP-19146 noaa-cors-pds bucket access with global endpoint fails [hadoop]

2024-04-11 Thread via GitHub


hadoop-yetus commented on PR #6723:
URL: https://github.com/apache/hadoop/pull/6723#issuecomment-2050515997

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   7m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 13s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  93m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6723/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6723 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux bd88072bceb4 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f3b15ae1853cd1fb3abb5207c444e4062c1d6a4e |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6723/1/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6723/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



Re: [PR] HDFS-17439. Support -nonSuperUser for NNThroughputBenchmark: useful f… [hadoop]

2024-04-11 Thread via GitHub


fateh288 commented on PR #6677:
URL: https://github.com/apache/hadoop/pull/6677#issuecomment-2050492580

   Requesting review on this patch.
   The style check failures come from legacy code and were not introduced in 
this patch.
   The unit test failures are also unrelated (the same patch passed the unit 
tests previously, and no logic changes were made in the follow-up patch; only 
style issues were fixed).





[jira] [Commented] (HADOOP-19124) Update org.ehcache from 3.3.1 to 3.8.2.

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836333#comment-17836333
 ] 

ASF GitHub Bot commented on HADOOP-19124:
-

slfan1989 commented on PR #6705:
URL: https://github.com/apache/hadoop/pull/6705#issuecomment-2050376220

   @steveloughran 
   
   > you don't have to: for anything complex I push up so Yetus runs all the 
tests. I just cherry-pick on the terminal to get things over fast, running any 
new tests by hand
   > 
   > what's key is this: the PR is in, we shouldn't be re-reviewing it, as if 
there are changes they should start at trunk and go backwards. So as long as 
yetus is happy, and you've done any other tests you need (cloud storage...) 
then you can merge without waiting for any +1 from others.
   > 
   > sometimes I do merge the main commit and followups into a single commit on 
the older lines, e.g 
[33bbcfa](https://github.com/apache/hadoop/commit/33bbcfa4b042d0677e659569c9ca6fd730707ea2)
 this just makes it easier to manage and track
   
   I apologize for my delayed response. branch-3.4 has been completed, but when 
I attempted to backport to branch-3.3, I encountered complications because many 
modifications related to YARN Federation are not present on branch-3.3. This 
has made the pull request somewhat complex, but I will continue working on 
completing this task.




> Update org.ehcache from 3.3.1 to 3.8.2.
> ---
>
> Key: HADOOP-19124
> URL: https://issues.apache.org/jira/browse/HADOOP-19124
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.1
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> We need to enhance the caching functionality in Yarn Federation by adding a 
> limit on the number of cached entries. I noticed that the version of 
> org.ehcache is relatively old and requires an upgrade.
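
For reference, Ehcache 3.x expresses such a limit through a heap resource pool sized in entries. A minimal sketch; the cache name, key/value types, and the 1000-entry cap are chosen purely for illustration:

import org.ehcache.Cache;
import org.ehcache.CacheManager;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;

public class EhcacheLimitSketch {
  public static void main(String[] args) {
    // heap(1000) caps the cache at 1000 entries on the Java heap.
    CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
        .withCache("federationCache",
            CacheConfigurationBuilder.newCacheConfigurationBuilder(
                String.class, Object.class, ResourcePoolsBuilder.heap(1000)))
        .build(true);
    Cache<String, Object> cache = cacheManager.getCache(
        "federationCache", String.class, Object.class);
    cache.put("key", "value");
    cacheManager.close();
  }
}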






Re: [PR] HADOOP-19124. Update org.ehcache from 3.3.1 to 3.8.2. (#6665) [hadoop]

2024-04-11 Thread via GitHub


slfan1989 commented on PR #6705:
URL: https://github.com/apache/hadoop/pull/6705#issuecomment-2050376220

   @steveloughran 
   
   > you don't have to: for anything complex I push up so Yetus runs all the 
tests. I just cherry-pick on the terminal to get things over fast, running any 
new tests by hand
   > 
   > what's key is this: the PR is in, we shouldn't be re-reviewing it, as if 
there are changes they should start at trunk and go backwards. So as long as 
yetus is happy, and you've done any other tests you need (cloud storage...) 
then you can merge without waiting for any +1 from others.
   > 
   > sometimes I do merge the main commit and followups into a single commit on 
the older lines, e.g 
[33bbcfa](https://github.com/apache/hadoop/commit/33bbcfa4b042d0677e659569c9ca6fd730707ea2)
 this just makes it easier to manage and track
   
   I apologize for my delayed response. branch-3.4 has been completed, but when 
I attempted to backport to branch-3.3, I encountered complications because many 
modifications related to YARN Federation are not present on branch-3.3. This 
has made the pull request somewhat complex, but I will continue working on 
completing this task.





[jira] [Commented] (HADOOP-18708) AWS SDK V2 - Implement CSE

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836332#comment-17836332
 ] 

ASF GitHub Bot commented on HADOOP-18708:
-

steveloughran commented on code in PR #6164:
URL: https://github.com/apache/hadoop/pull/6164#discussion_r1561533019


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/EncryptionS3ClientFactory.java:
##
@@ -0,0 +1,123 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+import java.net.URI;
+
+import software.amazon.awssdk.services.s3.S3AsyncClient;
+import software.amazon.awssdk.services.s3.S3Client;
+import software.amazon.encryption.s3.S3AsyncEncryptionClient;
+import software.amazon.encryption.s3.S3EncryptionClient;
+
+import org.apache.hadoop.fs.s3a.impl.CSEMaterials;
+
+import static 
org.apache.hadoop.fs.s3a.impl.InstantiationIOException.unavailable;
+
+public class EncryptionS3ClientFactory extends DefaultS3ClientFactory {

Review Comment:
   Add javadocs, including a mention that this needs the CSE library on the runtime classpath.
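
For the runtime dependency the javadocs should call out: the factory builds on the amazon-s3-encryption-client-java library, roughly along these lines. A minimal sketch; the key id is hypothetical and the actual wiring in the patch may differ:

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.encryption.s3.S3EncryptionClient;

public class EncryptionClientSketch {
  // The encryption client wraps a plain S3 client and needs the S3
  // encryption client artifact on the runtime classpath.
  public static S3Client createEncryptingClient() {
    return S3EncryptionClient.builder()
        .wrappedClient(S3Client.create())
        .kmsKeyId("hypothetical-kms-key-id")  // hypothetical key id
        .build();
  }
}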



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/CSEMaterials.java:
##
@@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+/**
+ * This class is for storing information about key type and corresponding key
+ * to be used for client side encryption.
+ */
+public class CSEMaterials {

Review Comment:
   Does this get passed through delegation tokens? They can be used to 
pass encryption secrets into a cluster.





> AWS SDK V2 - Implement CSE
> --
>
> Key: HADOOP-18708
> URL: https://issues.apache.org/jira/browse/HADOOP-18708
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
>
> S3 Encryption client for SDK V2 is now available, so add client side 
> encryption back in. 







[jira] [Commented] (HADOOP-18708) AWS SDK V2 - Implement CSE

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836329#comment-17836329
 ] 

ASF GitHub Bot commented on HADOOP-18708:
-

steveloughran commented on PR #6164:
URL: https://github.com/apache/hadoop/pull/6164#issuecomment-2050354049

   > java.util.concurrent.TimeoutException
   
   easily added to the check
   
   > ITestS3AContractVectoredRead.testEOFRanges416Handling fails because S3EC 
does not throw an exception if the range is greater than EOF.
   
   just returns less data? not ideal.
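
   The expectation under discussion: a ranged read starting past EOF should
surface as an error (HTTP 416 mapped to an exception) rather than as a short
read. A sketch of that assertion using Hadoop's LambdaTestUtils; the helper
shape and the exact exception type raised by the encrypted client are
assumptions, not the actual test:

{code:java}
import java.io.EOFException;

import org.apache.hadoop.fs.FSDataInputStream;

import static org.apache.hadoop.test.LambdaTestUtils.intercept;

class RangePastEofCheck {

  /** fileLength is the known length of the object under test. */
  static void expectRangePastEofToFail(FSDataInputStream in, long fileLength)
      throws Exception {
    byte[] buffer = new byte[128];
    // a positioned read entirely beyond EOF must raise, not return short data
    intercept(EOFException.class, () ->
        in.readFully(fileLength + 1, buffer));
  }
}
{code}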
   




> AWS SDK V2 - Implement CSE
> --
>
> Key: HADOOP-18708
> URL: https://issues.apache.org/jira/browse/HADOOP-18708
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
>
> S3 Encryption client for SDK V2 is now available, so add client side 
> encryption back in. 






[jira] [Commented] (HADOOP-19146) noaa-cors-pds bucket access with global endpoint fails

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836328#comment-17836328
 ] 

ASF GitHub Bot commented on HADOOP-19146:
-

virajjasani commented on PR #6723:
URL: https://github.com/apache/hadoop/pull/6723#issuecomment-2050352353

   Tested with scale profile, with and without global endpoint setting




> noaa-cors-pds bucket access with global endpoint fails
> --
>
> Key: HADOOP-19146
> URL: https://issues.apache.org/jira/browse/HADOOP-19146
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> All tests accessing noaa-cors-pds use us-east-1 region, as configured at 
> bucket level. If a global endpoint is configured (e.g. us-west-2), they fail to 
> access the bucket.
>  
> Sample error:
> {code:java}
> org.apache.hadoop.fs.s3a.AWSRedirectException: Received permanent redirect 
> response to region [us-east-1].  This likely indicates that the S3 region 
> configured in fs.s3a.endpoint.region does not match the AWS region containing 
> the bucket.: null (Service: S3, Status Code: 301, Request ID: 
> PMRWMQC9S91CNEJR, Extended Request ID: 
> 6Xrg9thLiZXffBM9rbSCRgBqwTxdLAzm6OzWk9qYJz1kGex3TVfdiMtqJ+G4vaYCyjkqL8cteKI/NuPBQu5A0Q==)
>     at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:253)
>     at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:155)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:4041)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3947)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getFileStatus$26(S3AFileSystem.java:3924)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2716)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2735)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:3922)
>     at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:115)
>     at org.apache.hadoop.fs.Globber.doGlob(Globber.java:349)
>     at org.apache.hadoop.fs.Globber.glob(Globber.java:202)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$globStatus$35(S3AFileSystem.java:4956)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2716)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2735)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:4949)
>     at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:313)
>     at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:281)
>     at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:445)
>     at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:311)
>     at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:328)
>     at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:201)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1677)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1674)
>  {code}
> {code:java}
> Caused by: software.amazon.awssdk.services.s3.model.S3Exception: null 
> (Service: S3, Status Code: 301, Request ID: PMRWMQC9S91CNEJR, Extended 
> Request ID: 
> 6Xrg9thLiZXffBM9rbSCRgBqwTxdLAzm6OzWk9qYJz1kGex3TVfdiMtqJ+G4vaYCyjkqL8cteKI/NuPBQu5A0Q==)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleErrorResponse(AwsXmlPredicatedResponseHandler.java:156)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleResponse(AwsXmlPredicatedResponseHandler.java:108)
>     at 
> software.amazon.awssdk.




[jira] [Updated] (HADOOP-19146) noaa-cors-pds bucket access with global endpoint fails

2024-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19146:

Labels: pull-request-available  (was: )

> noaa-cors-pds bucket access with global endpoint fails
> --
>
> Key: HADOOP-19146
> URL: https://issues.apache.org/jira/browse/HADOOP-19146
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> All tests accessing noaa-cors-pds use us-east-1 region, as configured at 
> bucket level. If a global endpoint is configured (e.g. us-west-2), they fail to 
> access the bucket.
>  
> Sample error:
> {code:java}
> org.apache.hadoop.fs.s3a.AWSRedirectException: Received permanent redirect 
> response to region [us-east-1].  This likely indicates that the S3 region 
> configured in fs.s3a.endpoint.region does not match the AWS region containing 
> the bucket.: null (Service: S3, Status Code: 301, Request ID: 
> PMRWMQC9S91CNEJR, Extended Request ID: 
> 6Xrg9thLiZXffBM9rbSCRgBqwTxdLAzm6OzWk9qYJz1kGex3TVfdiMtqJ+G4vaYCyjkqL8cteKI/NuPBQu5A0Q==)
>     at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:253)
>     at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:155)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:4041)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3947)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getFileStatus$26(S3AFileSystem.java:3924)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2716)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2735)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:3922)
>     at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:115)
>     at org.apache.hadoop.fs.Globber.doGlob(Globber.java:349)
>     at org.apache.hadoop.fs.Globber.glob(Globber.java:202)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$globStatus$35(S3AFileSystem.java:4956)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2716)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2735)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:4949)
>     at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:313)
>     at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:281)
>     at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:445)
>     at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:311)
>     at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:328)
>     at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:201)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1677)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1674)
>  {code}
> {code:java}
> Caused by: software.amazon.awssdk.services.s3.model.S3Exception: null 
> (Service: S3, Status Code: 301, Request ID: PMRWMQC9S91CNEJR, Extended 
> Request ID: 
> 6Xrg9thLiZXffBM9rbSCRgBqwTxdLAzm6OzWk9qYJz1kGex3TVfdiMtqJ+G4vaYCyjkqL8cteKI/NuPBQu5A0Q==)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleErrorResponse(AwsXmlPredicatedResponseHandler.java:156)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleResponse(AwsXmlPredicatedResponseHandler.java:108)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:85)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredica

[jira] [Commented] (HADOOP-19124) Update org.ehcache from 3.3.1 to 3.8.2.

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836326#comment-17836326
 ] 

ASF GitHub Bot commented on HADOOP-19124:
-

steveloughran commented on PR #6705:
URL: https://github.com/apache/hadoop/pull/6705#issuecomment-2050349164

   >  I will backport PRs on the command line in the future.
   
   you don't have to: for anything complex I push up a PR so yetus runs all the 
tests. I just cherry-pick on the terminal to get things over fast, running any 
new tests by hand.
   
   what's key is this: the PR is in, so we shouldn't be re-reviewing it; if 
there are changes, they should start at trunk and go backwards. So as long as 
yetus is happy, and you've done any other tests you need (cloud storage...), 
then you can merge without waiting for any +1 from others.
   
   sometimes I do merge the main commit and followups into a single commit on 
the older branches, e.g. 33bbcfa4b042; this just makes it easier to manage and 
track.




> Update org.ehcache from 3.3.1 to 3.8.2.
> ---
>
> Key: HADOOP-19124
> URL: https://issues.apache.org/jira/browse/HADOOP-19124
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.1
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> We need to enhance the caching functionality in Yarn Federation by adding a 
> limit on the number of cached entries. I noticed that the version of 
> org.ehcache is relatively old and requires an upgrade.
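
For reference, capping entry counts is straightforward in the ehcache 3.x
builder API; a sketch of a bounded heap cache (the cache name and value type
are illustrative, not the Yarn Federation code):

{code:java}
import org.ehcache.Cache;
import org.ehcache.CacheManager;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;

public class BoundedCacheExample {
  public static void main(String[] args) {
    CacheManager manager = CacheManagerBuilder.newCacheManagerBuilder()
        .withCache("federationCache",
            CacheConfigurationBuilder.newCacheConfigurationBuilder(
                String.class, String.class,
                // cap the cache at 1000 on-heap entries
                ResourcePoolsBuilder.heap(1000)))
        .build(true);
    Cache<String, String> cache =
        manager.getCache("federationCache", String.class, String.class);
    cache.put("subCluster1", "ACTIVE");
    System.out.println(cache.get("subCluster1"));
    manager.close();
  }
}
{code}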






[jira] [Commented] (HADOOP-19146) noaa-cors-pds bucket access with global endpoint fails

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836327#comment-17836327
 ] 

ASF GitHub Bot commented on HADOOP-19146:
-

virajjasani opened a new pull request, #6723:
URL: https://github.com/apache/hadoop/pull/6723

   Jira: HADOOP-19146




> noaa-cors-pds bucket access with global endpoint fails
> --
>
> Key: HADOOP-19146
> URL: https://issues.apache.org/jira/browse/HADOOP-19146
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
> All tests accessing noaa-cors-pds use us-east-1 region, as configured at 
> bucket level. If a global endpoint is configured (e.g. us-west-2), they fail to 
> access the bucket.
>  
> Sample error:
> {code:java}
> org.apache.hadoop.fs.s3a.AWSRedirectException: Received permanent redirect 
> response to region [us-east-1].  This likely indicates that the S3 region 
> configured in fs.s3a.endpoint.region does not match the AWS region containing 
> the bucket.: null (Service: S3, Status Code: 301, Request ID: 
> PMRWMQC9S91CNEJR, Extended Request ID: 
> 6Xrg9thLiZXffBM9rbSCRgBqwTxdLAzm6OzWk9qYJz1kGex3TVfdiMtqJ+G4vaYCyjkqL8cteKI/NuPBQu5A0Q==)
>     at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:253)
>     at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:155)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:4041)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3947)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getFileStatus$26(S3AFileSystem.java:3924)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2716)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2735)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:3922)
>     at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:115)
>     at org.apache.hadoop.fs.Globber.doGlob(Globber.java:349)
>     at org.apache.hadoop.fs.Globber.glob(Globber.java:202)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$globStatus$35(S3AFileSystem.java:4956)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2716)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2735)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:4949)
>     at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:313)
>     at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:281)
>     at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:445)
>     at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:311)
>     at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:328)
>     at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:201)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1677)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1674)
>  {code}
> {code:java}
> Caused by: software.amazon.awssdk.services.s3.model.S3Exception: null 
> (Service: S3, Status Code: 301, Request ID: PMRWMQC9S91CNEJR, Extended 
> Request ID: 
> 6Xrg9thLiZXffBM9rbSCRgBqwTxdLAzm6OzWk9qYJz1kGex3TVfdiMtqJ+G4vaYCyjkqL8cteKI/NuPBQu5A0Q==)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleErrorResponse(AwsXmlPredicatedResponseHandler.java:156)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleResponse(AwsXmlPredicatedResponseHandler.java:108)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.jav




[jira] [Updated] (HADOOP-19146) noaa-cors-pds bucket access with global endpoint fails

2024-04-11 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-19146:
--
Component/s: test

> noaa-cors-pds bucket access with global endpoint fails
> --
>
> Key: HADOOP-19146
> URL: https://issues.apache.org/jira/browse/HADOOP-19146
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
> All tests accessing noaa-cors-pds use us-east-1 region, as configured at 
> bucket level. If a global endpoint is configured (e.g. us-west-2), they fail to 
> access the bucket.
>  
> Sample error:
> {code:java}
> org.apache.hadoop.fs.s3a.AWSRedirectException: Received permanent redirect 
> response to region [us-east-1].  This likely indicates that the S3 region 
> configured in fs.s3a.endpoint.region does not match the AWS region containing 
> the bucket.: null (Service: S3, Status Code: 301, Request ID: 
> PMRWMQC9S91CNEJR, Extended Request ID: 
> 6Xrg9thLiZXffBM9rbSCRgBqwTxdLAzm6OzWk9qYJz1kGex3TVfdiMtqJ+G4vaYCyjkqL8cteKI/NuPBQu5A0Q==)
>     at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:253)
>     at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:155)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:4041)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3947)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getFileStatus$26(S3AFileSystem.java:3924)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2716)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2735)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:3922)
>     at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:115)
>     at org.apache.hadoop.fs.Globber.doGlob(Globber.java:349)
>     at org.apache.hadoop.fs.Globber.glob(Globber.java:202)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$globStatus$35(S3AFileSystem.java:4956)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2716)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2735)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:4949)
>     at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:313)
>     at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:281)
>     at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:445)
>     at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:311)
>     at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:328)
>     at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:201)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1677)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1674)
>  {code}
> {code:java}
> Caused by: software.amazon.awssdk.services.s3.model.S3Exception: null 
> (Service: S3, Status Code: 301, Request ID: PMRWMQC9S91CNEJR, Extended 
> Request ID: 
> 6Xrg9thLiZXffBM9rbSCRgBqwTxdLAzm6OzWk9qYJz1kGex3TVfdiMtqJ+G4vaYCyjkqL8cteKI/NuPBQu5A0Q==)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleErrorResponse(AwsXmlPredicatedResponseHandler.java:156)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleResponse(AwsXmlPredicatedResponseHandler.java:108)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:85)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:43)
>    




[jira] [Created] (HADOOP-19146) noaa-cors-pds bucket access with global endpoint fails

2024-04-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-19146:
-

 Summary: noaa-cors-pds bucket access with global endpoint fails
 Key: HADOOP-19146
 URL: https://issues.apache.org/jira/browse/HADOOP-19146
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 3.4.0
Reporter: Viraj Jasani


All tests accessing noaa-cors-pds use us-east-1 region, as configured at bucket 
level. If a global endpoint is configured (e.g. us-west-2), they fail to access 
the bucket.

 

Sample error:
{code:java}
org.apache.hadoop.fs.s3a.AWSRedirectException: Received permanent redirect 
response to region [us-east-1].  This likely indicates that the S3 region 
configured in fs.s3a.endpoint.region does not match the AWS region containing 
the bucket.: null (Service: S3, Status Code: 301, Request ID: PMRWMQC9S91CNEJR, 
Extended Request ID: 
6Xrg9thLiZXffBM9rbSCRgBqwTxdLAzm6OzWk9qYJz1kGex3TVfdiMtqJ+G4vaYCyjkqL8cteKI/NuPBQu5A0Q==)
    at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:253)
    at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:155)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:4041)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3947)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getFileStatus$26(S3AFileSystem.java:3924)
    at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
    at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
    at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2716)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2735)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:3922)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:115)
    at org.apache.hadoop.fs.Globber.doGlob(Globber.java:349)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:202)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$globStatus$35(S3AFileSystem.java:4956)
    at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
    at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
    at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2716)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2735)
    at 
org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:4949)
    at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:313)
    at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:281)
    at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:445)
    at 
org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:311)
    at 
org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:328)
    at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:201)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1677)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1674)
 {code}
{code:java}
Caused by: software.amazon.awssdk.services.s3.model.S3Exception: null (Service: 
S3, Status Code: 301, Request ID: PMRWMQC9S91CNEJR, Extended Request ID: 
6Xrg9thLiZXffBM9rbSCRgBqwTxdLAzm6OzWk9qYJz1kGex3TVfdiMtqJ+G4vaYCyjkqL8cteKI/NuPBQu5A0Q==)
    at 
software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleErrorResponse(AwsXmlPredicatedResponseHandler.java:156)
    at 
software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleResponse(AwsXmlPredicatedResponseHandler.java:108)
    at 
software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:85)
    at 
software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:43)
    at 
software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler$Crc32ValidationResponseHandler.handle(AwsSyncClientHandler.java:93)
    at 
software.amazon.awssdk.core.internal.handler.BaseClientHandler.lambda$successTransformationResponseHandler$7(BaseClientHandler.java:279)
    ...
    ...
    ...
    at 
software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:53)
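
The usual way out of this mismatch, assuming S3A's standard per-bucket override
mechanism (fs.s3a.bucket.<bucket>.<option> takes precedence over the global
fs.s3a.<option>), is to pin the region for this one bucket while leaving the
global default alone; a sketch:

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PinBucketRegion {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // global default used by most tests (the failing setup in this report)
    conf.set("fs.s3a.endpoint.region", "us-west-2");
    // per-bucket override: pin noaa-cors-pds to the region it lives in
    conf.set("fs.s3a.bucket.noaa-cors-pds.endpoint.region", "us-east-1");
    FileSystem fs = FileSystem.get(URI.create("s3a://noaa-cors-pds/"), conf);
    System.out.println(fs.getFileStatus(new Path("/")));
  }
}
{code}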

[jira] [Assigned] (HADOOP-19146) noaa-cors-pds bucket access with global endpoint fails

2024-04-11 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-19146:
-

Assignee: Viraj Jasani

> noaa-cors-pds bucket access with global endpoint fails
> --
>
> Key: HADOOP-19146
> URL: https://issues.apache.org/jira/browse/HADOOP-19146
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
> All tests accessing noaa-cors-pds use us-east-1 region, as configured at 
> bucket level. If a global endpoint is configured (e.g. us-west-2), they fail to 
> access the bucket.
>  
> Sample error:
> {code:java}
> org.apache.hadoop.fs.s3a.AWSRedirectException: Received permanent redirect 
> response to region [us-east-1].  This likely indicates that the S3 region 
> configured in fs.s3a.endpoint.region does not match the AWS region containing 
> the bucket.: null (Service: S3, Status Code: 301, Request ID: 
> PMRWMQC9S91CNEJR, Extended Request ID: 
> 6Xrg9thLiZXffBM9rbSCRgBqwTxdLAzm6OzWk9qYJz1kGex3TVfdiMtqJ+G4vaYCyjkqL8cteKI/NuPBQu5A0Q==)
>     at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:253)
>     at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:155)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:4041)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3947)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getFileStatus$26(S3AFileSystem.java:3924)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2716)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2735)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:3922)
>     at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:115)
>     at org.apache.hadoop.fs.Globber.doGlob(Globber.java:349)
>     at org.apache.hadoop.fs.Globber.glob(Globber.java:202)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$globStatus$35(S3AFileSystem.java:4956)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
>     at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2716)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2735)
>     at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.globStatus(S3AFileSystem.java:4949)
>     at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:313)
>     at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:281)
>     at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:445)
>     at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:311)
>     at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:328)
>     at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:201)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1677)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1674)
>  {code}
> {code:java}
> Caused by: software.amazon.awssdk.services.s3.model.S3Exception: null 
> (Service: S3, Status Code: 301, Request ID: PMRWMQC9S91CNEJR, Extended 
> Request ID: 
> 6Xrg9thLiZXffBM9rbSCRgBqwTxdLAzm6OzWk9qYJz1kGex3TVfdiMtqJ+G4vaYCyjkqL8cteKI/NuPBQu5A0Q==)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleErrorResponse(AwsXmlPredicatedResponseHandler.java:156)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleResponse(AwsXmlPredicatedResponseHandler.java:108)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:85)
>     at 
> software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:43)

[jira] [Commented] (HADOOP-19081) move ssh/sftp code out of hadoop-common into a dedicated jar

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836323#comment-17836323
 ] 

ASF GitHub Bot commented on HADOOP-19081:
-

steveloughran commented on code in PR #6693:
URL: https://github.com/apache/hadoop/pull/6693#discussion_r1561513799


##
hadoop-tools/hadoop-ftp/src/main/conf/log4j.properties:
##
@@ -0,0 +1,337 @@
+# Licensed to the Apache Software Foundation (ASF) under one

Review Comment:
   does this file get into the JAR? if so it should go into test/resources





> move ssh/sftp code out of hadoop-common into a dedicated jar
> 
>
> Key: HADOOP-19081
> URL: https://issues.apache.org/jira/browse/HADOOP-19081
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.4.0, 3.3.6
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> We could call it hadoop-ssh-common. This code is only used in one or two other 
> places, and it means that hadoop-common (which is used in a lot of places) 
> leaks dependencies on the ssh-core and jsch jars to many places.
> See [~steve_l]'s comments in HADOOP-19076.









Re: [PR] MAPREDUCE-7474. Improve Manifest committer resilience [hadoop]

2024-04-11 Thread via GitHub


steveloughran commented on PR #6716:
URL: https://github.com/apache/hadoop/pull/6716#issuecomment-2050326218

   Reviews invited from @mukund-thakur @anmolanmol1234 @anujmodi2021 
@HarshitGupta11
   
   





[jira] [Commented] (HADOOP-19079) HttpExceptionUtils to check that loaded class is really an exception before instantiation

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836315#comment-17836315
 ] 

ASF GitHub Bot commented on HADOOP-19079:
-

steveloughran commented on PR #6557:
URL: https://github.com/apache/hadoop/pull/6557#issuecomment-2050311828

   > Junit has assertThrows though. Would that be a bit more Java friendly?
   
   one thing intercept does, which I haven't seen the others do, is include the 
toString() value of anything returned by the callable in the assertion, which 
lets you add tests that explicitly print their state on failures, rather than 
just "lambda expression invoked didn't fail".
   Diagnostics information is too important to be lost...
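
   The difference in failure diagnostics, sketched below; intercept lives in
org.apache.hadoop.test.LambdaTestUtils, and the filesystem setup here is
illustrative:

{code:java}
import java.io.FileNotFoundException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import static org.apache.hadoop.test.LambdaTestUtils.intercept;
import static org.junit.Assert.assertThrows;

public class InterceptVsAssertThrows {

  public static void demo(FileSystem fs) throws Exception {
    Path missing = new Path("/no/such/file");

    // JUnit: if the call unexpectedly succeeds, the failure message
    // only says that no exception was thrown.
    assertThrows(FileNotFoundException.class,
        () -> fs.getFileStatus(missing));

    // intercept: if the call unexpectedly succeeds, the failure message
    // includes the toString() of whatever the callable returned, i.e.
    // the FileStatus the call wrongly produced.
    FileNotFoundException e = intercept(FileNotFoundException.class,
        () -> fs.getFileStatus(missing));
    // the caught exception is returned for further assertions
    System.out.println("caught: " + e);
  }
}
{code}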




> HttpExceptionUtils to check that loaded class is really an exception before 
> instantiation
> -
>
> Key: HADOOP-19079
> URL: https://issues.apache.org/jira/browse/HADOOP-19079
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common, security
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.9, 3.5.0, 3.4.1
>
>
> It can be dangerous taking class names as inputs from HTTP messages even if 
> we control the source. Issue is in HttpExceptionUtils in hadoop-common 
> (validateResponse method).
> I can provide a PR that will highlight the issue.









[jira] [Resolved] (HADOOP-19079) HttpExceptionUtils to check that loaded class is really an exception before instantiation

2024-04-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-19079.
-
Fix Version/s: 3.3.9
   3.5.0
   3.4.1
   Resolution: Fixed

> HttpExceptionUtils to check that loaded class is really an exception before 
> instantiation
> -
>
> Key: HADOOP-19079
> URL: https://issues.apache.org/jira/browse/HADOOP-19079
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common, security
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.9, 3.5.0, 3.4.1
>
>
> It can be dangerous taking class names as inputs from HTTP messages even if 
> we control the source. Issue is in HttpExceptionUtils in hadoop-common 
> (validateResponse method).
> I can provide a PR that will highlight the issue.






[jira] [Commented] (HADOOP-19079) HttpExceptionUtils to check that loaded class is really an exception before instantiation

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836311#comment-17836311
 ] 

ASF GitHub Bot commented on HADOOP-19079:
-

steveloughran commented on PR #6557:
URL: https://github.com/apache/hadoop/pull/6557#issuecomment-2050296398

   merged to trunk; cherry-picking to branch-3.4, and perhaps branch-3.3 as well




> HttpExceptionUtils to check that loaded class is really an exception before 
> instantiation
> -
>
> Key: HADOOP-19079
> URL: https://issues.apache.org/jira/browse/HADOOP-19079
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common, security
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> It can be dangerous taking class names as inputs from HTTP messages even if 
> we control the source. Issue is in HttpExceptionUtils in hadoop-common 
> (validateResponse method).
> I can provide a PR that will highlight the issue.









[jira] [Commented] (HADOOP-19079) HttpExceptionUtils to check that loaded class is really an exception before instantiation

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836310#comment-17836310
 ] 

ASF GitHub Bot commented on HADOOP-19079:
-

steveloughran merged PR #6557:
URL: https://github.com/apache/hadoop/pull/6557




> HttpExceptionUtils to check that loaded class is really an exception before 
> instantiation
> -
>
> Key: HADOOP-19079
> URL: https://issues.apache.org/jira/browse/HADOOP-19079
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common, security
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> It can be dangerous taking class names as inputs from HTTP messages even if 
> we control the source. Issue is in HttpExceptionUtils in hadoop-common 
> (validateResponse method).
> I can provide a PR that will highlight the issue.









[jira] [Updated] (HADOOP-19079) HttpExceptionUtils to check that loaded class is really an exception before instantiation

2024-04-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19079:

Summary: HttpExceptionUtils to check that loaded class is really an 
exception before instantiation  (was: check that class that is loaded is really 
an exception)

> HttpExceptionUtils to check that loaded class is really an exception before 
> instantiation
> -
>
> Key: HADOOP-19079
> URL: https://issues.apache.org/jira/browse/HADOOP-19079
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common, security
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> It can be dangerous taking class names as inputs from HTTP messages even if 
> we control the source. Issue is in HttpExceptionUtils in hadoop-common 
> (validateResponse method).
> I can provide a PR that will highlight the issue.






Re: [PR] HADOOP-18679. Add API for bulk/paged object deletion [hadoop]

2024-04-11 Thread via GitHub


mukund-thakur commented on code in PR #6494:
URL: https://github.com/apache/hadoop/pull/6494#discussion_r1561315131


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/BulkDeleteOperationCallbacksImpl.java:
##
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.nio.file.AccessDeniedException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import software.amazon.awssdk.services.s3.model.DeleteObjectsResponse;
+import software.amazon.awssdk.services.s3.model.ObjectIdentifier;
+import software.amazon.awssdk.services.s3.model.S3Error;
+
+import org.apache.hadoop.fs.s3a.Retries;
+import org.apache.hadoop.fs.s3a.S3AStore;
+import org.apache.hadoop.fs.store.audit.AuditSpan;
+import org.apache.hadoop.util.functional.Tuples;
+
+import static java.util.Collections.emptyList;
+import static java.util.Collections.singletonList;
+import static org.apache.hadoop.fs.s3a.Invoker.once;
+import static org.apache.hadoop.util.Preconditions.checkArgument;
+import static org.apache.hadoop.util.functional.Tuples.pair;
+
+/**
+ * Callbacks for the bulk delete operation.
+ */
+public class BulkDeleteOperationCallbacksImpl implements
+    BulkDeleteOperation.BulkDeleteOperationCallbacks {
+
+  /**
+   * Path for logging.
+   */
+  private final String path;
+
+  /** Page size for bulk delete. */
+  private final int pageSize;
+
+  /** span for operations. */
+  private final AuditSpan span;
+
+  /**
+   * Store.
+   */
+  private final S3AStore store;
+
+
+  public BulkDeleteOperationCallbacksImpl(final S3AStore store,
+      String path, int pageSize, AuditSpan span) {
+    this.span = span;
+    this.pageSize = pageSize;
+    this.path = path;
+    this.store = store;
+  }
+
+  @Override
+  @Retries.RetryTranslated
+  public List<Map.Entry<String, String>> bulkDelete(
+      final List<ObjectIdentifier> keysToDelete)
+      throws IOException, IllegalArgumentException {
+    span.activate();
+    final int size = keysToDelete.size();
+    checkArgument(size <= pageSize,
+        "Too many paths to delete in one operation: %s", size);
+    if (size == 0) {
+      return emptyList();
+    }
+
+    if (size == 1) {
+      return deleteSingleObject(keysToDelete.get(0).key());
+    }
+
+    final DeleteObjectsResponse response = once("bulkDelete", path, () ->
+        store.deleteObjects(store.getRequestFactory()
+            .newBulkDeleteRequestBuilder(keysToDelete)
+            .build())).getValue();
+    final List<S3Error> errors = response.errors();
+    if (errors.isEmpty()) {
+      // all good.
+      return emptyList();
+    } else {
+      return errors.stream()
+          .map(e -> pair(e.key(), e.message()))

Review Comment:
   yes e.toString() sounds better.
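
   i.e. the mapping would carry the full S3Error detail (key, error code and
message) rather than only the message; a sketch of the suggested change:

{code:java}
// before: only the message survives into the returned pair
//   .map(e -> pair(e.key(), e.message()))
// after: S3Error.toString() keeps the code and message for diagnostics
return errors.stream()
    .map(e -> pair(e.key(), e.toString()))
    .collect(Collectors.toList());
{code}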






[jira] [Commented] (HADOOP-18679) Add API for bulk/paged object deletion

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836273#comment-17836273
 ] 

ASF GitHub Bot commented on HADOOP-18679:
-

mukund-thakur commented on code in PR #6494:
URL: https://github.com/apache/hadoop/pull/6494#discussion_r1561315131


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/BulkDeleteOperationCallbacksImpl.java:
##
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.nio.file.AccessDeniedException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import software.amazon.awssdk.services.s3.model.DeleteObjectsResponse;
+import software.amazon.awssdk.services.s3.model.ObjectIdentifier;
+import software.amazon.awssdk.services.s3.model.S3Error;
+
+import org.apache.hadoop.fs.s3a.Retries;
+import org.apache.hadoop.fs.s3a.S3AStore;
+import org.apache.hadoop.fs.store.audit.AuditSpan;
+import org.apache.hadoop.util.functional.Tuples;
+
+import static java.util.Collections.emptyList;
+import static java.util.Collections.singletonList;
+import static org.apache.hadoop.fs.s3a.Invoker.once;
+import static org.apache.hadoop.util.Preconditions.checkArgument;
+import static org.apache.hadoop.util.functional.Tuples.pair;
+
+/**
+ * Callbacks for the bulk delete operation.
+ */
+public class BulkDeleteOperationCallbacksImpl implements
+BulkDeleteOperation.BulkDeleteOperationCallbacks {
+
+  /**
+   * Path for logging.
+   */
+  private final String path;
+
+  /** Page size for bulk delete. */
+  private final int pageSize;
+
+  /** span for operations. */
+  private final AuditSpan span;
+
+  /**
+   * Store.
+   */
+  private final S3AStore store;
+
+
+  public BulkDeleteOperationCallbacksImpl(final S3AStore store,
+  String path, int pageSize, AuditSpan span) {
+this.span = span;
+this.pageSize = pageSize;
+this.path = path;
+this.store = store;
+  }
+
+  @Override
+  @Retries.RetryTranslated
+  public List<Map.Entry<String, String>> bulkDelete(final List<ObjectIdentifier> keysToDelete)
+  throws IOException, IllegalArgumentException {
+span.activate();
+final int size = keysToDelete.size();
+checkArgument(size <= pageSize,
+"Too many paths to delete in one operation: %s", size);
+if (size == 0) {
+  return emptyList();
+}
+
+if (size == 1) {
+  return deleteSingleObject(keysToDelete.get(0).key());
+}
+
+final DeleteObjectsResponse response = once("bulkDelete", path, () ->
+store.deleteObjects(store.getRequestFactory()
+.newBulkDeleteRequestBuilder(keysToDelete)
+.build())).getValue();
+final List<S3Error> errors = response.errors();
+if (errors.isEmpty()) {
+  // all good.
+  return emptyList();
+} else {
+  return errors.stream()
+  .map(e -> pair(e.key(), e.message()))

Review Comment:
   yes e.toString() sounds better.





> Add API for bulk/paged object deletion
> --
>
> Key: HADOOP-18679
> URL: https://issues.apache.org/jira/browse/HADOOP-18679
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> Iceberg and HBase could benefit from being able to give a list of individual 
> files to delete, files which may be scattered around the bucket, for better 
> read performance. 
> Add a new optional interface for an object store which allows a caller to 
> submit a list of paths to files to delete, where the expectation is:
> * if a path is a file: delete
> * if a path is a dir: outcome undefined
> For s3 that'd let us build these into DeleteRequest objects and submit them, 
> without any probes first.
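
A sketch of how a caller such as Iceberg or HBase might drive the callback under review, using the signature from the diff earlier in this message (the names paths, callbacks and LOG are illustrative assumptions):
{code:java}
// Build one ObjectIdentifier per scattered file, then delete them in one call.
List<ObjectIdentifier> keys = paths.stream()
    .map(p -> ObjectIdentifier.builder().key(p).build())
    .collect(Collectors.toList());

// Each returned entry is a (key, error) pair for a failed deletion.
List<Map.Entry<String, String>> failures = callbacks.bulkDelete(keys);
failures.forEach(f ->
    LOG.warn("Failed to delete {}: {}", f.getKey(), f.getValue()));
{code}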



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-

[jira] [Commented] (HADOOP-18296) Memory fragmentation in ChecksumFileSystem Vectored IO implementation.

2024-04-11 Thread Mukund Thakur (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836272#comment-17836272
 ] 

Mukund Thakur commented on HADOOP-18296:


Yes, it is. Although direct buffers are not used in ORC/Parquet, I am thinking 
we should throw an exception if the user calls readVectored on direct 
buffers, something like:

 

 
{code:java}
class ChecksumFSInputChecker {
  ...
  @Override
  public void readVectored(List<? extends FileRange> ranges,
      IntFunction<ByteBuffer> allocate) throws IOException {
    if (allocate.apply(0).isDirect()) {
      throw new UnsupportedOperationException(
          "Direct buffer is not supported");
    }
  }
}{code}
cc [~ste...@apache.org] 

 

 

> Memory fragmentation in ChecksumFileSystem Vectored IO implementation.
> --
>
> Key: HADOOP-18296
> URL: https://issues.apache.org/jira/browse/HADOOP-18296
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Priority: Minor
>  Labels: fs
>
> As we have implemented merging of ranges in the ChecksumFSInputChecker 
> implementation of the vectored IO API, it can lead to memory fragmentation. Let 
> me explain by example.
>  
> Suppose a client requests 3 ranges: 
> 0-500, 700-1000 and 1200-1500.
> Because of merging, all the above ranges will get merged into one, and we 
> will allocate a big byte buffer of size 0-1500 but return sliced byte buffers 
> for the desired ranges.
> Once the client is done reading all the ranges, it will only be able to 
> free the memory for the requested ranges; the memory of the gaps (here, 500-700 
> and 1000-1200) will never be released.
>  
> Note this only happens for direct byte buffers.
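
A minimal sketch of the fragmentation described above, assuming three ranges merged into one direct buffer (illustrative only, not the actual ChecksumFileSystem code):
{code:java}
import java.nio.ByteBuffer;

// One direct buffer backs the merged range 0-1500.
ByteBuffer merged = ByteBuffer.allocateDirect(1500);

// The client only ever sees slices of the ranges it requested.
merged.limit(500);  merged.position(0);
ByteBuffer first = merged.slice();    // 0-500
merged.limit(1000); merged.position(700);
ByteBuffer second = merged.slice();   // 700-1000
merged.limit(1500); merged.position(1200);
ByteBuffer third = merged.slice();    // 1200-1500

// Dropping the slices does not free `merged`: the gap bytes
// (500-700 and 1000-1200) stay allocated until every slice is unreachable.
{code}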



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19140) [ABFS, S3A] Add IORateLimiter api to hadoop common

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836245#comment-17836245
 ] 

ASF GitHub Bot commented on HADOOP-19140:
-

steveloughran commented on code in PR #6703:
URL: https://github.com/apache/hadoop/pull/6703#discussion_r1561220101


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/IORateLimiterSupport.java:
##
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.impl;
+
+import org.apache.hadoop.fs.IORateLimiter;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.util.RateLimiting;
+import org.apache.hadoop.util.RateLimitingFactory;
+
+import static org.apache.hadoop.util.Preconditions.checkArgument;
+
+/**
+ * Implementation support for {@link IORateLimiter}.
+ */
+public final class IORateLimiterSupport {

Review Comment:
   with the op name and path you can be clever: 
   * limit by path
   * use the operation name and apply a "multiplier" of actual IO, to include the extra 
operations made (rename: list, copy, delete). For S3, separate read/write IO 
capacities would need to be requested.
   * consider some operations free and give them a cost of 0
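   
   A sketch of that cost-multiplier idea (operation names, costs, and the surrounding fields are illustrative assumptions, not the actual implementation):
   
   ```java
   // Hypothetical cost table: one logical operation may consume several IOs.
   Map<String, Integer> costs = new HashMap<>();
   costs.put("op_rename", 4);           // rename fans out: list + copy + delete
   costs.put("op_delete", 1);
   costs.put("op_get_file_status", 0);  // considered free
   
   int multiplier = costs.getOrDefault(operation, 1);
   Duration wait = limiter.acquireIOCapacity(
       operation, source, dest, requestedCapacity * multiplier);
   ```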
   





> [ABFS, S3A] Add IORateLimiter api to hadoop common
> --
>
> Key: HADOOP-19140
> URL: https://issues.apache.org/jira/browse/HADOOP-19140
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>
> Create a rate limiter API in hadoop common from which code (initially, the manifest 
> committer and bulk delete) can request IO capacity for a specific operation.
> This can be exported by filesystems to support shared rate limiting across 
> all threads.
> Pulled from the HADOOP-19093 PR.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19140) [ABFS, S3A] Add IORateLimiter api to hadoop common

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836243#comment-17836243
 ] 

ASF GitHub Bot commented on HADOOP-19140:
-

steveloughran commented on code in PR #6703:
URL: https://github.com/apache/hadoop/pull/6703#discussion_r1561216836


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/IORateLimiter.java:
##
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs;
+
+import java.time.Duration;
+import javax.annotation.Nullable;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * An optional interface for classes that provide rate limiters.
+ * For a filesystem source, the operation name SHOULD be one of
+ * those listed in
+ * {@link org.apache.hadoop.fs.statistics.StoreStatisticNames}
+ * if the operation is listed there.
+ * <p>
+ * This interface is intended to be exported by FileSystems so that
+ * applications wishing to perform bulk operations may request access
+ * to a rate limiter which is shared across all threads interacting
+ * with the store.
+ * That is: the rate limiting is global to the specific instance of the
+ * object implementing this interface.
+ * <p>
+ * It is not expected to be shared with other instances of the same
+ * class, or across processes.
+ * <p>
+ * This means it is primarily of benefit when limiting bulk operations
+ * which can overload an (object) store from a small pool of threads.
+ * Examples of this can include:
+ * <ul>
+ *   <li>Bulk delete operations</li>
+ *   <li>Bulk rename operations</li>
+ *   <li>Completing many in-progress uploads</li>
+ *   <li>Deep and wide recursive treewalks</li>
+ *   <li>Reading/prefetching many blocks within a file</li>
+ * </ul>
+ * In cluster applications, it is more likely that rate limiting is
+ * useful during job commit operations, or processes with many threads.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+public interface IORateLimiter {
+
+  /**
+   * Acquire IO capacity.
+   * <p>
+   * The implementation may assign different costs to the different
+   * operations.
+   * <p>
+   * If there is not enough space, the permits will be acquired,
+   * but the subsequent call will block until the capacity has been
+   * refilled.
+   * <p>
+   * The path parameter is used to support stores where there may be
+   * different throttling under different paths.
+   * @param operation operation being performed. Must not be null, may be "",
+   * should be from {@link org.apache.hadoop.fs.statistics.StoreStatisticNames}
+   * where there is a matching operation.
+   * @param source path for operations.
+   * Use "/" for root/store-wide operations.
+   * @param dest destination path for rename operations or any other
+   * operation which takes two paths.
+   * @param requestedCapacity capacity to acquire.
+   * Must be greater than or equal to 0.
+   * @return time spent waiting for output.
+   */
+  Duration acquireIOCapacity(
+  String operation,
+  Path source,

Review Comment:
   s3 throttling does, as it is per prefix. 
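   
   A sketch of per-prefix limiting under that assumption (the keyed map and the 100 permits/second figure are illustrative; RateLimiting/RateLimitingFactory are the classes imported in the diff above, and acquire() is assumed to return the wait Duration):
   
   ```java
   // Hypothetical fragment of an IORateLimiter implementation:
   // one limiter per first path element, created on demand.
   private final Map<String, RateLimiting> limiters = new ConcurrentHashMap<>();
   
   Duration acquirePerPrefix(String operation, Path source, int capacity) {
     String prefix = source.toUri().getPath().split("/", 3)[1]; // ignores the bare "/" case
     RateLimiting limiter = limiters.computeIfAbsent(prefix,
         p -> RateLimitingFactory.create(100));
     return limiter.acquire(capacity);
   }
   ```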





> [ABFS, S3A] Add IORateLimiter api to hadoop common
> --
>
> Key: HADOOP-19140
> URL: https://issues.apache.org/jira/browse/HADOOP-19140
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>
> Create a rate limiter API in hadoop common from which code (initially, the manifest 
> committer and bulk delete) can request IO capacity for a specific operation.
> This can be exported by filesystems to support shared rate limiting across 
> all threads.
> Pulled from the HADOOP-19093 PR.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apach

Re: [PR] HADOOP-19140. [ABFS, S3A] Add IORateLimiter API [hadoop]

2024-04-11 Thread via GitHub


steveloughran commented on code in PR #6703:
URL: https://github.com/apache/hadoop/pull/6703#discussion_r1561220101


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/IORateLimiterSupport.java:
##
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.impl;
+
+import org.apache.hadoop.fs.IORateLimiter;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.util.RateLimiting;
+import org.apache.hadoop.util.RateLimitingFactory;
+
+import static org.apache.hadoop.util.Preconditions.checkArgument;
+
+/**
+ * Implementation support for {@link IORateLimiter}.
+ */
+public final class IORateLimiterSupport {

Review Comment:
   with the op name and path you can be clever: 
   * limit by path
   * use the operation name and apply a "multiplier" of actual IO, to include the extra 
operations made (rename: list, copy, delete). For S3, separate read/write IO 
capacities would need to be requested.
   * consider some operations free and give them a cost of 0
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19140. [ABFS, S3A] Add IORateLimiter API [hadoop]

2024-04-11 Thread via GitHub


steveloughran commented on code in PR #6703:
URL: https://github.com/apache/hadoop/pull/6703#discussion_r1561216836


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/IORateLimiter.java:
##
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs;
+
+import java.time.Duration;
+import javax.annotation.Nullable;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * An optional interface for classes that provide rate limiters.
+ * For a filesystem source, the operation name SHOULD be one of
+ * those listed in
+ * {@link org.apache.hadoop.fs.statistics.StoreStatisticNames}
+ * if the operation is listed there.
+ * <p>
+ * This interface is intended to be exported by FileSystems so that
+ * applications wishing to perform bulk operations may request access
+ * to a rate limiter which is shared across all threads interacting
+ * with the store.
+ * That is: the rate limiting is global to the specific instance of the
+ * object implementing this interface.
+ * <p>
+ * It is not expected to be shared with other instances of the same
+ * class, or across processes.
+ * <p>
+ * This means it is primarily of benefit when limiting bulk operations
+ * which can overload an (object) store from a small pool of threads.
+ * Examples of this can include:
+ * <ul>
+ *   <li>Bulk delete operations</li>
+ *   <li>Bulk rename operations</li>
+ *   <li>Completing many in-progress uploads</li>
+ *   <li>Deep and wide recursive treewalks</li>
+ *   <li>Reading/prefetching many blocks within a file</li>
+ * </ul>
+ * In cluster applications, it is more likely that rate limiting is
+ * useful during job commit operations, or processes with many threads.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+public interface IORateLimiter {
+
+  /**
+   * Acquire IO capacity.
+   * <p>
+   * The implementation may assign different costs to the different
+   * operations.
+   * <p>
+   * If there is not enough space, the permits will be acquired,
+   * but the subsequent call will block until the capacity has been
+   * refilled.
+   * <p>
+   * The path parameter is used to support stores where there may be
+   * different throttling under different paths.
+   * @param operation operation being performed. Must not be null, may be "",
+   * should be from {@link org.apache.hadoop.fs.statistics.StoreStatisticNames}
+   * where there is a matching operation.
+   * @param source path for operations.
+   * Use "/" for root/store-wide operations.
+   * @param dest destination path for rename operations or any other
+   * operation which takes two paths.
+   * @param requestedCapacity capacity to acquire.
+   * Must be greater than or equal to 0.
+   * @return time spent waiting for output.
+   */
+  Duration acquireIOCapacity(
+  String operation,
+  Path source,

Review Comment:
   s3 throttling does, as it is per prefix. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18679) Add API for bulk/paged object deletion

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836215#comment-17836215
 ] 

ASF GitHub Bot commented on HADOOP-18679:
-

steveloughran commented on code in PR #6494:
URL: https://github.com/apache/hadoop/pull/6494#discussion_r1561126867


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/BulkDeleteOperationCallbacksImpl.java:
##
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.nio.file.AccessDeniedException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import software.amazon.awssdk.services.s3.model.DeleteObjectsResponse;
+import software.amazon.awssdk.services.s3.model.ObjectIdentifier;
+import software.amazon.awssdk.services.s3.model.S3Error;
+
+import org.apache.hadoop.fs.s3a.Retries;
+import org.apache.hadoop.fs.s3a.S3AStore;
+import org.apache.hadoop.fs.store.audit.AuditSpan;
+import org.apache.hadoop.util.functional.Tuples;
+
+import static java.util.Collections.emptyList;
+import static java.util.Collections.singletonList;
+import static org.apache.hadoop.fs.s3a.Invoker.once;
+import static org.apache.hadoop.util.Preconditions.checkArgument;
+import static org.apache.hadoop.util.functional.Tuples.pair;
+
+/**
+ * Callbacks for the bulk delete operation.
+ */
+public class BulkDeleteOperationCallbacksImpl implements
+BulkDeleteOperation.BulkDeleteOperationCallbacks {
+
+  /**
+   * Path for logging.
+   */
+  private final String path;
+
+  /** Page size for bulk delete. */
+  private final int pageSize;
+
+  /** span for operations. */
+  private final AuditSpan span;
+
+  /**
+   * Store.
+   */
+  private final S3AStore store;
+
+
+  public BulkDeleteOperationCallbacksImpl(final S3AStore store,
+  String path, int pageSize, AuditSpan span) {
+this.span = span;
+this.pageSize = pageSize;
+this.path = path;
+this.store = store;
+  }
+
+  @Override
+  @Retries.RetryTranslated
+  public List<Map.Entry<String, String>> bulkDelete(final List<ObjectIdentifier> keysToDelete)
+  throws IOException, IllegalArgumentException {
+span.activate();
+final int size = keysToDelete.size();
+checkArgument(size <= pageSize,
+"Too many paths to delete in one operation: %s", size);
+if (size == 0) {
+  return emptyList();
+}
+
+if (size == 1) {
+  return deleteSingleObject(keysToDelete.get(0).key());
+}
+
+final DeleteObjectsResponse response = once("bulkDelete", path, () ->
+store.deleteObjects(store.getRequestFactory()
+.newBulkDeleteRequestBuilder(keysToDelete)
+.build())).getValue();
+final List<S3Error> errors = response.errors();
+if (errors.isEmpty()) {
+  // all good.
+  return emptyList();
+} else {
+  return errors.stream()
+  .map(e -> pair(e.key(), e.message()))

Review Comment:
   or e.toString()?





> Add API for bulk/paged object deletion
> --
>
> Key: HADOOP-18679
> URL: https://issues.apache.org/jira/browse/HADOOP-18679
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> Iceberg and HBase could benefit from being able to give a list of individual 
> files to delete, files which may be scattered around the bucket, for better 
> read performance. 
> Add a new optional interface for an object store which allows a caller to 
> submit a list of paths to files to delete, where the expectation is:
> * if a path is a file: delete
> * if a path is a dir: outcome undefined
> For s3 that'd let us build these into DeleteRequest objects and submit them, 
> without any probes first.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe

Re: [PR] HADOOP-18679. Add API for bulk/paged object deletion [hadoop]

2024-04-11 Thread via GitHub


steveloughran commented on code in PR #6494:
URL: https://github.com/apache/hadoop/pull/6494#discussion_r1561126867


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/BulkDeleteOperationCallbacksImpl.java:
##
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.nio.file.AccessDeniedException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import software.amazon.awssdk.services.s3.model.DeleteObjectsResponse;
+import software.amazon.awssdk.services.s3.model.ObjectIdentifier;
+import software.amazon.awssdk.services.s3.model.S3Error;
+
+import org.apache.hadoop.fs.s3a.Retries;
+import org.apache.hadoop.fs.s3a.S3AStore;
+import org.apache.hadoop.fs.store.audit.AuditSpan;
+import org.apache.hadoop.util.functional.Tuples;
+
+import static java.util.Collections.emptyList;
+import static java.util.Collections.singletonList;
+import static org.apache.hadoop.fs.s3a.Invoker.once;
+import static org.apache.hadoop.util.Preconditions.checkArgument;
+import static org.apache.hadoop.util.functional.Tuples.pair;
+
+/**
+ * Callbacks for the bulk delete operation.
+ */
+public class BulkDeleteOperationCallbacksImpl implements
+BulkDeleteOperation.BulkDeleteOperationCallbacks {
+
+  /**
+   * Path for logging.
+   */
+  private final String path;
+
+  /** Page size for bulk delete. */
+  private final int pageSize;
+
+  /** span for operations. */
+  private final AuditSpan span;
+
+  /**
+   * Store.
+   */
+  private final S3AStore store;
+
+
+  public BulkDeleteOperationCallbacksImpl(final S3AStore store,
+  String path, int pageSize, AuditSpan span) {
+this.span = span;
+this.pageSize = pageSize;
+this.path = path;
+this.store = store;
+  }
+
+  @Override
+  @Retries.RetryTranslated
+  public List<Map.Entry<String, String>> bulkDelete(final List<ObjectIdentifier> keysToDelete)
+  throws IOException, IllegalArgumentException {
+span.activate();
+final int size = keysToDelete.size();
+checkArgument(size <= pageSize,
+"Too many paths to delete in one operation: %s", size);
+if (size == 0) {
+  return emptyList();
+}
+
+if (size == 1) {
+  return deleteSingleObject(keysToDelete.get(0).key());
+}
+
+final DeleteObjectsResponse response = once("bulkDelete", path, () ->
+store.deleteObjects(store.getRequestFactory()
+.newBulkDeleteRequestBuilder(keysToDelete)
+.build())).getValue();
+final List<S3Error> errors = response.errors();
+if (errors.isEmpty()) {
+  // all good.
+  return emptyList();
+} else {
+  return errors.stream()
+  .map(e -> pair(e.key(), e.message()))

Review Comment:
   or e.toString()?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16822) Provide source artifacts for hadoop-client-api

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836212#comment-17836212
 ] 

ASF GitHub Bot commented on HADOOP-16822:
-

steveloughran commented on PR #6719:
URL: https://github.com/apache/hadoop/pull/6719#issuecomment-2049829358

   thanks, merged




> Provide source artifacts for hadoop-client-api
> --
>
> Key: HADOOP-16822
> URL: https://issues.apache.org/jira/browse/HADOOP-16822
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.1, 3.4.0, 3.2.3
>Reporter: Karel Kolman
>Assignee: Karel Kolman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.2.3
>
> Attachments: HADOOP-16822-hadoop-client-api-source-jar.patch
>
>
> h5. Improvement request
> The hadoop-client-api (& hadoop-client-runtime) artifacts, which shade their 
> third-party libraries, are super useful.
>  
> Having an uber source jar for hadoop-client-api (maybe even 
> hadoop-client-runtime) would be great for downstream development & debugging 
> purposes.
> Are there any obstacles or objections against providing a fat source jar with all 
> the hadoop client API as well?
> h5. Dev links
> - *maven-shaded-plugin* and its *shadeSourcesContent* attribute
> - 
> https://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#shadeSourcesContent
> h2. Update April 2024: this has been reverted.
> It turns out that it complicates debugging. If you want the source when 
> debugging, the best way is just to check out the hadoop release you are 
> working with and point your IDE at it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] (3.3) Revert "HADOOP-16822. Provide source artifacts for hadoop-client-api" [hadoop]

2024-04-11 Thread via GitHub


steveloughran commented on PR #6719:
URL: https://github.com/apache/hadoop/pull/6719#issuecomment-2049829358

   thanks, merged


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] (3.3) Revert "HADOOP-16822. Provide source artifacts for hadoop-client-api" [hadoop]

2024-04-11 Thread via GitHub


steveloughran merged PR #6719:
URL: https://github.com/apache/hadoop/pull/6719


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16822) Provide source artifacts for hadoop-client-api

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836211#comment-17836211
 ] 

ASF GitHub Bot commented on HADOOP-16822:
-

steveloughran merged PR #6719:
URL: https://github.com/apache/hadoop/pull/6719




> Provide source artifacts for hadoop-client-api
> --
>
> Key: HADOOP-16822
> URL: https://issues.apache.org/jira/browse/HADOOP-16822
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.1, 3.4.0, 3.2.3
>Reporter: Karel Kolman
>Assignee: Karel Kolman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.2.3
>
> Attachments: HADOOP-16822-hadoop-client-api-source-jar.patch
>
>
> h5. Improvement request
> The hadoop-client-api (& hadoop-client-runtime) artifacts, which shade their 
> third-party libraries, are super useful.
>  
> Having an uber source jar for hadoop-client-api (maybe even 
> hadoop-client-runtime) would be great for downstream development & debugging 
> purposes.
> Are there any obstacles or objections against providing a fat source jar with all 
> the hadoop client API as well?
> h5. Dev links
> - *maven-shaded-plugin* and its *shadeSourcesContent* attribute
> - 
> https://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#shadeSourcesContent
> h2. Update April 2024: this has been reverted.
> It turns out that it complicates debugging. If you want the source when 
> debugging, the best way is just to check out the hadoop release you are 
> working with and point your IDE at it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17461. Fix spotbugs in PeerCache#getInternal [hadoop]

2024-04-11 Thread via GitHub


hadoop-yetus commented on PR #6721:
URL: https://github.com/apache/hadoop/pull/6721#issuecomment-2049777273

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   2m 50s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6721/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  39m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 39s |  |  
hadoop-hdfs-project/hadoop-hdfs-client generated 0 new + 0 unchanged - 1 fixed 
= 0 total (was 1)  |
   | +1 :green_heart: |  shadedclient  |  37m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 143m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6721/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6721 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 33e5234aa821 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4116fa0bb6144c988fc8a5291d16d01107e42121 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6721/1/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 
https://ci-hadoop.apache.org/job/ha

Re: [PR] HDFS-17458. Remove unnecessary BP lock in ReplicaMap. [hadoop]

2024-04-11 Thread via GitHub


hadoop-yetus commented on PR #6717:
URL: https://github.com/apache/hadoop/pull/6717#issuecomment-2049744473

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  42m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 265m 27s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 442m 25s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6717/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6717 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux e0f27f718e00 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4acc5b4369d4e0528645386df1720ee3bb8cced3 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6717/4/testReport/ |
   | Max. process+thread count | 2635 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6717/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific 

[PR] HDFS-17462. Fix NPE in Router concat when trg is an empty file [hadoop]

2024-04-11 Thread via GitHub


fannaihao opened a new pull request, #6722:
URL: https://github.com/apache/hadoop/pull/6722

   
   
   ### Description of PR
   When the trg of Router concat is an empty file, it will trigger an NPE in Router, 
and the concat will fail, for example:
   
![image](https://github.com/apache/hadoop/assets/40593494/4edc0aed-08ee-4e1d-8236-84c20f61d15d)
   
   This is because when trg is an empty file, the NameNode will return 
lastLocatedBlock as null in the response of getBlockLocations. Router does not 
null-check the returned lastLocatedBlock; instead, it uses it to get the 
block pool id directly.
   An empty trg file should be allowed in Router concat, since this case 
is supported by the NameNode's concat.
   This PR fixes this NPE.
   
   ### How was this patch tested?
   
![image](https://github.com/apache/hadoop/assets/40593494/23a46672-cd3a-4a54-8f4d-9c833b2d560c)
   
   
   ### For code changes:
   If the lastLocatedBlock returned from getBlockLocations is null in Router 
concat, it will not be used to get the block pool id.
   In this case, the block pool id check for trg is deferred: concat continues 
to get and check the block pool ids of the files in src, and only checks those.
   The check of the trg block pool id is then covered by the following steps, 
i.e., getLocationForPath and the concat request forwarded to the NameNode.
   Exceptions will be thrown if the block pool id of trg does not match the 
block pool id of any file in src. A sketch of the null handling is shown below.
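   
   A sketch of the guard described above (variable and method names are assumptions based on this description, not the exact Router code):
   
   ```java
   // An empty trg file has no last located block, so only derive the
   // target block pool id when that block is actually present.
   LocatedBlock lastBlock = blockLocations.getLastLocatedBlock();
   String trgBlockPoolId = (lastBlock != null)
       ? lastBlock.getBlock().getBlockPoolId()
       : null;
   // When null, the pool id check is deferred to getLocationForPath and
   // to the concat request forwarded to the NameNode.
   ```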
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17458. Remove unnecessary BP lock in ReplicaMap. [hadoop]

2024-04-11 Thread via GitHub


hfutatzhanghb commented on PR #6717:
URL: https://github.com/apache/hadoop/pull/6717#issuecomment-2049700613

   > @hfutatzhanghb Thanks for your work. We should be careful about removing the BP 
lock here. Taking one of the changes as an example: before this PR it returned a 
definite value because the RW lock was held, but after this PR the result is 
uncertain. If another thread invokes `map.put` between `map.get` and the `return`, 
it will return null; but if `map.put` is invoked before them, it will return a 
`ReplicaInfo` object.
   > 
   > ```
   >   ReplicaInfo get(String bpid, long blockId) {
   > checkBlockPool(bpid);
   > - try (AutoCloseDataSetLock l = lockManager.readLock(LockLevel.BLOCK_POOl, bpid)) {
   > -   LightWeightResizableGSet<Block, ReplicaInfo> m = map.get(bpid);
   > -   return m != null ? m.get(new Block(blockId)) : null;
   > - }
   > + LightWeightResizableGSet<Block, ReplicaInfo> m = map.get(bpid);
   > + return m != null ? m.get(new Block(blockId)) : null;
   >   }
   > ```
   > 
   > I didn't traverse all invoker here, and not sure if it will involve some 
potential risk. FYI.
   
   Sir, thanks for your reply. Yes, we need to be very careful when modifying 
class ReplicaMap. In fact, I have checked the methods one by one, and I think we 
can push this PR forward after it has run stably in our production environment 
for a long time.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19129) ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836174#comment-17836174
 ] 

ASF GitHub Bot commented on HADOOP-19129:
-

steveloughran commented on code in PR #6676:
URL: https://github.com/apache/hadoop/pull/6676#discussion_r1561011696


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java:
##
@@ -102,11 +103,31 @@ public void testListPathWithValidListMaxResultsValues()
   setListMaxResults(listMaxResults);
   int expectedListResultsSize =
   listMaxResults > fileCount ? fileCount : listMaxResults;
-  Assertions.assertThat(listPath(directory.toString())).describedAs(
-  "AbfsClient.listPath result should contain %d items when "
-  + "listMaxResults is %d and directory contains %d items",
-  expectedListResultsSize, listMaxResults, fileCount)
-  .hasSize(expectedListResultsSize);
+
+  AbfsRestOperation op = getFileSystem().getAbfsClient().listPath(
+  directory.toString(), false, getListMaxResults(), null,
+  getTestTracingContext(getFileSystem(), true));
+
+  List<ListResultEntrySchema> list = op.getResult().getListResultSchema().paths();
+  String continuationToken = op.getResult().getResponseHeader(HttpHeaderConfigurations.X_MS_CONTINUATION);
+
+  if (continuationToken == null) {
+// Listing is complete and number of objects should be same as expected
+Assertions.assertThat(list)
+.describedAs("AbfsClient.listPath() should return %d items"
++ " when listMaxResults is %d, directory contains %d items and 
"
++ "listing is complete",
+expectedListResultsSize, listMaxResults, fileCount)
+.hasSize(expectedListResultsSize);
+  } else {
+// Listing is incomplete and number of objects can be lesser than 
expected
+Assertions.assertThat(list)
+.describedAs("AbfsClient.listPath() should return %d items"
++ " or lesser when listMaxResults is %d,  directory contains"

Review Comment:
   nit: "less"



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java:
##
@@ -102,11 +103,31 @@ public void testListPathWithValidListMaxResultsValues()
   setListMaxResults(listMaxResults);
   int expectedListResultsSize =
   listMaxResults > fileCount ? fileCount : listMaxResults;
-  Assertions.assertThat(listPath(directory.toString())).describedAs(
-  "AbfsClient.listPath result should contain %d items when "
-  + "listMaxResults is %d and directory contains %d items",
-  expectedListResultsSize, listMaxResults, fileCount)
-  .hasSize(expectedListResultsSize);
+
+  AbfsRestOperation op = getFileSystem().getAbfsClient().listPath(
+  directory.toString(), false, getListMaxResults(), null,
+  getTestTracingContext(getFileSystem(), true));
+
+  List<ListResultEntrySchema> list = op.getResult().getListResultSchema().paths();
+  String continuationToken = op.getResult().getResponseHeader(HttpHeaderConfigurations.X_MS_CONTINUATION);
+
+  if (continuationToken == null) {
+// Listing is complete and number of objects should be same as expected
+Assertions.assertThat(list)
+.describedAs("AbfsClient.listPath() should return %d items"
++ " when listMaxResults is %d, directory contains %d items and 
"
++ "listing is complete",
+expectedListResultsSize, listMaxResults, fileCount)
+.hasSize(expectedListResultsSize);
+  } else {
+// Listing is incomplete and number of objects can be lesser than 
expected

Review Comment:
   nit: "less"



##
hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md:
##
@@ -634,6 +631,8 @@ kept out of the source tree then referenced through an 
XInclude element:
New files created in folder accountSettings is listed in .gitignore to
prevent accidental cred leaks.
 
+You are all set to run the test srcipt.

Review Comment:
   typo



##
hadoop-tools/hadoop-azure/src/site/markdown/index.md:
##
@@ -18,8 +18,8 @@
 
 See also:
 
-* [ABFS](./abfs.html)
-* [Testing](./testing_azure.html)
+* [ABFS](./abfs.md)

Review Comment:
   -1. when site docs are generated *.md is mapped to *.html.



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java:
##
@@ -532,4 +528,36 @@ protected long assertAbfsStatistics(AbfsStatistic 
statistic,
 (long) metricMap.get(statistic.getStatName()));
 return expectedValue;
   }
+
+  protected void assumeValidTestConfigPresent(final Configuration conf, final 
String key) {
+String configuredValue = conf.get(key);

Review Comment:
   tip: if you set the default to "" then the line below can just check for the 
string not being empty

Re: [PR] HADOOP-19129: [ABFS] Test Fixes and Test Script Bug Fixes [hadoop]

2024-04-11 Thread via GitHub


steveloughran commented on code in PR #6676:
URL: https://github.com/apache/hadoop/pull/6676#discussion_r1561011696


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java:
##
@@ -102,11 +103,31 @@ public void testListPathWithValidListMaxResultsValues()
   setListMaxResults(listMaxResults);
   int expectedListResultsSize =
   listMaxResults > fileCount ? fileCount : listMaxResults;
-  Assertions.assertThat(listPath(directory.toString())).describedAs(
-  "AbfsClient.listPath result should contain %d items when "
-  + "listMaxResults is %d and directory contains %d items",
-  expectedListResultsSize, listMaxResults, fileCount)
-  .hasSize(expectedListResultsSize);
+
+  AbfsRestOperation op = getFileSystem().getAbfsClient().listPath(
+  directory.toString(), false, getListMaxResults(), null,
+  getTestTracingContext(getFileSystem(), true));
+
+  List<ListResultEntrySchema> list = op.getResult().getListResultSchema().paths();
+  String continuationToken = op.getResult().getResponseHeader(HttpHeaderConfigurations.X_MS_CONTINUATION);
+
+  if (continuationToken == null) {
+// Listing is complete and number of objects should be same as expected
+Assertions.assertThat(list)
+.describedAs("AbfsClient.listPath() should return %d items"
++ " when listMaxResults is %d, directory contains %d items and 
"
++ "listing is complete",
+expectedListResultsSize, listMaxResults, fileCount)
+.hasSize(expectedListResultsSize);
+  } else {
+// Listing is incomplete and number of objects can be lesser than 
expected
+Assertions.assertThat(list)
+.describedAs("AbfsClient.listPath() should return %d items"
++ " or lesser when listMaxResults is %d,  directory contains"

Review Comment:
   nit: "less"



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsClient.java:
##
@@ -102,11 +103,31 @@ public void testListPathWithValidListMaxResultsValues()
   setListMaxResults(listMaxResults);
   int expectedListResultsSize =
   listMaxResults > fileCount ? fileCount : listMaxResults;
-  Assertions.assertThat(listPath(directory.toString())).describedAs(
-  "AbfsClient.listPath result should contain %d items when "
-  + "listMaxResults is %d and directory contains %d items",
-  expectedListResultsSize, listMaxResults, fileCount)
-  .hasSize(expectedListResultsSize);
+
+  AbfsRestOperation op = getFileSystem().getAbfsClient().listPath(
+  directory.toString(), false, getListMaxResults(), null,
+  getTestTracingContext(getFileSystem(), true));
+
+  List<ListResultEntrySchema> list = op.getResult().getListResultSchema().paths();
+  String continuationToken = op.getResult().getResponseHeader(HttpHeaderConfigurations.X_MS_CONTINUATION);
+
+  if (continuationToken == null) {
+// Listing is complete and number of objects should be same as expected
+Assertions.assertThat(list)
+.describedAs("AbfsClient.listPath() should return %d items"
++ " when listMaxResults is %d, directory contains %d items and 
"
++ "listing is complete",
+expectedListResultsSize, listMaxResults, fileCount)
+.hasSize(expectedListResultsSize);
+  } else {
+// Listing is incomplete and number of objects can be lesser than expected

Review Comment:
   nit: "less"



##
hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md:
##
@@ -634,6 +631,8 @@ kept out of the source tree then referenced through an XInclude element:
New files created in folder accountSettings is listed in .gitignore to
prevent accidental cred leaks.
 
+You are all set to run the test srcipt.

Review Comment:
   typo



##
hadoop-tools/hadoop-azure/src/site/markdown/index.md:
##
@@ -18,8 +18,8 @@
 
 See also:
 
-* [ABFS](./abfs.html)
-* [Testing](./testing_azure.html)
+* [ABFS](./abfs.md)

Review Comment:
   -1. when site docs are generated *.md is mapped to *.html.



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java:
##
@@ -532,4 +528,36 @@ protected long assertAbfsStatistics(AbfsStatistic statistic,
 (long) metricMap.get(statistic.getStatName()));
 return expectedValue;
   }
+
+  protected void assumeValidTestConfigPresent(final Configuration conf, final String key) {
+    String configuredValue = conf.get(key);

Review Comment:
   tip: if you set the default to "" then the line below can just check for the string not being empty
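
   A minimal sketch of that tip, assuming JUnit 4's `Assume` and Hadoop's `Configuration`; the wrapper class and message text are illustrative, not the committed helper:

   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.junit.Assume;

   public class ConfigAssumptionSketch {
     // With "" as the default, a single emptiness check covers both a missing
     // key and a key configured to a blank value.
     protected void assumeValidTestConfigPresent(final Configuration conf,
         final String key) {
       String configuredValue = conf.get(key, "");
       Assume.assumeFalse("Required test config '" + key + "' is absent or empty",
           configuredValue.trim().isEmpty());
     }
   }
   ```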



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-m

[jira] [Resolved] (HADOOP-19096) [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic

2024-04-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-19096.
-
Fix Version/s: 3.5.0
   Resolution: Fixed

> [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic
> --
>
> Key: HADOOP-19096
> URL: https://issues.apache.org/jira/browse/HADOOP-19096
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0, 3.4.1
>
>
> ABFS has a client-side throttling mechanism which works on the metrics 
> collected from past requests made. If requests are failing due to 
> throttling at the server, we update our metrics, and the client-side 
> backoff is calculated based on those metrics.
> This PR enhances the logic to decide which requests should be considered to 
> compute client side backoff interval as follows:
> For each request made by ABFS driver, we will determine if they should 
> contribute to Client-Side Throttling based on the status code and result:
>  # Status code in 2xx range: Successful Operations should contribute.
>  # Status code in 3xx range: Redirection Operations should not contribute.
>  # Status code in 4xx range: User Errors should not contribute.
>  # Status code is 503: Throttling Error should contribute only if they are 
> due to client limits breach as follows:
>  ## 503, Ingress Over Account Limit: Should Contribute
>  ## 503, Egress Over Account Limit: Should Contribute
>  ## 503, TPS Over Account Limit: Should Contribute
>  ## 503, Other Server Throttling: Should not Contribute.
>  # Status code in 5xx range other than 503: Should not Contribute.
>  # IOException and UnknownHostExceptions: Should not Contribute.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19096) [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836168#comment-17836168
 ] 

ASF GitHub Bot commented on HADOOP-19096:
-

steveloughran merged PR #6720:
URL: https://github.com/apache/hadoop/pull/6720




> [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic
> --
>
> Key: HADOOP-19096
> URL: https://issues.apache.org/jira/browse/HADOOP-19096
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> ABFS has a client-side throttling mechanism which works on the metrics 
> collected from past requests made. If requests are failing due to 
> throttling at the server, we update our metrics, and the client-side 
> backoff is calculated based on those metrics.
> This PR enhances the logic to decide which requests should be considered to 
> compute client side backoff interval as follows:
> For each request made by ABFS driver, we will determine if they should 
> contribute to Client-Side Throttling based on the status code and result:
>  # Status code in 2xx range: Successful Operations should contribute.
>  # Status code in 3xx range: Redirection Operations should not contribute.
>  # Status code in 4xx range: User Errors should not contribute.
>  # Status code is 503: Throttling Error should contribute only if they are 
> due to client limits breach as follows:
>  ## 503, Ingress Over Account Limit: Should Contribute
>  ## 503, Egress Over Account Limit: Should Contribute
>  ## 503, TPS Over Account Limit: Should Contribute
>  ## 503, Other Server Throttling: Should not Contribute.
>  # Status code in 5xx range other than 503: Should not Contribute.
>  # IOException and UnknownHostExceptions: Should not Contribute.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19096) [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836169#comment-17836169
 ] 

ASF GitHub Bot commented on HADOOP-19096:
-

steveloughran commented on PR #6720:
URL: https://github.com/apache/hadoop/pull/6720#issuecomment-2049664824

   +1, merged to 3.4.




> [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic
> --
>
> Key: HADOOP-19096
> URL: https://issues.apache.org/jira/browse/HADOOP-19096
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> ABFS has a client-side throttling mechanism which works on the metrics 
> collected from past requests made. If requests are failing due to 
> throttling at the server, we update our metrics, and the client-side 
> backoff is calculated based on those metrics.
> This PR enhances the logic to decide which requests should be considered to 
> compute client side backoff interval as follows:
> For each request made by ABFS driver, we will determine if they should 
> contribute to Client-Side Throttling based on the status code and result:
>  # Status code in 2xx range: Successful Operations should contribute.
>  # Status code in 3xx range: Redirection Operations should not contribute.
>  # Status code in 4xx range: User Errors should not contribute.
>  # Status code is 503: Throttling Error should contribute only if they are 
> due to client limits breach as follows:
>  ## 503, Ingress Over Account Limit: Should Contribute
>  ## 503, Egress Over Account Limit: Should Contribute
>  ## 503, TPS Over Account Limit: Should Contribute
>  ## 503, Other Server Throttling: Should not Contribute.
>  # Status code in 5xx range other than 503: Should not Contribute.
>  # IOException and UnknownHostExceptions: Should not Contribute.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19096: [Backport to 3.4][ABFS] [CST Optimization] Enhancing Client-Side Throttling Metrics Updating Logic [hadoop]

2024-04-11 Thread via GitHub


steveloughran commented on PR #6720:
URL: https://github.com/apache/hadoop/pull/6720#issuecomment-2049664824

   +1, merged to 3.4.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19096: [Backport to 3.4][ABFS] [CST Optimization] Enhancing Client-Side Throttling Metrics Updating Logic [hadoop]

2024-04-11 Thread via GitHub


steveloughran merged PR #6720:
URL: https://github.com/apache/hadoop/pull/6720


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19105) S3A: Recover from Vector IO read failures

2024-04-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19105:

Environment: 
s3a vector IO doesn't try to recover from read failures the way read() does.

Need to
* abort HTTP stream if considered needed
* retry active read which failed
* but not those which had succeeded

On a full failure we need to do something about any allocated buffer, which 
means we really need the buffer pool {{ByteBufferPool}} to return or also 
provide a "release" (ByteBuffer -> void) call which does the return. We would 
need to
* add this as a new api with the implementations in s3a, local, rawlocal
* classic single allocator method remaps to the new one with (() -> null) as 
the response

This keeps the public API stable
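
A hedged sketch of the API shape this implies; the interface, range type, and method names below are hypothetical stand-ins, not committed Hadoop signatures:

{code:java}
import java.nio.ByteBuffer;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.IntFunction;

interface VectoredReadsSketch {
  // Stand-in for Hadoop's file-range type.
  final class Range {
    final long offset;
    final int length;
    Range(long offset, int length) { this.offset = offset; this.length = length; }
  }

  // New variant: the caller supplies a release callback alongside the
  // allocator, so buffers taken for reads that later fail can be returned.
  void readVectored(List<Range> ranges,
                    IntFunction<ByteBuffer> allocate,
                    Consumer<ByteBuffer> release);

  // Classic single-allocator method remaps to the new one with a no-op
  // release, keeping the existing public API stable.
  default void readVectored(List<Range> ranges, IntFunction<ByteBuffer> allocate) {
    readVectored(ranges, allocate, buffer -> { });
  }
}
{code}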



  was:
s3a vector IO doesn't try to recover from read failures the way read() does.

Need to
* abort HTTP stream if considered needed
* retry active read which failed
* but not those which had succeeded




> S3A: Recover from Vector IO read failures
> -
>
> Key: HADOOP-19105
> URL: https://issues.apache.org/jira/browse/HADOOP-19105
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0, 3.3.6
> Environment: s3a vector IO doesn't try to recover from read failures 
> the way read() does.
> Need to
> * abort HTTP stream if considered needed
> * retry active read which failed
> * but not those which had succeeded
> On a full failure we need to do something about any allocated buffer, which 
> means we really need the buffer pool {{ByteBufferPool}} to return or also 
> provide a "release" (ByteBuffer -> void) call which does the return. We 
> would need to
> * add this as a new api with the implementations in s3a, local, rawlocal
> * classic single allocator method remaps to the new one with (() -> null) as 
> the response
> This keeps the public API stable
>Reporter: Steve Loughran
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19096) [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836145#comment-17836145
 ] 

ASF GitHub Bot commented on HADOOP-19096:
-

hadoop-yetus commented on PR #6720:
URL: https://github.com/apache/hadoop/pull/6720#issuecomment-2049551471

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 22s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ branch-3.4 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 18s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  shadedclient  |  19m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 10s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 59s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  83m 17s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6720/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6720 |
   | JIRA Issue | HADOOP-19096 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 078e88589ba9 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.4 / e2d5562134e3d5eadbc83952359dddfc321a8c92 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6720/1/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6720/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This

Re: [PR] HADOOP-19096: [Backport to 3.4][ABFS] [CST Optimization] Enhancing Client-Side Throttling Metrics Updating Logic [hadoop]

2024-04-11 Thread via GitHub


hadoop-yetus commented on PR #6720:
URL: https://github.com/apache/hadoop/pull/6720#issuecomment-2049551471

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 22s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ branch-3.4 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 18s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  shadedclient  |  19m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 10s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 59s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  83m 17s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6720/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6720 |
   | JIRA Issue | HADOOP-19096 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 078e88589ba9 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.4 / e2d5562134e3d5eadbc83952359dddfc321a8c92 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6720/1/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6720/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr

[PR] HDFS-17461. Fix spotbugs in PeerCache#getInternal [hadoop]

2024-04-11 Thread via GitHub


haiyang1987 opened a new pull request, #6721:
URL: https://github.com/apache/hadoop/pull/6721

   ### Description of PR
   https://issues.apache.org/jira/browse/HDFS-17461
   
   Fix spotbugs in PeerCache#getInternal
   
   Spotbugs warnings:
   
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
   
   `private final LinkedListMultimap<Key, Value> multimap = LinkedListMultimap.create();`
   Returns a collection view containing the values associated with key in this 
multimap, if any. Note that even when containsKey(key) is false, get(key) 
still returns an empty collection, not null.
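
   A small illustration of that Guava behaviour (assuming Guava on the classpath; the class name is just for the demo):

   ```java
   import com.google.common.collect.LinkedListMultimap;
   import java.util.List;

   public class MultimapGetSketch {
     public static void main(String[] args) {
       LinkedListMultimap<String, Integer> multimap = LinkedListMultimap.create();
       List<Integer> values = multimap.get("missing-key");
       // get() never returns null, so a null check is dead code:
       System.out.println(values == null);    // false
       // emptiness is the meaningful check:
       System.out.println(values.isEmpty());  // true
     }
   }
   ```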
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17455. Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt [hadoop]

2024-04-11 Thread via GitHub


haiyang1987 commented on PR #6710:
URL: https://github.com/apache/hadoop/pull/6710#issuecomment-2049510136

   Thanks @ZanderXu @Hexiaoqiao for your review and merge it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19129) ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836129#comment-17836129
 ] 

ASF GitHub Bot commented on HADOOP-19129:
-

anujmodi2021 commented on code in PR #6676:
URL: https://github.com/apache/hadoop/pull/6676#discussion_r1560869308


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java:
##
@@ -532,4 +528,36 @@ protected long assertAbfsStatistics(AbfsStatistic statistic,
 (long) metricMap.get(statistic.getStatName()));
 return expectedValue;
   }
+
+  protected void assumeValidTestConfigPresent(final Configuration conf, final String key) {
+    String configuredValue = conf.get(key);

Review Comment:
   Makes sense.
   Taken



##
hadoop-tools/hadoop-azure/dev-support/testrun-scripts/testsupport.sh:
##
@@ -21,8 +21,13 @@ combtestfile=$resourceDir
 combtestfile+=abfs-combination-test-configs.xml
 logdir=dev-support/testlogs/
 
-testresultsregex="Results:(\n|.)*?Tests run:"
+testresultsregex="Tests run: [0-9]+, Failures: [0-9]+, Errors: [0-9]+, 
Skipped: [0-9]+$"
+failedTestRegex1="<<< FAILURE!$"
+failedTestRegex2="<<< ERROR!$"
+removeFormattingRegex="s/\x1b\[[0-9;]*m//g"

Review Comment:
   Sure, Taken





> ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite
> 
>
> Key: HADOOP-19129
> URL: https://issues.apache.org/jira/browse/HADOOP-19129
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Test Script used by ABFS to validate changes has following two issues:
>  # When there are a lot of test failures or when error message of any failing 
> test becomes very large, the regex used today to filter test results does not 
> work as expected and fails to report all the failing tests.
> To resolve this, we have come up with new regex that will only target one 
> line test names for reporting them into aggregated test results.
>  # While running the test suite for different combinations of Auth type and 
> account type, we add the combination specific configs first and then include 
> the account specific configs in core-site.xml file. This will override the 
> combination specific configs like auth type if the same config is present in 
> account specific config file. To avoid this, we will first include the 
> account specific configs and then add the combination specific configs.
> Due to above bug in test script, some test failures in ABFS were not getting 
> our attention. This PR also targets to resolve them. Following are the tests 
> fixed:
>  # ITestAzureBlobFileSystemAppend.testCloseOfDataBlockOnAppendComplete(): It 
> was failing only when append blobs were enabled. In case of append blobs we 
> were not closing the active block on outputstream.close() due to which 
> block.close() was not getting called and assertions around it were failing. 
> Fixed by updating the production code to close the active block on flush.
>  # ITestAzureBlobFileSystemAuthorization: Tests in this class work with an 
> existing remote filesystem instead of creating a new file system instance. 
> For this they require a file system configured in account settings using the 
> following config: "fs.contract.test.fs.abfs". Tests were failing with NPE 
> when this config was not present. Updated code to skip this test if the 
> required config is not present.
>  # ITestAbfsClient.testListPathWithValueGreaterThanServerMaximum(): Test was 
> failing intermittently only for HNS enabled accounts. Test wants to assert 
> that client.listPath() does not return more objects than what is configured 
> in maxListResults. Assertions should be that the number of objects returned 
> could be less than expected, as the server might end up returning even fewer 
> due to partition splits along with a continuation token.
>  # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsTrue(): Fails 
> when "fs.azure.test.namespace.enabled" config is missing. Ignore the test if 
> config is missing.
>  # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsFalse(): 
> Fails when "fs.azure.test.namespace.enabled" config is missing. Ignore the 
> test if config is missing.
>  # ITestGetNameSpaceEnabled.testNonXNSAccount(): Fails when 
> "fs.azure.test.namespace.enabled" config is missing. Ignore the test if 
> config is missing.
>  # ITestAbfsStreamStatistics.testAbfsStreamOps: Fails when 
> "fs.azure.test.appendblob.enabled" is set to true. Test wanted to assert that 
> number of read operations can be more in case of append blobs as compared to 
> normal blob

Re: [PR] HADOOP-19129: [ABFS] Test Fixes and Test Script Bug Fixes [hadoop]

2024-04-11 Thread via GitHub


anujmodi2021 commented on code in PR #6676:
URL: https://github.com/apache/hadoop/pull/6676#discussion_r1560869308


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java:
##
@@ -532,4 +528,36 @@ protected long assertAbfsStatistics(AbfsStatistic statistic,
 (long) metricMap.get(statistic.getStatName()));
 return expectedValue;
   }
+
+  protected void assumeValidTestConfigPresent(final Configuration conf, final String key) {
+    String configuredValue = conf.get(key);

Review Comment:
   Makes sense.
   Taken



##
hadoop-tools/hadoop-azure/dev-support/testrun-scripts/testsupport.sh:
##
@@ -21,8 +21,13 @@ combtestfile=$resourceDir
 combtestfile+=abfs-combination-test-configs.xml
 logdir=dev-support/testlogs/
 
-testresultsregex="Results:(\n|.)*?Tests run:"
+testresultsregex="Tests run: [0-9]+, Failures: [0-9]+, Errors: [0-9]+, 
Skipped: [0-9]+$"
+failedTestRegex1="<<< FAILURE!$"
+failedTestRegex2="<<< ERROR!$"
+removeFormattingRegex="s/\x1b\[[0-9;]*m//g"

Review Comment:
   Sure, Taken



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17458. Remove unnecessary BP lock in ReplicaMap. [hadoop]

2024-04-11 Thread via GitHub


hadoop-yetus commented on PR #6717:
URL: https://github.com/apache/hadoop/pull/6717#issuecomment-2049462537

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 23s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 197m  0s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 26s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 292m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6717/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6717 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 159c60de77f8 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4acc5b4369d4e0528645386df1720ee3bb8cced3 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6717/3/testReport/ |
   | Max. process+thread count | 4388 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6717/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific 

[jira] [Commented] (HADOOP-19096) [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836110#comment-17836110
 ] 

ASF GitHub Bot commented on HADOOP-19096:
-

anujmodi2021 commented on PR #6720:
URL: https://github.com/apache/hadoop/pull/6720#issuecomment-2049412232

   @steveloughran 
   This is good to merge.
   The test failures here are known and fixed in PR: 
https://github.com/apache/hadoop/pull/6676




> [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic
> --
>
> Key: HADOOP-19096
> URL: https://issues.apache.org/jira/browse/HADOOP-19096
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> ABFS has a client-side throttling mechanism which works on the metrics 
> collected from past requests made. If requests are failing due to 
> throttling at the server, we update our metrics, and the client-side 
> backoff is calculated based on those metrics.
> This PR enhances the logic to decide which requests should be considered to 
> compute client side backoff interval as follows:
> For each request made by ABFS driver, we will determine if they should 
> contribute to Client-Side Throttling based on the status code and result:
>  # Status code in 2xx range: Successful Operations should contribute.
>  # Status code in 3xx range: Redirection Operations should not contribute.
>  # Status code in 4xx range: User Errors should not contribute.
>  # Status code is 503: Throttling Error should contribute only if they are 
> due to client limits breach as follows:
>  ## 503, Ingress Over Account Limit: Should Contribute
>  ## 503, Egress Over Account Limit: Should Contribute
>  ## 503, TPS Over Account Limit: Should Contribute
>  ## 503, Other Server Throttling: Should not Contribute.
>  # Status code in 5xx range other than 503: Should not Contribute.
>  # IOException and UnknownHostExceptions: Should not Contribute.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19096: [Backport to 3.4][ABFS] [CST Optimization] Enhancing Client-Side Throttling Metrics Updating Logic [hadoop]

2024-04-11 Thread via GitHub


anujmodi2021 commented on PR #6720:
URL: https://github.com/apache/hadoop/pull/6720#issuecomment-2049412232

   @steveloughran 
   This is good to merge.
   The test failures here are known and fixed in PR: 
https://github.com/apache/hadoop/pull/6676


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19096) [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836109#comment-17836109
 ] 

ASF GitHub Bot commented on HADOOP-19096:
-

anujmodi2021 commented on PR #6720:
URL: https://github.com/apache/hadoop/pull/6720#issuecomment-2049411445

   --
    AGGREGATED TEST RESULT 
   
   
   HNS-OAuth
   
   [ERROR] 
testListPathWithValueGreaterThanServerMaximum(org.apache.hadoop.fs.azurebfs.ITestAbfsClient)
  Time elapsed: 251.144 s  <<< FAILURE!
   
   [ERROR] 
test_120_terasort(org.apache.hadoop.fs.azurebfs.commit.ITestAbfsTerasort)  Time 
elapsed: 4.546 s  <<< ERROR!
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [ERROR] Tests run: 620, Failures: 1, Errors: 0, Skipped: 73
   [ERROR] Tests run: 380, Failures: 0, Errors: 1, Skipped: 55
   
   
   HNS-SharedKey
   
   [ERROR] 
testListPathWithValueGreaterThanServerMaximum(org.apache.hadoop.fs.azurebfs.ITestAbfsClient)
  Time elapsed: 244.141 s  <<< FAILURE!
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [ERROR] Tests run: 620, Failures: 1, Errors: 0, Skipped: 28
   [WARNING] Tests run: 380, Failures: 0, Errors: 0, Skipped: 41
   
   
   NonHNS-SharedKey
   
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 9
   [WARNING] Tests run: 604, Failures: 0, Errors: 0, Skipped: 269
   [WARNING] Tests run: 380, Failures: 0, Errors: 0, Skipped: 44
   
   
   AppendBlob-HNS-OAuth
   
   [ERROR] 
testCloseOfDataBlockOnAppendComplete(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemAppend)
  Time elapsed: 14.424 s  <<< FAILURE!
   [ERROR] 
testListPathWithValueGreaterThanServerMaximum(org.apache.hadoop.fs.azurebfs.ITestAbfsClient)
  Time elapsed: 234.055 s  <<< FAILURE!
   [ERROR] 
testAbfsStreamOps(org.apache.hadoop.fs.azurebfs.ITestAbfsStreamStatistics)  
Time elapsed: 6.224 s  <<< FAILURE!
   
   [ERROR] 
testExpect100ContinueFailureInAppend(org.apache.hadoop.fs.azurebfs.services.ITestAbfsOutputStream)
  Time elapsed: 9.868 s  <<< ERROR!
   [ERROR] 
testAppendWithChecksumAtDifferentOffsets(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemChecksum)
  Time elapsed: 15.112 s  <<< ERROR!
   [ERROR] 
testTwoWritersCreateAppendNoInfiniteLease(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemLease)
  Time elapsed: 4.308 s  <<< ERROR!
   [ERROR] 
test_120_terasort(org.apache.hadoop.fs.azurebfs.commit.ITestAbfsTerasort)  Time 
elapsed: 4.584 s  <<< ERROR!
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [ERROR] Tests run: 620, Failures: 2, Errors: 3, Skipped: 73
   [ERROR] Tests run: 380, Failures: 1, Errors: 1, Skipped: 79
   
   Time taken: 55 mins 56 secs.
   




> [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic
> --
>
> Key: HADOOP-19096
> URL: https://issues.apache.org/jira/browse/HADOOP-19096
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> ABFS has a client-side throttling mechanism which works on the metrics 
> collected from past requests made. If requests are failing due to 
> throttling at the server, we update our metrics, and the client-side 
> backoff is calculated based on those metrics.
> This PR enhances the logic to decide which requests should be considered to 
> compute client side backoff interval as follows:
> For each request made by ABFS driver, we will determine if they should 
> contribute to Client-Side Throttling based on the status code and result:
>  # Status code in 2xx range: Successful Operations should contribute.
>  # Status code in 3xx range: Redirection Operations should not contribute.
>  # Status code in 4xx range: User Errors should not contribute.
>  # Status code is 503: Throttling Error should contribute only if they are 
> due to client limits breach as follows:
>  ## 503, Ingress Over Account Limit: Should Contribute
>  ## 503, Egress Over Account Limit: Should Contribute
>  ## 503, TPS Over Account Limit: Should Contribute
>  ## 503, Other Server Throttling: Should not Contribute.
>  # Status code in 5xx range other than 503: Should not Contribute.

[jira] [Commented] (HADOOP-19096) [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836108#comment-17836108
 ] 

ASF GitHub Bot commented on HADOOP-19096:
-

anujmodi2021 opened a new pull request, #6720:
URL: https://github.com/apache/hadoop/pull/6720

   Description of PR
   Jira: https://issues.apache.org/jira/browse/HADOOP-19096 
   Commit in trunk: 
https://github.com/apache/hadoop/commit/dbe2d612586d7b78d61ef64c706eccf7fbf6f35c
   
   ABFS has a client-side throttling mechanism which works on the metrics 
collected from past requests made. If requests are failing due to throttling 
at the server, we update our metrics, and the client-side backoff is 
calculated based on those metrics.
   
   This PR enhances the logic to decide which requests should be considered to 
compute client side backoff interval as follows:
   
   For each request made by ABFS driver, we will determine if they should 
contribute to Client-Side Throttling based on the status code and result:
   
   Status code in 2xx range: Successful Operations should contribute.
   Status code in 3xx range: Redirection Operations should not contribute.
   Status code in 4xx range: User Errors should not contribute.
   Status code is 503: Throttling Error should contribute only if they are due 
to client limits breach as follows:
   503, Ingress Over Account Limit: Should Contribute
   503, Egress Over Account Limit: Should Contribute
   503, TPS Over Account Limit: Should Contribute
   503, Other Server Throttling: Should not Contribute.
   Status code in 5xx range other than 503: Should not Contribute.
   IOException and UnknownHostExceptions: Should not Contribute.
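
   A hedged sketch of the classification above; the method name and error-code strings are hypothetical placeholders, not the actual ABFS constants:

   ```java
   public final class ThrottlingContributionSketch {

     // Decide whether a completed request should feed the client-side
     // throttling metrics, per the rules listed above.
     static boolean shouldContribute(int statusCode, String serverErrorCode) {
       if (statusCode >= 200 && statusCode < 300) {
         return true;                          // 2xx: successful operations
       }
       if (statusCode == 503 && serverErrorCode != null) {
         switch (serverErrorCode) {            // only client-limit breaches
           case "IngressOverAccountLimit":
           case "EgressOverAccountLimit":
           case "TpsOverAccountLimit":
             return true;
           default:
             return false;                     // other server-side throttling
         }
       }
       return false;   // 3xx, 4xx, non-503 5xx, and failed/unknown requests
     }
   }
   ```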




> [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic
> --
>
> Key: HADOOP-19096
> URL: https://issues.apache.org/jira/browse/HADOOP-19096
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> ABFS has a client-side throttling mechanism which works on the metrics 
> collected from past requests made. If requests are failing due to 
> throttling at the server, we update our metrics, and the client-side 
> backoff is calculated based on those metrics.
> This PR enhances the logic to decide which requests should be considered to 
> compute client side backoff interval as follows:
> For each request made by ABFS driver, we will determine if they should 
> contribute to Client-Side Throttling based on the status code and result:
>  # Status code in 2xx range: Successful Operations should contribute.
>  # Status code in 3xx range: Redirection Operations should not contribute.
>  # Status code in 4xx range: User Errors should not contribute.
>  # Status code is 503: Throttling Error should contribute only if they are 
> due to client limits breach as follows:
>  ## 503, Ingress Over Account Limit: Should Contribute
>  ## 503, Egress Over Account Limit: Should Contribute
>  ## 503, TPS Over Account Limit: Should Contribute
>  ## 503, Other Server Throttling: Should not Contribute.
>  # Status code in 5xx range other than 503: Should not Contribute.
>  # IOException and UnknownHostExceptions: Should not Contribute.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19096: [Backport to 3.4][ABFS] [CST Optimization] Enhancing Client-Side Throttling Metrics Updating Logic [hadoop]

2024-04-11 Thread via GitHub


anujmodi2021 commented on PR #6720:
URL: https://github.com/apache/hadoop/pull/6720#issuecomment-2049411445

   --
    AGGREGATED TEST RESULT 
   
   
   HNS-OAuth
   
   [ERROR] 
testListPathWithValueGreaterThanServerMaximum(org.apache.hadoop.fs.azurebfs.ITestAbfsClient)
  Time elapsed: 251.144 s  <<< FAILURE!
   
   [ERROR] 
test_120_terasort(org.apache.hadoop.fs.azurebfs.commit.ITestAbfsTerasort)  Time 
elapsed: 4.546 s  <<< ERROR!
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [ERROR] Tests run: 620, Failures: 1, Errors: 0, Skipped: 73
   [ERROR] Tests run: 380, Failures: 0, Errors: 1, Skipped: 55
   
   
   HNS-SharedKey
   
   [ERROR] 
testListPathWithValueGreaterThanServerMaximum(org.apache.hadoop.fs.azurebfs.ITestAbfsClient)
  Time elapsed: 244.141 s  <<< FAILURE!
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [ERROR] Tests run: 620, Failures: 1, Errors: 0, Skipped: 28
   [WARNING] Tests run: 380, Failures: 0, Errors: 0, Skipped: 41
   
   
   NonHNS-SharedKey
   
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 9
   [WARNING] Tests run: 604, Failures: 0, Errors: 0, Skipped: 269
   [WARNING] Tests run: 380, Failures: 0, Errors: 0, Skipped: 44
   
   
   AppendBlob-HNS-OAuth
   
   [ERROR] 
testCloseOfDataBlockOnAppendComplete(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemAppend)
  Time elapsed: 14.424 s  <<< FAILURE!
   [ERROR] 
testListPathWithValueGreaterThanServerMaximum(org.apache.hadoop.fs.azurebfs.ITestAbfsClient)
  Time elapsed: 234.055 s  <<< FAILURE!
   [ERROR] 
testAbfsStreamOps(org.apache.hadoop.fs.azurebfs.ITestAbfsStreamStatistics)  
Time elapsed: 6.224 s  <<< FAILURE!
   
   [ERROR] 
testExpect100ContinueFailureInAppend(org.apache.hadoop.fs.azurebfs.services.ITestAbfsOutputStream)
  Time elapsed: 9.868 s  <<< ERROR!
   [ERROR] 
testAppendWithChecksumAtDifferentOffsets(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemChecksum)
  Time elapsed: 15.112 s  <<< ERROR!
   [ERROR] 
testTwoWritersCreateAppendNoInfiniteLease(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemLease)
  Time elapsed: 4.308 s  <<< ERROR!
   [ERROR] 
test_120_terasort(org.apache.hadoop.fs.azurebfs.commit.ITestAbfsTerasort)  Time 
elapsed: 4.584 s  <<< ERROR!
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [ERROR] Tests run: 620, Failures: 2, Errors: 3, Skipped: 73
   [ERROR] Tests run: 380, Failures: 1, Errors: 1, Skipped: 79
   
   Time taken: 55 mins 56 secs.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HADOOP-19096: [Backport to 3.4][ABFS] [CST Optimization] Enhancing Client-Side Throttling Metrics Updating Logic [hadoop]

2024-04-11 Thread via GitHub


anujmodi2021 opened a new pull request, #6720:
URL: https://github.com/apache/hadoop/pull/6720

   Description of PR
   Jira: https://issues.apache.org/jira/browse/HADOOP-19096 
   Commit in trunk: 
https://github.com/apache/hadoop/commit/dbe2d612586d7b78d61ef64c706eccf7fbf6f35c
   
   ABFS has a client-side throttling mechanism which works on the metrics 
collected from past requests made. If requests are failing due to throttling 
at the server, we update our metrics, and the client-side backoff is 
calculated based on those metrics.
   
   This PR enhances the logic to decide which requests should be considered to 
compute client side backoff interval as follows:
   
   For each request made by ABFS driver, we will determine if they should 
contribute to Client-Side Throttling based on the status code and result:
   
   Status code in 2xx range: Successful Operations should contribute.
   Status code in 3xx range: Redirection Operations should not contribute.
   Status code in 4xx range: User Errors should not contribute.
   Status code is 503: Throttling Error should contribute only if they are due 
to client limits breach as follows:
   503, Ingress Over Account Limit: Should Contribute
   503, Egress Over Account Limit: Should Contribute
   503, TPS Over Account Limit: Should Contribute
   503, Other Server Throttling: Should not Contribute.
   Status code in 5xx range other than 503: Should not Contribute.
   IOException and UnknownHostExceptions: Should not Contribute.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17458. Remove unnecessary BP lock in ReplicaMap. [hadoop]

2024-04-11 Thread via GitHub


Hexiaoqiao commented on PR #6717:
URL: https://github.com/apache/hadoop/pull/6717#issuecomment-2049395174

   @hfutatzhanghb Thanks for your work. We should be careful about removing the 
BP lock here. Taking one of the changes as an example: before this PR it 
returned a definite value because the RW lock was held, but after this PR the 
result is uncertain. For instance, if another thread invokes `map.put` between 
`map.get` and the `return`, it will return null, but if `map.put` is invoked 
before them, it will return a `ReplicaInfo` object.
   
   ```
  ReplicaInfo get(String bpid, long blockId) {
    checkBlockPool(bpid);
    - try (AutoCloseDataSetLock l = lockManager.readLock(LockLevel.BLOCK_POOl, bpid)) {
    -   LightWeightResizableGSet<Block, ReplicaInfo> m = map.get(bpid);
    -   return m != null ? m.get(new Block(blockId)) : null;
    - }
    + LightWeightResizableGSet<Block, ReplicaInfo> m = map.get(bpid);
    + return m != null ? m.get(new Block(blockId)) : null;
  }
   ```
   
   I haven't traversed all the invokers here, so I'm not sure whether this 
involves some potential risk. FYI.
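
   A hedged, self-contained illustration of that interleaving (illustrative types, not the DataNode code):

   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;

   public class LocklessGetRaceSketch {
     private static final Map<String, Map<Long, String>> map = new ConcurrentHashMap<>();

     // Mirrors the post-PR get(): no block-pool lock around the two-step lookup.
     static String get(String bpid, long blockId) {
       Map<Long, String> m = map.get(bpid);
       return m != null ? m.get(blockId) : null;   // may or may not see a racing put
     }

     public static void main(String[] args) throws InterruptedException {
       Thread writer = new Thread(() ->
           map.computeIfAbsent("bp1", k -> new ConcurrentHashMap<>()).put(42L, "replica"));
       Thread reader = new Thread(() ->
           System.out.println("reader saw: " + get("bp1", 42L)));  // null or "replica"
       writer.start();
       reader.start();
       writer.join();
       reader.join();
     }
   }
   ```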


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17455. Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt [hadoop]

2024-04-11 Thread via GitHub


Hexiaoqiao commented on PR #6710:
URL: https://github.com/apache/hadoop/pull/6710#issuecomment-2049353263

   Committed to trunk. Thanks @haiyang1987 and @ZanderXu .


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17455. Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt [hadoop]

2024-04-11 Thread via GitHub


Hexiaoqiao merged PR #6710:
URL: https://github.com/apache/hadoop/pull/6710


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19139.No GetPathStatus for opening AbfsInputStream [hadoop]

2024-04-11 Thread via GitHub


hadoop-yetus commented on PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#issuecomment-2049341199

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 13 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  41m 23s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/20/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 28 new + 17 unchanged - 0 
fixed = 45 total (was 17)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m  4s | 
[/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/20/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  39m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 23s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 147m 33s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.fileStatusInformationPresent;
 locked 66% of time  Unsynchronized access at AbfsInputStream.java:66% of time  
Unsynchronized access at AbfsInputStream.java:[line 691] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/20/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6699 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8fdf319ded11 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b1fd4436ebdbcd3260d52f84be212e0f4685ccaa |
   | Default Java | Private Build-1.8
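As background on the SpotBugs finding in the report above: the "inconsistent 
synchronization" detector fires when a field is accessed with a lock held in 
most places (here, 66% of the time) but without it elsewhere. A minimal 
illustration of the pattern (hypothetical class and field, not the actual 
AbfsInputStream code), with the usual remedies noted:

```java
// Hypothetical illustration of the "inconsistent synchronization" pattern
// flagged above; not the actual AbfsInputStream code.
class StatusHolder {
  private boolean fileStatusInformationPresent;

  // Synchronized write: SpotBugs counts this access as "locked".
  synchronized void markPresent() {
    fileStatusInformationPresent = true;
  }

  // Unsynchronized read: this access trips the detector, since without a
  // happens-before edge the reader may never observe the writer's update.
  boolean isPresent() {
    return fileStatusInformationPresent;
  }
}
// Usual fixes: synchronize every access to the field, or declare it
// volatile when each access is an independent read or write (no
// check-then-act sequence that needs atomicity).
```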


Re: [PR] HADOOP-19139.No GetPathStatus for opening AbfsInputStream [hadoop]

2024-04-11 Thread via GitHub


hadoop-yetus commented on PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#issuecomment-2049242814

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 13 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 58s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/19/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 28 new + 17 unchanged - 0 
fixed = 45 total (was 17)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m  8s | 
[/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/19/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  34m 46s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 133m  3s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.fileStatusInformationPresent;
 locked 66% of time  Unsynchronized access at AbfsInputStream.java:66% of time  
Unsynchronized access at AbfsInputStream.java:[line 685] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/19/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6699 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9c082578e4ba 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 45a07962736c8c64ceaedf05d21c2a95c5799f21 |
   | Default Java | Private Build-1.8





[jira] [Commented] (HADOOP-19096) [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836071#comment-17836071
 ] 

ASF GitHub Bot commented on HADOOP-19096:
-

anujmodi2021 commented on PR #6276:
URL: https://github.com/apache/hadoop/pull/6276#issuecomment-2049233977

   > thanks, in trunk. @anmolanmol1234 can you do a pr and retest for 3.4? I'll 
rebase my work on this and see how it goes
   
   That would be me, not anmol.
   Will do a retest for 3.4.
   




> [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic
> --
>
> Key: HADOOP-19096
> URL: https://issues.apache.org/jira/browse/HADOOP-19096
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> ABFS has a client-side throttling mechanism which works on metrics collected 
> from past requests. If requests fail due to throttling at the server, we 
> update our metrics, and the client-side backoff is calculated based on those 
> metrics.
> This PR enhances the logic that decides which requests should be considered 
> when computing the client-side backoff interval, as follows (see the sketch 
> after this list):
> For each request made by the ABFS driver, we determine whether it should 
> contribute to Client-Side Throttling based on the status code and result:
>  # Status code in 2xx range: Successful Operations should contribute.
>  # Status code in 3xx range: Redirection Operations should not contribute.
>  # Status code in 4xx range: User Errors should not contribute.
>  # Status code is 503: Throttling Error should contribute only if they are 
> due to client limits breach as follows:
>  ## 503, Ingress Over Account Limit: Should Contribute
>  ## 503, Egress Over Account Limit: Should Contribute
>  ## 503, TPS Over Account Limit: Should Contribute
>  ## 503, Other Server Throttling: Should not Contribute.
>  # Status code in 5xx range other than 503: Should not Contribute.
>  # IOException and UnknownHostExceptions: Should not Contribute.
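As a rough illustration of the rules quoted above (hypothetical method and 
enum names; the actual ABFS implementation uses its own types and parses the 
service's error messages):

```java
// Hypothetical sketch of the contribution rules listed above; illustrative
// only, not the actual ABFS client code.
final class ThrottlingContribution {

  // The 503 sub-categories the description distinguishes.
  enum ServerError {
    INGRESS_OVER_ACCOUNT_LIMIT, EGRESS_OVER_ACCOUNT_LIMIT,
    TPS_OVER_ACCOUNT_LIMIT, OTHER_SERVER_THROTTLING, NONE
  }

  static boolean shouldContribute(int statusCode, ServerError error) {
    if (statusCode >= 200 && statusCode < 300) {
      return true;               // 2xx: successful operations contribute
    }
    if (statusCode == 503) {     // 503: only client-limit breaches contribute
      return error == ServerError.INGRESS_OVER_ACCOUNT_LIMIT
          || error == ServerError.EGRESS_OVER_ACCOUNT_LIMIT
          || error == ServerError.TPS_OVER_ACCOUNT_LIMIT;
    }
    // 3xx redirections, 4xx user errors, and other 5xx do not contribute;
    // IOException/UnknownHostException paths are excluded by the caller.
    return false;
  }
}
```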






[jira] [Commented] (HADOOP-18656) ABFS: Support for Pagination in Recursive Directory Delete

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836067#comment-17836067
 ] 

ASF GitHub Bot commented on HADOOP-18656:
-

anujmodi2021 commented on PR #6718:
URL: https://github.com/apache/hadoop/pull/6718#issuecomment-2049229760

   --
    AGGREGATED TEST RESULT 
   
   
   HNS-OAuth
   
   [ERROR] 
testListPathWithValueGreaterThanServerMaximum(org.apache.hadoop.fs.azurebfs.ITestAbfsClient)
  Time elapsed: 290.912 s  <<< FAILURE!
   
   [ERROR] 
test_120_terasort(org.apache.hadoop.fs.azurebfs.commit.ITestAbfsTerasort)  Time 
elapsed: 4.531 s  <<< ERROR!
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [ERROR] Tests run: 623, Failures: 1, Errors: 0, Skipped: 73
   [ERROR] Tests run: 380, Failures: 0, Errors: 1, Skipped: 55
   
   
   HNS-SharedKey
   
   [ERROR] 
testListPathWithValueGreaterThanServerMaximum(org.apache.hadoop.fs.azurebfs.ITestAbfsClient)
  Time elapsed: 237.663 s  <<< FAILURE!
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [ERROR] Tests run: 623, Failures: 1, Errors: 0, Skipped: 28
   [WARNING] Tests run: 380, Failures: 0, Errors: 0, Skipped: 41
   
   
   NonHNS-SharedKey
   
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 9
   [WARNING] Tests run: 607, Failures: 0, Errors: 0, Skipped: 269
   [WARNING] Tests run: 380, Failures: 0, Errors: 0, Skipped: 44
   
   
   AppendBlob-HNS-OAuth
   
   [ERROR] 
testCloseOfDataBlockOnAppendComplete(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemAppend)
  Time elapsed: 9.219 s  <<< FAILURE!
   [ERROR] 
testListPathWithValueGreaterThanServerMaximum(org.apache.hadoop.fs.azurebfs.ITestAbfsClient)
  Time elapsed: 226.54 s  <<< FAILURE!
   [ERROR] 
testAbfsStreamOps(org.apache.hadoop.fs.azurebfs.ITestAbfsStreamStatistics)  
Time elapsed: 5.942 s  <<< FAILURE!
   
   [ERROR] 
testExpect100ContinueFailureInAppend(org.apache.hadoop.fs.azurebfs.services.ITestAbfsOutputStream)
  Time elapsed: 5.002 s  <<< ERROR!
   [ERROR] 
testAppendWithChecksumAtDifferentOffsets(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemChecksum)
  Time elapsed: 6.037 s  <<< ERROR!
   [ERROR] 
testTwoWritersCreateAppendNoInfiniteLease(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemLease)
  Time elapsed: 3.717 s  <<< ERROR!
   [ERROR] 
test_120_terasort(org.apache.hadoop.fs.azurebfs.commit.ITestAbfsTerasort)  Time 
elapsed: 4.503 s  <<< ERROR!
   
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [ERROR] Tests run: 623, Failures: 2, Errors: 3, Skipped: 73
   [ERROR] Tests run: 380, Failures: 1, Errors: 1, Skipped: 79
   
   Time taken: 60 mins 25 secs.
   
   




> ABFS: Support for Pagination in Recursive Directory Delete 
> ---
>
> Key: HADOOP-18656
> URL: https://issues.apache.org/jira/browse/HADOOP-18656
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.5
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Today, when a recursive delete is issued for a large directory in an ADLS Gen2 
> (HNS) account, the directory deletion itself happens in O(1), but in the 
> backend, ACL checks are done recursively for each object inside that 
> directory, which for a large directory can lead to request timeouts. 
> Pagination has been introduced in the Azure Storage backend for these ACL 
> checks.
> More information on how pagination works can be found in the public 
> documentation of the [Azure Delete Path 
> API|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/delete?view=rest-storageservices-datalakestoragegen2-2019-12-12].
> This PR contains changes to support this from the client side. To trigger 
> pagination, the client needs to add a new query parameter "paginated" and set 
> it to true, along with recursive set to true. In return, if the directory is 
> large, the server may return a continuation token to the caller. If the 
> caller receives a continuation token, it has to call the delete API again 
> with the continuation token, again with recursive and paginated set to true 
> (see the sketch after this description). This is similar to the directory 
> delete of an FNS account.
> Pagination is availa
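A minimal sketch of the continuation-token loop described above (the helper 
method and its behavior are illustrative, not the actual ABFS client API):

```java
// Hypothetical sketch of the paginated recursive-delete loop described
// above; sendDeleteRequest is illustrative.
final class PaginatedDeleteSketch {

  // Stand-in for the Delete Path REST call issued with recursive=true and
  // the new paginated=true query parameter; returns the continuation token
  // from the response, or null once the delete has completed.
  static String sendDeleteRequest(String path, String continuation) {
    // ... DELETE {path}?recursive=true&paginated=true[&continuation=...]
    return null;
  }

  static void deleteRecursivePaginated(String path) {
    String continuation = null;
    do {
      // Re-issue the delete with the returned token until the server
      // stops handing one back.
      continuation = sendDeleteRequest(path, continuation);
    } while (continuation != null);
  }
}
```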
