[jira] [Commented] (HADOOP-19137) [ABFS]:Extra getAcl call while calling the very first API of FileSystem

2024-05-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845520#comment-17845520
 ] 

ASF GitHub Bot commented on HADOOP-19137:
-

hadoop-yetus commented on PR #6752:
URL: https://github.com/apache/hadoop/pull/6752#issuecomment-2105337986

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 02s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 00s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 00s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 00s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m 00s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m 00s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  | 108m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   6m 04s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   5m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   6m 00s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   5m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 176m 21s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  | 179m 19s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 07s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 01s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 46s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 186m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   6m 21s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 499m 09s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6752 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | MINGW64_NT-10.0-17763 ac1dad05904f 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / d4cdd7e513407a1b5c4af35063e1f4dea146f068 |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6752/5/testReport/
 |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6752/5/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> [ABFS]:Extra getAcl call while calling the very first API of FileSystem
> ---
>
> Key: HADOOP-19137
> URL: https://issues.apache.org/jira/browse/HADOOP-19137
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>
> The store doesn't flow the namespace information in to the client. 
> In https://github.com/apache/hadoop/pull/6221, getIsNamespaceEnabled is added 
> to the client methods; it checks whether the namespace information is present, 
> and if not, it makes a getAcl call and sets the field. Once the field is set, 
> it is reused by future getIsNamespaceEnabled calls for a given AbfsClient.
> Since CPK (both global and encryptionContext) is only for HNS accounts, the 
> proposed fix is to fail fs init if a CPK config is given for a non-HNS 
> account.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---

[jira] [Commented] (HADOOP-19167) Change of Codec configuration does not work

2024-05-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845511#comment-17845511
 ] 

ASF GitHub Bot commented on HADOOP-19167:
-

hadoop-yetus commented on PR #6807:
URL: https://github.com/apache/hadoop/pull/6807#issuecomment-2105295494

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 01s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 00s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 01s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 01s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m 00s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m 00s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  85m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  37m 46s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   4m 25s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   4m 14s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6807/2/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  javadoc  |   4m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 143m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  36m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  36m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 00s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 38s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   4m 30s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6807/2/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   4m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 149m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   5m 19s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 471m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6807 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | MINGW64_NT-10.0-17763 a388fdfcc139 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / cbd328a2a6aa1b7487409e6620451e91c0e0ae53 |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6807/2/testReport/
 |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6807/2/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Change of Codec configuration does not work
> ---
>
> Key: HADOOP-19167
> URL: https://issues.apache.org/jira/browse/HADOOP-19167
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: compress
>Reporter: Zhikai Hu
>Priority: Minor
>  Labels: pull-request-available
>
> In one of my projects, I need to dynamically adjust the compression level for 
> different files. 
> However, I found that in most cases the new compression level does not take 
> effect as expected; the old compression level continues to be used.
> Here is the relevant code snippet:
> ZStandardCodec zStandardCodec = new ZStandardCodec();
> zStandardCodec.setConf(conf);
> conf.set("io.compression.codec.zstd.level", "5"); // level may change 
> dynamically
> conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());

[jira] [Commented] (HADOOP-19156) ZooKeeper based state stores use different ZK address configs

2024-05-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845450#comment-17845450
 ] 

ASF GitHub Bot commented on HADOOP-19156:
-

hadoop-yetus commented on PR #6767:
URL: https://github.com/apache/hadoop/pull/6767#issuecomment-2105055678

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 21s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  17m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   7m 11s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   6m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   5m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +0 :ok: |  spotbugs  |   0m 34s |  |  
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no spotbugs output file 
(spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  38m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  18m 58s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  19m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 38s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   7m  5s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   6m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   5m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +0 :ok: |  spotbugs  |   0m 36s |  |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from spotbugs  |
   | +1 :green_heart: |  shadedclient  |  38m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  21m 11s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m  9s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 21s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   4m 16s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  unit  | 109m 52s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  unit  |  34m 10s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 34s |  |  hadoop-yarn-site in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 470m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6767/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6767 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint 
markdownlint |
   | uname | Linux 61aa43f30264 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15

[jira] [Commented] (HADOOP-19013) fs.getXattrs(path) for S3FS doesn't have x-amz-server-side-encryption-aws-kms-key-id header.

2024-05-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845404#comment-17845404
 ] 

ASF GitHub Bot commented on HADOOP-19013:
-

hadoop-yetus commented on PR #6646:
URL: https://github.com/apache/hadoop/pull/6646#issuecomment-2104842997

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 01s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 01s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 01s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 01s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m 01s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m 00s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  92m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 02s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   4m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 12s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   5m 12s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 151m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 05s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 01s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 11s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 164m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  22m 10s | 
[/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6646/3/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   6m 27s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 463m 54s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6646 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | MINGW64_NT-10.0-17763 ef1775384f33 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / cabd13a910c861153a9e699a0a85d504e55d7be7 |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6646/3/testReport/
 |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6646/3/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> fs.getXattrs(path) for S3FS doesn't have 
> x-amz-server-side-encryption-aws-kms-key-id header.
> 
>
> Key: HADOOP-19013
> URL: https://issues.apache.org/jira/browse/HADOOP-19013
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>
> Once a path has been encrypted during upload with SSE-KMS using a key id, and 
> we later read the attributes of the same file, they don't contain the key id 
> information as an attribute. Should we add it?
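
For illustration, a hedged caller-side sketch of what is being requested: reading the key id back through the XAttr API once the header is surfaced. The "header." prefix used for the XAttr name is an assumption about how S3A maps HTTP response headers, and the class name is illustrative only.

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PrintSseKmsKeyId {
  public static void main(String[] args) throws Exception {
    // e.g. args[0] = "s3a://bucket/encrypted-object"
    Path path = new Path(args[0]);
    FileSystem fs = path.getFileSystem(new Configuration());
    // Read the object's headers back as XAttrs and look for the SSE-KMS key id.
    Map<String, byte[]> attrs = fs.getXAttrs(path);
    byte[] keyId = attrs.get("header.x-amz-server-side-encryption-aws-kms-key-id");
    if (keyId != null) {
      System.out.println("SSE-KMS key id: " + new String(keyId, StandardCharsets.UTF_8));
    } else {
      System.out.println("key id header not present in XAttrs");
    }
  }
}
```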



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845399#comment-17845399
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

hadoop-yetus commented on PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#issuecomment-2104796915

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   8m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   8m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 38s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6803/3/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 9 new + 23 
unchanged - 1 fixed = 32 total (was 24)  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  1s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 138m 29s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6803/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6803 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 91ff72ba8905 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 18c4f3a6dfce2157f7829cc99ca406fa308505b5 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6803/3/testReport/ |
   | Max. process+thread count | 1275 (vs. ulimit

[jira] [Commented] (HADOOP-19168) Upgrade Kafka Clients due to CVEs

2024-05-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845303#comment-17845303
 ] 

ASF GitHub Bot commented on HADOOP-19168:
-

rohit-kb opened a new pull request, #6808:
URL: https://github.com/apache/hadoop/pull/6808

   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Upgrade Kafka Clients due to CVEs
> -
>
> Key: HADOOP-19168
> URL: https://issues.apache.org/jira/browse/HADOOP-19168
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Rohit Kumar
>Priority: Major
>
> Upgrade Kafka Clients due to CVEs
> CVE-2023-25194: Affected versions of this package are vulnerable to 
> Deserialization of Untrusted Data when there are gadgets on the 
> {{classpath}}. The server will connect to the attacker's LDAP server and 
> deserialize the LDAP response, which the attacker can use to execute Java 
> deserialization gadget chains on the Kafka Connect server.
> CVSS Score: 8.8 (High)
> [https://nvd.nist.gov/vuln/detail/CVE-2023-25194] 
> CVE-2021-38153
> CVE-2018-17196
> Insufficient Entropy
> [https://security.snyk.io/package/maven/org.apache.kafka:kafka-clients] 
> Upgrade Kafka-Clients to 3.4.0 or higher.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19168) Upgrade Kafka Clients due to CVEs

2024-05-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19168:

Labels: pull-request-available  (was: )

> Upgrade Kafka Clients due to CVEs
> -
>
> Key: HADOOP-19168
> URL: https://issues.apache.org/jira/browse/HADOOP-19168
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Rohit Kumar
>Priority: Major
>  Labels: pull-request-available
>
> Upgrade Kafka Clients due to CVEs
> CVE-2023-25194: Affected versions of this package are vulnerable to 
> Deserialization of Untrusted Data when there are gadgets on the 
> {{classpath}}. The server will connect to the attacker's LDAP server and 
> deserialize the LDAP response, which the attacker can use to execute Java 
> deserialization gadget chains on the Kafka Connect server.
> CVSS Score: 8.8 (High)
> [https://nvd.nist.gov/vuln/detail/CVE-2023-25194] 
> CVE-2021-38153
> CVE-2018-17196
> Insufficient Entropy
> [https://security.snyk.io/package/maven/org.apache.kafka:kafka-clients] 
> Upgrade Kafka-Clients to 3.4.0 or higher.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19168) Upgrade Kafka Clients due to CVEs

2024-05-10 Thread Rohit Kumar (Jira)
Rohit Kumar created HADOOP-19168:


 Summary: Upgrade Kafka Clients due to CVEs
 Key: HADOOP-19168
 URL: https://issues.apache.org/jira/browse/HADOOP-19168
 Project: Hadoop Common
  Issue Type: Task
Reporter: Rohit Kumar


Upgrade Kafka Clients due to CVEs

CVE-2023-25194: Affected versions of this package are vulnerable to 
Deserialization of Untrusted Data when there are gadgets on the 
{{classpath}}. The server will connect to the attacker's LDAP server and 
deserialize the LDAP response, which the attacker can use to execute Java 
deserialization gadget chains on the Kafka Connect server.
CVSS Score: 8.8 (High)
[https://nvd.nist.gov/vuln/detail/CVE-2023-25194] 

CVE-2021-38153

CVE-2018-17196

Insufficient Entropy

[https://security.snyk.io/package/maven/org.apache.kafka:kafka-clients] 

Upgrade Kafka-Clients to 3.4.0 or higher.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19156) ZooKeeper based state stores use different ZK address configs

2024-05-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845243#comment-17845243
 ] 

ASF GitHub Bot commented on HADOOP-19156:
-

hadoop-yetus commented on PR #6767:
URL: https://github.com/apache/hadoop/pull/6767#issuecomment-2104169307

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 05s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 01s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 01s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 01s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m 01s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m 00s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m 00s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 30s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  | 126m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  59m 54s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   9m 24s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   7m 14s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6767/6/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  javadoc  |  46m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 290m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 54s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  33m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  56m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  56m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 00s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   9m 23s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   7m 16s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6767/6/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  javadoc  |  47m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 316m 49s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   9m 17s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 911m 40s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6767 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | MINGW64_NT-10.0-17763 d7c133dbab49 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / 260ff827bf731de6129fe287e0007c1db35f61f1 |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6767/6/testReport/
 |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-hdfs-project/hadoop-hdfs-rbf U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6767/6/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> ZooKeeper based state stores use different ZK address configs
> -
>
> Key: HADOOP-19156
> URL: https://issues.apache.org/jira/browse/HADOOP-19156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: liu bin
>Priority: Major
>  Labels: pull-request-available
>
> Currently, the Zookeeper-based state stores of RM, YARN Federation, and HDFS 

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845234#comment-17845234
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

hadoop-yetus commented on PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#issuecomment-2104125153

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 00s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 01s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 01s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 01s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m 00s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m 00s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  90m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  40m 44s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   4m 36s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   4m 23s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6803/2/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  javadoc  |   4m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 150m 01s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 07s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  39m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  39m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 01s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 54s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   4m 28s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6803/2/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   4m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 161m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   5m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 501m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6803 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | MINGW64_NT-10.0-17763 0f898775fb38 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / 4c3f83ae8ab0bbf066008f320acf05223bfa2bbb |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6803/2/testReport/
 |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6803/2/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and the following are our findings:
>  # Around 96% (196

[jira] [Commented] (HADOOP-19137) [ABFS]:Extra getAcl call while calling the very first API of FileSystem

2024-05-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845233#comment-17845233
 ] 

ASF GitHub Bot commented on HADOOP-19137:
-

hadoop-yetus commented on PR #6752:
URL: https://github.com/apache/hadoop/pull/6752#issuecomment-2104119250

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  20m 20s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 49s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 46s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 26s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  84m  7s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6752/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6752 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 38dd9da8f30d 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d4cdd7e513407a1b5c4af35063e1f4dea146f068 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6752/12/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6752/12/console |
   | versions | git=2.25.1 maven=3.6.3

[jira] [Commented] (HADOOP-19167) Change of Codec configuration does not work

2024-05-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845216#comment-17845216
 ] 

ASF GitHub Bot commented on HADOOP-19167:
-

skyskyhu commented on PR #6807:
URL: https://github.com/apache/hadoop/pull/6807#issuecomment-2103973933

   @ChenSammi  @jojochuang  @ayushtkn Could you please help me review this PR 
when you have free time~ Thanks a lot.




> Change of Codec configuration does not work
> ---
>
> Key: HADOOP-19167
> URL: https://issues.apache.org/jira/browse/HADOOP-19167
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: compress
>Reporter: Zhikai Hu
>Priority: Minor
>  Labels: pull-request-available
>
> In one of my projects, I need to dynamically adjust the compression level for 
> different files. 
> However, I found that in most cases the new compression level does not take 
> effect as expected; the old compression level continues to be used.
> Here is the relevant code snippet:
> ZStandardCodec zStandardCodec = new ZStandardCodec();
> zStandardCodec.setConf(conf);
> conf.set("io.compression.codec.zstd.level", "5"); // level may change dynamically
> conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
> writer = SequenceFile.createWriter(conf,
>     SequenceFile.Writer.file(sequenceFilePath),
>     SequenceFile.Writer.keyClass(LongWritable.class),
>     SequenceFile.Writer.valueClass(BytesWritable.class),
>     SequenceFile.Writer.compression(CompressionType.BLOCK));
> The reason is that the SequenceFile.Writer.init() method calls 
> CodecPool.getCompressor(codec, null) to get a compressor. 
> If the compressor is a reused instance, the conf is not applied because it is 
> passed as null:
> public static Compressor getCompressor(CompressionCodec codec, Configuration conf) {
>   Compressor compressor = borrow(compressorPool, codec.getCompressorType());
>   if (compressor == null) {
>     compressor = codec.createCompressor();
>     LOG.info("Got brand-new compressor [" + codec.getDefaultExtension() + "]");
>   } else {
>     compressor.reinit(conf);   // conf is null here
>     ..
>  
> Please also refer to my unit test to reproduce the bug. 
> To address this bug, I modified the code to ensure that the configuration is 
> read back from the codec when a compressor is reused.
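
A self-contained caller-side illustration of the behaviour described above (class and variable names are illustrative, not part of the patch): fetching the compressor from CodecPool with the Configuration attached makes a reused instance pick up the new level via reinit(conf), which SequenceFile.Writer.init() skips by passing null.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.Compressor;
import org.apache.hadoop.io.compress.ZStandardCodec;

public class ZstdLevelExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("io.compression.codec.zstd.level", "5"); // new level to apply

    ZStandardCodec codec = new ZStandardCodec();
    codec.setConf(conf);

    // Passing conf here means a pooled, reused compressor is reinit()'d with
    // it, so the new level takes effect; passing null (as the writer does)
    // leaves a reused compressor on its old settings.
    Compressor compressor = CodecPool.getCompressor(codec, conf);
    try {
      // ... use the compressor, e.g. codec.createOutputStream(out, compressor) ...
    } finally {
      CodecPool.returnCompressor(compressor);
    }
  }
}
```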



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19167) Change of Codec configuration does not work

2024-05-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845202#comment-17845202
 ] 

ASF GitHub Bot commented on HADOOP-19167:
-

hadoop-yetus commented on PR #6807:
URL: https://github.com/apache/hadoop/pull/6807#issuecomment-2103901140

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  15m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  16m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  15m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 46s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 39s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  3s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 224m 13s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6807/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6807 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 00683afafac5 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 
20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cbd328a2a6aa1b7487409e6620451e91c0e0ae53 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6807/2/testReport/ |
   | Max. process+thread count | 1429 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6807/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Change of Co

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845178#comment-17845178
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

ChenSammi commented on PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#issuecomment-2103782158

   > @ChenSammi , Please add your input as well in case I missed anything. 
Thanks.
   
   Yes, removing the synchronized directly has potential risks for subclasses 
of AbstractDelegationTokenSecretManager.  Using a LOCK is a safe way to replace 
synchronized and can improve concurrency too. That's the motivation for this 
implementation. @Hexiaoqiao . 
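
A minimal, self-contained sketch of the idea discussed above, using a hypothetical stand-in class rather than the actual patch: an explicit ReentrantLock replaces method-level synchronized, keeping a single, well-defined locking point that subclasses can share.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical stand-in for a delegation-token secret manager; only the
// locking pattern is the point here, not the real class or its fields.
class SketchSecretManager {
  private final ReentrantLock lock = new ReentrantLock();
  private final Map<String, byte[]> tokenPasswords = new HashMap<>();

  void storeToken(String tokenId, byte[] password) {
    lock.lock();
    try {
      tokenPasswords.put(tokenId, password.clone());
    } finally {
      lock.unlock();
    }
  }

  void verifyToken(String tokenId, byte[] password) {
    lock.lock(); // was: synchronized on the whole secret-manager instance
    try {
      byte[] stored = tokenPasswords.get(tokenId);
      if (stored == null || !Arrays.equals(stored, password)) {
        throw new IllegalArgumentException("token verification failed: " + tokenId);
      }
    } finally {
      lock.unlock();
    }
  }
}
```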
   
   




> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and the following are our findings:
>  # Around 96% (196 out of 200) KMS container threads were in BLOCKED state at 
> the following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then the process crashed.
>  
> {code:java}
> http-nio-9292-exec-200PRIORITY : 5THREAD ID : 0X7F075C157800NATIVE ID : 
> 0X2C87FNATIVE ID (DECIMAL) : 182399STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> All 199 of the 200 threads were blocked at the above point.
> The lock they are waiting for is held by a thread that was trying to 
> createPassword and publish the result to ZK.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org.apache.curator.framework.recipes.shared.SharedCount.trySetCount(SharedCount.java:116)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrSharedCount(ZKDelegationTokenSecretManager.java:586)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrementDelegationTokenSeqNum(ZKDelegationTokenSecretManager.java:601)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:402)
> - locked <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:48)
> at org.apache.hadoop.securit

[jira] [Commented] (HADOOP-18508) support multiple s3a integration test runs on same bucket in parallel

2024-05-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845140#comment-17845140
 ] 

ASF GitHub Bot commented on HADOOP-18508:
-

hadoop-yetus commented on PR #5081:
URL: https://github.com/apache/hadoop/pull/5081#issuecomment-2103496441

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 04s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 00s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 00s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 00s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m 00s |  |  xmllint was not available.  |
   | +0 :ok: |  markdownlint  |   0m 00s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m 01s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m 00s |  |  The patch appears to 
include 17 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 19s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  90m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  41m 33s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   6m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  12m 55s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   9m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 167m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   8m 08s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  38m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  38m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 00s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   6m 08s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  12m 47s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |  10m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 182m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   6m 04s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 546m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/5081 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint 
markdownlint |
   | uname | MINGW64_NT-10.0-17763 7a8d877045a1 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / ad85edda7c00e61b40ad231a03c0fea8cda55ed7 |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-5081/3/testReport/
 |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-5081/3/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> support multiple s3a integration test runs on same bucket in parallel
> -
>
> Key: HADOOP-18508
> URL: https://issues.apache.org/jira/browse/HADOOP-18508
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.9
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>
> To have (internal, sorry) Jenkins test runs work in parallel, they need to 
> share the same bucket, so the build:
> # must have a prefix for the job id, which is passed in to the path used for forks
> # must support disabling the root tests so parallel runs don't stamp on each other
> A plausible invocation is sketched below.
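A plausible invocation under those requirements might look like the following; -Dparallel-tests and -DtestsThreadCount are long-standing hadoop-aws test switches, while the job.id and root.tests.enabled property names are assumptions based on the description above, not verified against the merged patch:

```
mvn -T 1C verify -Dparallel-tests -DtestsThreadCount=8 \
    -Djob.id=001 -Droot.tests.enabled=false
```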



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18958) UserGroupInformation debug log improve

2024-05-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845063#comment-17845063
 ] 

ASF GitHub Bot commented on HADOOP-18958:
-

hadoop-yetus commented on PR #6255:
URL: https://github.com/apache/hadoop/pull/6255#issuecomment-2103047392

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 01s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 00s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 00s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 00s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m 01s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m 00s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  84m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  37m 18s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   4m 24s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   4m 14s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6255/3/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  javadoc  |   4m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 138m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  35m 27s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  35m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 00s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 27s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   4m 22s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6255/3/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   4m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 152m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   6m 00s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 467m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6255 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | MINGW64_NT-10.0-17763 c0abc9d436db 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / 090058953780461aff5f7f604d9af538e37b355b |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6255/3/testReport/
 |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6255/3/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> UserGroupInformation debug log improve
> --
>
> Key: HADOOP-18958
> URL: https://issues.apache.org/jira/browse/HADOOP-18958
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0, 3.3.5
>Reporter: wangzhihui
>Priority: Minor
>  Labels: pull-request-available
> Attachments: 20231029-122825-1.jpeg, 20231029-122825.jpeg, 
> 20231030-143525.jpeg, image-2023-10-29-09-47-56-489.png, 
> image-2023-10-30-14-35-11-161.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
>       Using "new Exception()" to print the call stack of the doAs method in 
> the UserGroupInformation class prints meaningless exception information and 
> too many call-stack frames, which is not conducive to troubleshooting.
> *example:*
> !20231029-122825.jpeg|width=991,heig
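For illustration, a quieter, debug-only way to capture the doAs() call site without allocating a throwaway Exception is sketched below; this is a hypothetical refinement, not the attached patch:

```
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DoAsLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(DoAsLoggingSketch.class);

  // Build the caller stack only when debug logging is enabled, avoiding the
  // noise of an Exception type and message in the output.
  static void logDoAsCallSite() {
    if (LOG.isDebugEnabled()) {
      StringBuilder stack = new StringBuilder("doAs() called from:");
      for (StackTraceElement frame : Thread.currentThread().getStackTrace()) {
        stack.append(System.lineSeparator()).append("\tat ").append(frame);
      }
      LOG.debug("{}", stack);
    }
  }
}
```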

[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-05-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844991#comment-17844991
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

hadoop-yetus commented on PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#issuecomment-2102708307

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 03s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 00s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 00s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 00s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m 00s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m 00s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m 00s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  93m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 06s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   4m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 12s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 53s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 154m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  | 156m 46s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 00s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 09s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 167m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   5m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 439m 35s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6552 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | MINGW64_NT-10.0-17763 ca0f186c13b4 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / 820e2e0d0c98b3441a71f0101915990911052959 |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6552/3/testReport/
 |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6552/3/console
 |
   | versions | git=2.45.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider 
> Implementation
> 
>
> Key: HADOOP-18516
> URL: https://issues.apache.org/jira/browse/HADOOP-18516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
>
> This PR introduces a new configuration for fixed SAS tokens: 
> *"fs.azure.sas.fixed.token"*
> Using this new configuration, users can configure a fixed SAS token in the 
> account settings file itself. Ideally, this should be used with SAS tokens 
> that are scoped at the container or account level (Service or Account SAS), 
> which can be considered constant for one account or container across 
> multiple operations.
> The other method of using a SAS token remains valid as well, where a user 
> provides a custom impleme
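For illustration, wiring the new key into a configuration could look like this; the base key name comes from the issue text, while the account-scoped form and the account name are assumptions by analogy with other per-account ABFS settings:

```
import org.apache.hadoop.conf.Configuration;

public class FixedSasTokenExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Key name taken from the issue text; the token value is a placeholder.
    conf.set("fs.azure.sas.fixed.token", "<service-or-account-sas-token>");
    // Hypothetical account-scoped form (account name is a placeholder):
    conf.set("fs.azure.sas.fixed.token.myaccount.dfs.core.windows.net",
        "<service-or-account-sas-token>");
  }
}
```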

[jira] [Commented] (HADOOP-19136) Upgrade commons-io to 2.16.1

2024-05-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844987#comment-17844987
 ] 

ASF GitHub Bot commented on HADOOP-19136:
-

hadoop-yetus commented on PR #6704:
URL: https://github.com/apache/hadoop/pull/6704#issuecomment-2102668027

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 00s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m 00s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 00s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shellcheck  |   0m 00s |  |  Shellcheck was not available.  |
   | +0 :ok: |  shelldocs  |   0m 00s |  |  Shelldocs was not available.  |
   | +0 :ok: |  xmllint  |   0m 01s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m 00s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m 00s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 12s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  87m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  38m 41s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |  22m 37s | 
[/branch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6704/4/artifact/out/branch-mvnsite-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  javadoc  |  15m 53s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 312m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  86m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  38m 02s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  38m 02s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 01s |  |  The patch has no blanks 
issues.  |
   | -1 :x: |  mvnsite  |  22m 08s | 
[/patch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6704/4/artifact/out/patch-mvnsite-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  javadoc  |  15m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 184m 05s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   6m 12s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 640m 44s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6704 |
   | Optional Tests | dupname asflicense codespell detsecrets shellcheck 
shelldocs compile javac javadoc mvninstall mvnsite unit shadedclient xmllint |
   | uname | MINGW64_NT-10.0-17763 0443251d679b 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / 4fbf034738fb716941c3f875cb6b6b8fd200db8f |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6704/4/testReport/
 |
   | modules | C: hadoop-project . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6704/4/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Upgrade commons-io to 2.16.1
> 
>
> Key: HADOOP-19136
> URL: https://issues.apache.org/jira/browse/HADOOP-19136
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.1
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
>
> commons-io can be upgraded from 2.14.0 to 2.16.0, try to upgrade.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19131) Assist reflection IO with WrappedOperations class

2024-05-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844984#comment-17844984
 ] 

ASF GitHub Bot commented on HADOOP-19131:
-

hadoop-yetus commented on PR #6686:
URL: https://github.com/apache/hadoop/pull/6686#issuecomment-2102654014

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 02s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 01s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 01s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 01s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m 01s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m 00s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m 00s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 19s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  92m 06s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  41m 46s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   6m 04s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   4m 31s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6686/6/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  javadoc  |   9m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 165m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  | 168m 04s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 19s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   8m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  38m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  38m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 00s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   6m 07s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   4m 33s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6686/6/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   9m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 175m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   5m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 538m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6686 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | MINGW64_NT-10.0-17763 ea36ff49e5e4 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / 0dad2aa4583bda451a2a170222f0cde21803498a |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6686/6/testReport/
 |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6686/6/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Assist reflection IO with WrappedOperations class
> -
>
> Key: HADOOP-19131
> URL: https://issues.apache.org/jira/browse/HADOOP-19131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>
> parquet, avro etc are still stuck building with older 

[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-05-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844977#comment-17844977
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

hadoop-yetus commented on PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#issuecomment-2102606424

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 04s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m 04s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 04s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m 04s |  |  markdownlint was not available.  
|
   | +0 :ok: |  spotbugs  |   0m 01s |  |  spotbugs executables are not 
available.  |
   | +1 :green_heart: |  @author  |   0m 00s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m 00s |  |  The patch appears to 
include 13 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 19s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  92m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  41m 00s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   6m 19s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   4m 34s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6699/5/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  javadoc  |   9m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 158m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  | 161m 17s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 20s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   7m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  38m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  38m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 00s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   6m 18s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   4m 39s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6699/5/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   9m 31s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 172m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   5m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 529m 07s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6699 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | MINGW64_NT-10.0-17763 f5a1d16b41d2 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / 20f4f2b3efb1e27b37011f08dfdb0959d9626f10 |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6699/5/testReport/
 |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-azure 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6699/5/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> [ABFS]: No GetPathStatus call for opening AbfsInputStream
> -
>
> Key: HADOOP-19139
> URL: https://issues.apache.org/jira/browse/HADOOP-19139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>
> Read API gives contentLen and etag of the path. This information would be 
> us

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844971#comment-17844971
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

vikaskr22 commented on PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#issuecomment-2102594497

   Hi @Hexiaoqiao, technically I haven't seen or observed any issues so far, 
but @ChenSammi has some concerns about concurrency in the other subclasses of 
AbstractDelegationTokenSecretManager.java.
   
   I did my testing with Ranger KMS, where ZKDelegationTokenSecretManager.java 
is the implementing class. That means our test suite only verifies the 
ZK-specific cases. There are other implementations, for HDFS federation, YARN, 
etc., and I don't have a test suite that ensures concurrency stability for 
those implementations as well.
   
   @ChenSammi's point is that, as part of the last commit, I removed 
"synchronization" from the superclass and took care of concurrency in the ZK 
subclass. So I was breaking the concurrency guarantee at the API level, and 
every implementing class would need to take care of it itself; at the same 
time, I don't have a test suite internally that verifies anything except the 
Ranger KMS DT usage.
   
   One more point she raised: what about subclasses outside this repo? They 
might have written their code on the assumption that the lock is acquired at 
the API level.
   
   I went through the source code of the other implementing classes, and it 
still looks fine to me. But given the lack of an automated test suite and the 
nature of concurrency bugs, I agreed to implement this using the Lock API.
   
   Using the ReadWriteLock API, I can at least unblock multiple reader 
threads: verifyToken() can now be invoked by multiple threads concurrently 
thanks to the read lock, while the write lock still ensures thread safety at 
the API level. A minimal sketch of this pattern follows below.
   
   @ChenSammi, please add your input as well in case I missed anything. Thanks.
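   
   For illustration only, here is that minimal sketch of the read/write 
split; the class and method bodies are stand-ins, not the actual patch:
   
   ```
   import java.util.Arrays;
   import java.util.concurrent.locks.ReentrantReadWriteLock;
   
   class SecretManagerLockSketch {
     private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
   
     boolean verifyToken(String tokenId, byte[] password) {
       lock.readLock().lock();  // many verifiers may hold the read lock at once
       try {
         return Arrays.equals(password, lookupPassword(tokenId));
       } finally {
         lock.readLock().unlock();
       }
     }
   
     byte[] createPassword(String tokenId) {
       lock.writeLock().lock();  // exclusive: blocks readers and other writers
       try {
         byte[] password = newPassword(tokenId);
         publish(tokenId, password);  // e.g. the slow ZooKeeper update
         return password;
       } finally {
         lock.writeLock().unlock();
       }
     }
   
     // Stubs standing in for the real secret-manager internals.
     private byte[] lookupPassword(String tokenId) { return new byte[0]; }
     private byte[] newPassword(String tokenId) { return new byte[0]; }
     private void publish(String tokenId, byte[] password) { }
   }
   ```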




> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and the following are our findings:
>  # Around 96% (196 out of 200) of the KMS container threads were in the 
> BLOCKED state at the following points:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then the process crashed.
>  
> {code:java}
> http-nio-9292-exec-200
> PRIORITY : 5
> THREAD ID : 0X7F075C157800
> NATIVE ID : 0X2C87F
> NATIVE ID (DECIMAL) : 182399
> STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> 199 of the 200 threads were blocked at the above point.
> The lock they were waiting for was held by a thread that was running
> createPassword and publishing the result to ZooKeeper.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93

[jira] [Commented] (HADOOP-18679) Add API for bulk/paged object deletion

2024-05-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844953#comment-17844953
 ] 

ASF GitHub Bot commented on HADOOP-18679:
-

hadoop-yetus commented on PR #6726:
URL: https://github.com/apache/hadoop/pull/6726#issuecomment-2102481437

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 07s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m 01s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 01s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m 01s |  |  xmllint was not available.  |
   | +0 :ok: |  spotbugs  |   0m 01s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  markdownlint  |   0m 01s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m 01s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m 00s |  |  The patch appears to 
include 11 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  | 107m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  48m 30s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   7m 10s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   5m 19s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6726/5/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  javadoc  |  23m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 225m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  | 228m 42s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 57s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  19m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  45m 43s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  45m 43s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 01s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   7m 25s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   5m 28s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6726/5/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  javadoc  |  24m 00s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 232m 12s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   7m 14s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 700m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6726 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle 
markdownlint |
   | uname | MINGW64_NT-10.0-17763 296d6abd6fb2 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / e37d88f764665c8530097bbed890a5935a5fd1f0 |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6726/5/testReport/
 |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs hadoop-tools/hadoop-aws 
hadoop-tools/hadoop-azure U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6726/5/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Add API for bulk/paged object deletion
> --
>
> Key: HADOOP-18679
> URL: https://issues.apache.org/jira/browse/HADOOP-18679
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>

[jira] [Commented] (HADOOP-19167) Change of Codec configuration does not work

2024-05-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844925#comment-17844925
 ] 

ASF GitHub Bot commented on HADOOP-19167:
-

skyskyhu opened a new pull request, #6807:
URL: https://github.com/apache/hadoop/pull/6807

   [HADOOP-19167](https://issues.apache.org/jira/browse/HADOOP-19167) Change of 
Codec configuration does not work
   
   ### Description of PR
   In one of my projects, I need to dynamically adjust the compression level 
for different files.
   However, I found that in most cases the new compression level does not 
take effect as expected; the old compression level continues to be used.
   Here is the relevant code snippet:
   ```
   ZStandardCodec zStandardCodec = new ZStandardCodec();
   zStandardCodec.setConf(conf);
   conf.set("io.compression.codec.zstd.level", "5"); // level may change dynamically
   conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
   writer = SequenceFile.createWriter(conf,
       SequenceFile.Writer.file(sequenceFilePath),
       SequenceFile.Writer.keyClass(LongWritable.class),
       SequenceFile.Writer.valueClass(BytesWritable.class),
       SequenceFile.Writer.compression(CompressionType.BLOCK));
   ```
   
   Take my unit test as another example:
   ```
   DefaultCodec codec1 = new DefaultCodec();
   Configuration conf = new Configuration();
   ZlibFactory.setCompressionLevel(conf, CompressionLevel.TWO);
   codec1.setConf(conf);
   Compressor comp1 = CodecPool.getCompressor(codec1);
   CodecPool.returnCompressor(comp1);
   
   DefaultCodec codec2 = new DefaultCodec();
   Configuration conf2 = new Configuration();
   CompressionLevel newCompressionLevel = CompressionLevel.THREE;
   ZlibFactory.setCompressionLevel(conf2, newCompressionLevel);
   codec2.setConf(conf2);
   Compressor comp2 = CodecPool.getCompressor(codec2);
   ```
   In the current code, the compression level of comp2 is 2, rather than the 
intended level of 3.
   
   The reason is that the SequenceFile.Writer.init() method calls 
CodecPool.getCompressor(codec) to get a compressor, which eventually calls 
CodecPool.getCompressor(codec, null).
   If the compressor is a reused instance, the conf is not applied because it 
is passed as null:
   ```
   public static Compressor getCompressor(CompressionCodec codec, Configuration conf) {
     Compressor compressor = borrow(compressorPool, codec.getCompressorType());
     if (compressor == null) {
       compressor = codec.createCompressor();
       LOG.info("Got brand-new compressor [" + codec.getDefaultExtension() + "]");
     } else {
       compressor.reinit(conf);   // conf is null here
       ..
   ```
   
   Please also refer to my unit test to reproduce the bug.
   To address this bug, I modified the code to ensure that the configuration 
is read back from the codec when a compressor is reused (a sketch of this 
follows below).
   
   ### How was this patch tested?
   Unit test.
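   
   As a hedged illustration of the fix described above (not the merged patch; 
the helper name and class wrapper are invented for the sketch), the reuse path 
can fall back to the codec's own configuration:
   ```
   import org.apache.hadoop.conf.Configurable;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.io.compress.CompressionCodec;
   import org.apache.hadoop.io.compress.Compressor;
   
   final class CompressorReuseSketch {
     // When a pooled compressor is reused and the caller passed no
     // Configuration, read the configuration back from the codec so that
     // settings such as io.compression.codec.zstd.level are re-applied.
     static Compressor reinitForReuse(Compressor compressor,
         CompressionCodec codec, Configuration conf) {
       if (conf == null && codec instanceof Configurable) {
         conf = ((Configurable) codec).getConf();
       }
       compressor.reinit(conf);  // previously received null for reused instances
       return compressor;
     }
   }
   ```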




> Change of Codec configuration does not work
> ---
>
> Key: HADOOP-19167
> URL: https://issues.apache.org/jira/browse/HADOOP-19167
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: compress
>Reporter: Zhikai Hu
>Priority: Minor
>  Labels: pull-request-available
>
> In one of my projects, I need to dynamically adjust compression level for 
> different files. 
> However, I found that in most cases the new compression level does not take 
> effect as expected, the old compression level continues to be used.
> Here is the relevant code snippet:
> ZStandardCodec zStandardCodec = new ZStandardCodec();
> zStandardCodec.setConf(conf);
> conf.set("io.compression.codec.zstd.level", "5"); // level may change 
> dynamically
> conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
> writer = SequenceFile.createWriter(conf, 
>     SequenceFile.Writer.file(sequenceFilePath),
>     SequenceFile.Writer.keyClass(LongWritable.class),
>     SequenceFile.Writer.valueClass(BytesWritable.class),
>     SequenceFile.Writer.compression(CompressionType.BLOCK));
> The reason is that the SequenceFile.Writer.init() method calls 
> CodecPool.getCompressor(codec, null) to get a compressor. 
> If the compressor is a reused instance, the conf is not applied because it is 
> passed as null:
> public static Compressor getCompressor(CompressionCodec codec, Configuration conf) {
>   Compressor compressor = borrow(compressorPool, codec.getCompres

[jira] [Commented] (HADOOP-19167) Change of Codec configuration does not work

2024-05-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844920#comment-17844920
 ] 

ASF GitHub Bot commented on HADOOP-19167:
-

skyskyhu closed pull request #6798: HADOOP-19167 Bug Fix: Change of Codec 
configuration does not work
URL: https://github.com/apache/hadoop/pull/6798




> Change of Codec configuration does not work
> ---
>
> Key: HADOOP-19167
> URL: https://issues.apache.org/jira/browse/HADOOP-19167
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: compress
>Reporter: Zhikai Hu
>Priority: Minor
>  Labels: pull-request-available
>
> In one of my projects, I need to dynamically adjust compression level for 
> different files. 
> However, I found that in most cases the new compression level does not take 
> effect as expected, the old compression level continues to be used.
> Here is the relevant code snippet:
> ZStandardCodec zStandardCodec = new ZStandardCodec();
> zStandardCodec.setConf(conf);
> conf.set("io.compression.codec.zstd.level", "5"); // level may change 
> dynamically
> conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
> writer = SequenceFile.createWriter(conf, 
>     SequenceFile.Writer.file(sequenceFilePath),
>     SequenceFile.Writer.keyClass(LongWritable.class),
>     SequenceFile.Writer.valueClass(BytesWritable.class),
>     SequenceFile.Writer.compression(CompressionType.BLOCK));
> The reason is that the SequenceFile.Writer.init() method calls 
> CodecPool.getCompressor(codec, null) to get a compressor. 
> If the compressor is a reused instance, the conf is not applied because it is 
> passed as null:
> public static Compressor getCompressor(CompressionCodec codec, Configuration conf) {
>   Compressor compressor = borrow(compressorPool, codec.getCompressorType());
>   if (compressor == null) {
>     compressor = codec.createCompressor();
>     LOG.info("Got brand-new compressor [" + codec.getDefaultExtension() + "]");
>   } else {
>     compressor.reinit(conf);   // conf is null here
>     ..
>  
> Please also refer to my unit test to reproduce the bug. 
> To address this bug, I modified the code to ensure that the configuration is 
> read back from the codec when a compressor is reused.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18610) ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS

2024-05-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844883#comment-17844883
 ] 

ASF GitHub Bot commented on HADOOP-18610:
-

anujmodi2021 commented on code in PR #6787:
URL: https://github.com/apache/hadoop/pull/6787#discussion_r1595039239


##
hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md:
##
@@ -879,6 +879,42 @@ hierarchical namespace enabled, and set the following 
configuration settings:

   -->
 
+  
+  
> Key: HADOOP-18610
> URL: https://issues.apache.org/jira/browse/HADOOP-18610
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.3.4
>Reporter: Haifeng Chen
>Assignee: Anuj Modi
>Priority: Critical
>  Labels: pull-request-available
> Attachments: HADOOP-18610-preview.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In Jan 2023, Microsoft Azure AKS replaced its original pod-managed identity 
> with [Azure Active Directory (Azure AD) workload 
> identities|https://learn.microsoft.com/en-us/azure/active-directory/develop/workload-identities-overview]
>  (preview), which integrate with Kubernetes native capabilities to 
> federate with any external identity provider. This approach is simpler to 
> use and deploy.
> Refer to 
> [https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview] 
> and [https://azure.github.io/azure-workload-identity/docs/introduction.html] 
> for more details.
> The basic use scenario is to access Azure cloud resources (such as cloud 
> storage) from a Kubernetes (such as AKS) workload using an Azure managed 
> identity federated with a Kubernetes service account. The credential 
> environment variables projected into the pod by Azure AD workload identity 
> are like the following:
> AZURE_AUTHORITY_HOST: (Injected by the webhook, 
> [https://login.microsoftonline.com/])
> AZURE_CLIENT_ID: (Injected by the webhook)
> AZURE_TENANT_ID: (Injected by the webhook)
> AZURE_FEDERATED_TOKEN_FILE: (Injected by the webhook, 
> /var/run/secrets/azure/tokens/azure-identity-token)
> The token in the file pointed to by AZURE_FEDERATED_TOKEN_FILE is a JWT (JSON 
> Web Token) client assertion token, which we can send to 
> AZURE_AUTHORITY_HOST (the URL is AZURE_AUTHORITY_HOST + tenantId + 
> "/oauth2/v2.0/token") to request an AD token that can be used to directly 
> access the Azure cloud resources.
> This approach is very common and similar across cloud providers such as AWS 
> and GCP; the Hadoop AWS integration has WebIdentityTokenCredentialProvider to 
> handle the same case.
> The existing MsiTokenProvider can only handle the managed identity associated 
> with an Azure VM instance. We need to implement a WorkloadIdentityTokenProvider 
> which handles the Azure Workload Identity case. For this, we need to add one 
> method (getTokenUsingJWTAssertion) in AzureADAuthenticator, which will be used 
> by WorkloadIdentityTokenProvider. A minimal sketch of the environment wiring 
> follows below.
>  
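As that minimal sketch of the environment wiring (the variable names and endpoint shape come from the issue text; the HTTP exchange itself is deliberately elided, and the class is invented for illustration):

```
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class WorkloadIdentitySketch {
  public static void main(String[] args) throws IOException {
    // Webhook-injected environment, as listed above.
    String authority = System.getenv("AZURE_AUTHORITY_HOST");
    String tenantId = System.getenv("AZURE_TENANT_ID");
    String clientId = System.getenv("AZURE_CLIENT_ID");
    String tokenFile = System.getenv("AZURE_FEDERATED_TOKEN_FILE");

    // The projected file holds the JWT client assertion.
    String clientAssertion = new String(
        Files.readAllBytes(Paths.get(tokenFile)), StandardCharsets.UTF_8).trim();

    // Endpoint shape per the issue text; a real provider would POST the
    // assertion here and cache the returned AD access token.
    String tokenEndpoint = authority + tenantId + "/oauth2/v2.0/token";
    System.out.println("Client " + clientId + " would exchange a "
        + clientAssertion.length() + "-char assertion at " + tokenEndpoint);
  }
}
```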



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18610) ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS

2024-05-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844882#comment-17844882
 ] 

ASF GitHub Bot commented on HADOOP-18610:
-

anujmodi2021 commented on code in PR #6787:
URL: https://github.com/apache/hadoop/pull/6787#discussion_r1595038507


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/WorkloadIdentityTokenProvider.java:
##
@@ -0,0 +1,144 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.oauth2;
+
+import java.io.File;
+import java.io.IOException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.thirdparty.com.google.common.base.Strings;
+import org.apache.hadoop.util.Preconditions;
+
+/**
+ * Provides tokens based on Azure AD Workload Identity.
+ */
+public class WorkloadIdentityTokenProvider extends AccessTokenProvider {
+
+  private static final String OAUTH2_TOKEN_PATH = "/oauth2/v2.0/token";
+  private static final long ONE_HOUR = 3600 * 1000;
+  private static final Logger LOG = 
LoggerFactory.getLogger(AccessTokenProvider.class);
+
+  private final String authEndpoint;
+  private final String clientId;
+  private final String tokenFile;
+  private long tokenFetchTime = -1;
+
+  public WorkloadIdentityTokenProvider(final String authority, final String 
tenantId,
+  final String clientId, final String tokenFile) {
+Preconditions.checkNotNull(authority, "authority");
+Preconditions.checkNotNull(tenantId, "tenantId");
+Preconditions.checkNotNull(clientId, "clientId");
+Preconditions.checkNotNull(tokenFile, "tokenFile");
+
+this.authEndpoint = authority + tenantId + OAUTH2_TOKEN_PATH;
+this.clientId = clientId;
+this.tokenFile = tokenFile;
+  }
+
+  @Override
+  protected AzureADToken refreshToken() throws IOException {
+LOG.debug("AADToken: refreshing token from JWT Assertion");
+String clientAssertion = getClientAssertion();
+AzureADToken token = getTokenUsingJWTAssertion(clientAssertion);
+tokenFetchTime = System.currentTimeMillis();
+return token;
+  }
+
+  /**
+   * Gets the Azure AD token from a client assertion in JWT format.
+   * This method exists to make unit testing possible.
+   *
+   * @param clientAssertion the client assertion.
+   * @return the Azure AD token.
+   * @throws IOException if there is a failure in connecting to Azure AD.
+   */
+  @VisibleForTesting
+  AzureADToken getTokenUsingJWTAssertion(String clientAssertion) throws 
IOException {
+return AzureADAuthenticator
+.getTokenUsingJWTAssertion(authEndpoint, clientId, clientAssertion);
+  }
+
+  /**
+   * Checks if the token is about to expire as per base expiry logic.
+   * Otherwise, try to expire if enough time has elapsed since the last 
refresh.
+   *
+   * @return true if the token is expiring in next 1 hour or if a token has
+   * never been fetched
+   */
+  @Override
+  protected boolean isTokenAboutToExpire() {
+return super.isTokenAboutToExpire() || 
hasEnoughTimeElapsedSinceLastRefresh();
+  }
+
+  /**
+   * Checks to see if enough time has elapsed since the last token refresh.
+   *
+   * @return true if the token was last refreshed more than an hour ago.
+   */
+  protected boolean hasEnoughTimeElapsedSinceLastRefresh() {
+if (getTokenFetchTime() == -1) {
+  return true;
+}
+boolean expiring = false;
+long elapsedTimeSinceLastTokenRefreshInMillis =
+System.currentTimeMillis() - getTokenFetchTime();
+// In case token is not refreshed for 1 hr or any clock skew issues,
+// refresh token.
+expiring = elapsedTimeSinceLastTokenRefreshInMillis >= ONE_HOUR
+|| elapsedTimeSinceLastTokenRefreshInMillis < 0;

Review Comment:
   This is also the same logic, ported from the MSI token provider.





> ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS
> 

[jira] [Commented] (HADOOP-18610) ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS

2024-05-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844881#comment-17844881
 ] 

ASF GitHub Bot commented on HADOOP-18610:
-

anujmodi2021 commented on code in PR #6787:
URL: https://github.com/apache/hadoop/pull/6787#discussion_r1595037949


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/WorkloadIdentityTokenProvider.java:
##
@@ -0,0 +1,144 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.oauth2;
+
+import java.io.File;
+import java.io.IOException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.thirdparty.com.google.common.base.Strings;
+import org.apache.hadoop.util.Preconditions;
+
+/**
+ * Provides tokens based on Azure AD Workload Identity.
+ */
+public class WorkloadIdentityTokenProvider extends AccessTokenProvider {
+
+  private static final String OAUTH2_TOKEN_PATH = "/oauth2/v2.0/token";
+  private static final long ONE_HOUR = 3600 * 1000;
+  private static final Logger LOG =
+      LoggerFactory.getLogger(AccessTokenProvider.class);
+
+  private final String authEndpoint;
+  private final String clientId;
+  private final String tokenFile;
+  private long tokenFetchTime = -1;
+
+  public WorkloadIdentityTokenProvider(final String authority,
+      final String tenantId, final String clientId, final String tokenFile) {
+    Preconditions.checkNotNull(authority, "authority");
+    Preconditions.checkNotNull(tenantId, "tenantId");
+    Preconditions.checkNotNull(clientId, "clientId");
+    Preconditions.checkNotNull(tokenFile, "tokenFile");
+
+    this.authEndpoint = authority + tenantId + OAUTH2_TOKEN_PATH;
+    this.clientId = clientId;
+    this.tokenFile = tokenFile;
+  }
+
+  @Override
+  protected AzureADToken refreshToken() throws IOException {
+    LOG.debug("AADToken: refreshing token from JWT Assertion");
+    String clientAssertion = getClientAssertion();
+    AzureADToken token = getTokenUsingJWTAssertion(clientAssertion);
+    tokenFetchTime = System.currentTimeMillis();
+    return token;
+  }
+
+  /**
+   * Gets the Azure AD token from a client assertion in JWT format.
+   * This method exists to make unit testing possible.
+   *
+   * @param clientAssertion the client assertion.
+   * @return the Azure AD token.
+   * @throws IOException if there is a failure in connecting to Azure AD.
+   */
+  @VisibleForTesting
+  AzureADToken getTokenUsingJWTAssertion(String clientAssertion)
+      throws IOException {
+    return AzureADAuthenticator
+        .getTokenUsingJWTAssertion(authEndpoint, clientId, clientAssertion);
+  }
+
+  /**
+   * Checks if the token is about to expire as per base expiry logic.
+   * Otherwise, try to expire if enough time has elapsed since the last
+   * refresh.
+   *
+   * @return true if the token is expiring in next 1 hour or if a token has
+   * never been fetched
+   */
+  @Override
+  protected boolean isTokenAboutToExpire() {
+    return super.isTokenAboutToExpire()
+        || hasEnoughTimeElapsedSinceLastRefresh();
+  }
+
+  /**
+   * Checks to see if enough time has elapsed since the last token refresh.
+   *
+   * @return true if the token was last refreshed more than an hour ago.
+   */
+  protected boolean hasEnoughTimeElapsedSinceLastRefresh() {
+    if (getTokenFetchTime() == -1) {
+      return true;
+    }
+    boolean expiring = false;
+    long elapsedTimeSinceLastTokenRefreshInMillis =
+        System.currentTimeMillis() - getTokenFetchTime();
+    // In case token is not refreshed for 1 hr or any clock skew issues,
+    // refresh token.
+    expiring = elapsedTimeSinceLastTokenRefreshInMillis >= ONE_HOUR

Review Comment:
   This is the same logic used today for MSI tokens as well; I guess the original author ported the logic from there.
   Also, as a recent change made by Azure services, the lifetime of an Azure token was made variable; earlier, I believe, it was fixed at 1 hour.
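
   Given that note, a minimal sketch of expiry driven by the token's own lifetime rather than a fixed one hour. It assumes the token object exposes its expiry as a java.util.Date (as the ABFS AzureADToken does); the five-minute buffer is an illustrative choice, not anything from the patch.

{code:java}
import java.util.Date;

// Sketch only: decide "about to expire" from the token's own expiry instead
// of a hard-coded one-hour window, since token lifetimes may now vary.
final class TokenExpirySketch {
  private static final long BUFFER_MILLIS = 5 * 60 * 1000L; // illustrative

  private TokenExpirySketch() {
  }

  static boolean isAboutToExpire(Date expiry) {
    if (expiry == null) {
      return true; // no token fetched yet: force a refresh
    }
    long remaining = expiry.getTime() - System.currentTimeMillis();
    return remaining <= BUFFER_MILLIS;
  }
}
{code}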

[jira] [Commented] (HADOOP-19152) Do not hard code security providers.

2024-05-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844879#comment-17844879
 ] 

ASF GitHub Bot commented on HADOOP-19152:
-

hadoop-yetus commented on PR #6739:
URL: https://github.com/apache/hadoop/pull/6739#issuecomment-2102051696

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 03s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m 01s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 01s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m 01s |  |  xmllint was not available.  |
   | +0 :ok: |  spotbugs  |   0m 01s |  |  spotbugs executables are not 
available.  |
   | +1 :green_heart: |  @author  |   0m 00s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m 01s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  | 127m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  60m 04s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   7m 04s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   6m 48s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6739/6/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  javadoc  |   7m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 199m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   7m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  56m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  56m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 01s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   7m 08s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   6m 46s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6739/6/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   7m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 227m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   9m 14s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 700m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6739 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | MINGW64_NT-10.0-17763 d18de83f28f2 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / 6494370d68379abf2a4adbd0753f9a798abaad81 |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6739/6/testReport/
 |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6739/6/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Do not hard code security providers.
> 
>
> Key: HADOOP-19152
> URL: https://issues.apache.org/jira/browse/HADOOP-19152
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
>
> In order to support different security providers in different clusters, we 
> should not hard code a provider in our code.
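
As an illustration of the change being asked for, a hedged sketch: the provider name comes from configuration instead of a hard-coded literal. The existing `hadoop.security.crypto.jce.provider` key is assumed here, with an empty value meaning the JDK's default provider lookup order; Cipher is just one example of a place where a provider tends to get hard-coded.

{code:java}
import javax.crypto.Cipher;
import org.apache.hadoop.conf.Configuration;

// Sketch only: resolve the JCE provider from configuration so different
// clusters can plug in different security providers.
public final class ConfiguredCipherFactory {
  private ConfiguredCipherFactory() {
  }

  public static Cipher create(Configuration conf, String transformation)
      throws Exception {
    String provider = conf.getTrimmed("hadoop.security.crypto.jce.provider", "");
    return provider.isEmpty()
        ? Cipher.getInstance(transformation)             // JDK default order
        : Cipher.getInstance(transformation, provider);  // cluster-specific
  }
}
{code}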



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18610) ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844862#comment-17844862
 ] 

ASF GitHub Bot commented on HADOOP-18610:
-

anujmodi2021 commented on code in PR #6787:
URL: https://github.com/apache/hadoop/pull/6787#discussion_r1594972228


##
hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md:
##
@@ -879,6 +879,42 @@ hierarchical namespace enabled, and set the following configuration settings:

   -->
 
+  
+  
> Key: HADOOP-18610
> URL: https://issues.apache.org/jira/browse/HADOOP-18610
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.3.4
>Reporter: Haifeng Chen
>Assignee: Anuj Modi
>Priority: Critical
>  Labels: pull-request-available
> Attachments: HADOOP-18610-preview.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In Jan 2023, Microsoft Azure AKS replaced its original pod-managed identity 
> with [Azure Active Directory (Azure AD) workload 
> identities|https://learn.microsoft.com/en-us/azure/active-directory/develop/workload-identities-overview]
>  (preview), which integrate with the Kubernetes native capabilities to 
> federate with any external identity provider. This approach is simpler to 
> use and deploy.
> Refer to 
> [https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview] 
> and [https://azure.github.io/azure-workload-identity/docs/introduction.html] 
> for more details.
> The basic use scenario is to access Azure cloud resources (such as cloud 
> storage) from a Kubernetes (such as AKS) workload using an Azure managed 
> identity federated with a Kubernetes service account. The credential 
> environment variables projected into the pod by Azure AD workload identity 
> are like the following:
> AZURE_AUTHORITY_HOST: (Injected by the webhook, 
> [https://login.microsoftonline.com/])
> AZURE_CLIENT_ID: (Injected by the webhook)
> AZURE_TENANT_ID: (Injected by the webhook)
> AZURE_FEDERATED_TOKEN_FILE: (Injected by the webhook, 
> /var/run/secrets/azure/tokens/azure-identity-token)
> The token in the file pointed to by AZURE_FEDERATED_TOKEN_FILE is a JWT (JSON 
> Web Token) client assertion which we can send to AZURE_AUTHORITY_HOST (the 
> URL is AZURE_AUTHORITY_HOST + tenantId + "/oauth2/v2.0/token") to obtain an 
> AD token that can be used to directly access Azure cloud resources.
> This approach is very common and similar among cloud providers such as AWS 
> and GCP. The Hadoop AWS integration has WebIdentityTokenCredentialProvider to 
> handle the same case.
> The existing MsiTokenProvider can only handle the managed identity associated 
> with an Azure VM instance. We need to implement a WorkloadIdentityTokenProvider 
> which handles the Azure Workload Identity case. For this, we need to add one 
> method (getTokenUsingJWTAssertion) in AzureADAuthenticator, to be used by 
> WorkloadIdentityTokenProvider.
>  
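
To make the flow above concrete, a rough sketch of the token exchange, assuming only the standard OAuth2 client-credentials grant with a JWT client assertion; the storage scope, error handling, and JSON parsing are illustrative and elided.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

// Sketch only (Java 11+): read the projected assertion and POST the standard
// client-credentials grant. The storage scope is an example value.
public final class WorkloadIdentityExchangeSketch {

  public static String fetchTokenResponse() throws IOException {
    String authority = System.getenv("AZURE_AUTHORITY_HOST");
    String tenantId = System.getenv("AZURE_TENANT_ID");
    String clientId = System.getenv("AZURE_CLIENT_ID");
    // The JWT is base64url-encoded, so it is safe to send unescaped.
    String assertion = new String(
        Files.readAllBytes(Paths.get(System.getenv("AZURE_FEDERATED_TOKEN_FILE"))),
        StandardCharsets.UTF_8).trim();

    String body = "grant_type=client_credentials"
        + "&client_id=" + clientId
        + "&scope=https%3A%2F%2Fstorage.azure.com%2F.default"
        + "&client_assertion_type="
        + "urn%3Aietf%3Aparams%3Aoauth%3Aclient-assertion-type%3Ajwt-bearer"
        + "&client_assertion=" + assertion;

    // AZURE_AUTHORITY_HOST ends with "/", so simple concatenation works.
    URL url = new URL(authority + tenantId + "/oauth2/v2.0/token");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
    try (OutputStream out = conn.getOutputStream()) {
      out.write(body.getBytes(StandardCharsets.UTF_8));
    }
    try (InputStream in = conn.getInputStream()) {
      return new String(in.readAllBytes(), StandardCharsets.UTF_8); // JSON with access_token
    }
  }
}
{code}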



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18610) ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844860#comment-17844860
 ] 

ASF GitHub Bot commented on HADOOP-18610:
-

anujmodi2021 commented on code in PR #6787:
URL: https://github.com/apache/hadoop/pull/6787#discussion_r1594954265


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/WorkloadIdentityTokenProvider.java:
##
@@ -0,0 +1,144 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.oauth2;
+
+import java.io.File;
+import java.io.IOException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.thirdparty.com.google.common.base.Strings;
+import org.apache.hadoop.util.Preconditions;
+
+/**
+ * Provides tokens based on Azure AD Workload Identity.
+ */
+public class WorkloadIdentityTokenProvider extends AccessTokenProvider {
+
+  private static final String OAUTH2_TOKEN_PATH = "/oauth2/v2.0/token";
+  private static final long ONE_HOUR = 3600 * 1000;
+  private static final Logger LOG =
+      LoggerFactory.getLogger(AccessTokenProvider.class);
+
+  private final String authEndpoint;
+  private final String clientId;
+  private final String tokenFile;
+  private long tokenFetchTime = -1;
+
+  public WorkloadIdentityTokenProvider(final String authority,
+      final String tenantId, final String clientId, final String tokenFile) {
+    Preconditions.checkNotNull(authority, "authority");
+    Preconditions.checkNotNull(tenantId, "tenantId");
+    Preconditions.checkNotNull(clientId, "clientId");
+    Preconditions.checkNotNull(tokenFile, "tokenFile");
+
+    this.authEndpoint = authority + tenantId + OAUTH2_TOKEN_PATH;
+    this.clientId = clientId;
+    this.tokenFile = tokenFile;
+  }
+
+  @Override
+  protected AzureADToken refreshToken() throws IOException {
+    LOG.debug("AADToken: refreshing token from JWT Assertion");
+    String clientAssertion = getClientAssertion();
+    AzureADToken token = getTokenUsingJWTAssertion(clientAssertion);
+    tokenFetchTime = System.currentTimeMillis();
+    return token;
+  }
+
+  /**
+   * Gets the Azure AD token from a client assertion in JWT format.
+   * This method exists to make unit testing possible.
+   *
+   * @param clientAssertion the client assertion.
+   * @return the Azure AD token.
+   * @throws IOException if there is a failure in connecting to Azure AD.
+   */
+  @VisibleForTesting
+  AzureADToken getTokenUsingJWTAssertion(String clientAssertion)
+      throws IOException {
+    return AzureADAuthenticator
+        .getTokenUsingJWTAssertion(authEndpoint, clientId, clientAssertion);
+  }
+
+  /**
+   * Checks if the token is about to expire as per base expiry logic.
+   * Otherwise, try to expire if enough time has elapsed since the last
+   * refresh.
+   *
+   * @return true if the token is expiring in next 1 hour or if a token has
+   * never been fetched
+   */
+  @Override
+  protected boolean isTokenAboutToExpire() {
+    return super.isTokenAboutToExpire()
+        || hasEnoughTimeElapsedSinceLastRefresh();
+  }
+
+  /**
+   * Checks to see if enough time has elapsed since the last token refresh.
+   *
+   * @return true if the token was last refreshed more than an hour ago.
+   */
+  protected boolean hasEnoughTimeElapsedSinceLastRefresh() {
+    if (getTokenFetchTime() == -1) {
+      return true;
+    }
+    boolean expiring = false;
+    long elapsedTimeSinceLastTokenRefreshInMillis =
+        System.currentTimeMillis() - getTokenFetchTime();
+    // In case token is not refreshed for 1 hr or any clock skew issues,
+    // refresh token.
+    expiring = elapsedTimeSinceLastTokenRefreshInMillis >= ONE_HOUR
+        || elapsedTimeSinceLastTokenRefreshInMillis < 0;

Review Comment:
   There is a debug log added. Should I change it to warn then?
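
   For what it's worth, one way to split the two cases, purely illustrative and reusing LOG from the patch above: warn only on the suspicious negative elapsed time, and keep the routine hourly renewal at debug.

{code:java}
// Illustrative only -- not the patch itself. Warn on apparent clock skew
// (negative elapsed time); keep routine renewal at debug.
private boolean shouldRefresh(long elapsedMillis, long oneHourMillis) {
  if (elapsedMillis < 0) {
    LOG.warn("JWTToken: negative elapsed time since last fetch ({} ms); "
        + "possible clock skew, forcing renewal", elapsedMillis);
    return true;
  }
  if (elapsedMillis >= oneHourMillis) {
    LOG.debug("JWTToken: token renewing. Time elapsed since last token fetch:"
        + " {} milliseconds", elapsedMillis);
    return true;
  }
  return false;
}
{code}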





> ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS
> 

[jira] [Commented] (HADOOP-18610) ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844859#comment-17844859
 ] 

ASF GitHub Bot commented on HADOOP-18610:
-

anujmodi2021 commented on code in PR #6787:
URL: https://github.com/apache/hadoop/pull/6787#discussion_r1594952840


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/WorkloadIdentityTokenProvider.java:
##
@@ -0,0 +1,144 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.oauth2;
+
+import java.io.File;
+import java.io.IOException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.thirdparty.com.google.common.base.Strings;
+import org.apache.hadoop.util.Preconditions;
+
+/**
+ * Provides tokens based on Azure AD Workload Identity.
+ */
+public class WorkloadIdentityTokenProvider extends AccessTokenProvider {
+
+  private static final String OAUTH2_TOKEN_PATH = "/oauth2/v2.0/token";
+  private static final long ONE_HOUR = 3600 * 1000;
+  private static final Logger LOG =
+      LoggerFactory.getLogger(AccessTokenProvider.class);
+
+  private final String authEndpoint;
+  private final String clientId;
+  private final String tokenFile;
+  private long tokenFetchTime = -1;
+
+  public WorkloadIdentityTokenProvider(final String authority,
+      final String tenantId, final String clientId, final String tokenFile) {
+    Preconditions.checkNotNull(authority, "authority");
+    Preconditions.checkNotNull(tenantId, "tenantId");
+    Preconditions.checkNotNull(clientId, "clientId");
+    Preconditions.checkNotNull(tokenFile, "tokenFile");
+
+    this.authEndpoint = authority + tenantId + OAUTH2_TOKEN_PATH;
+    this.clientId = clientId;
+    this.tokenFile = tokenFile;
+  }
+
+  @Override
+  protected AzureADToken refreshToken() throws IOException {
+    LOG.debug("AADToken: refreshing token from JWT Assertion");
+    String clientAssertion = getClientAssertion();
+    AzureADToken token = getTokenUsingJWTAssertion(clientAssertion);
+    tokenFetchTime = System.currentTimeMillis();
+    return token;
+  }
+
+  /**
+   * Gets the Azure AD token from a client assertion in JWT format.
+   * This method exists to make unit testing possible.
+   *
+   * @param clientAssertion the client assertion.
+   * @return the Azure AD token.
+   * @throws IOException if there is a failure in connecting to Azure AD.
+   */
+  @VisibleForTesting
+  AzureADToken getTokenUsingJWTAssertion(String clientAssertion)
+      throws IOException {
+    return AzureADAuthenticator
+        .getTokenUsingJWTAssertion(authEndpoint, clientId, clientAssertion);
+  }
+
+  /**
+   * Checks if the token is about to expire as per base expiry logic.
+   * Otherwise, try to expire if enough time has elapsed since the last
+   * refresh.
+   *
+   * @return true if the token is expiring in next 1 hour or if a token has
+   * never been fetched
+   */
+  @Override
+  protected boolean isTokenAboutToExpire() {
+    return super.isTokenAboutToExpire()
+        || hasEnoughTimeElapsedSinceLastRefresh();
+  }
+
+  /**
+   * Checks to see if enough time has elapsed since the last token refresh.
+   *
+   * @return true if the token was last refreshed more than an hour ago.
+   */
+  protected boolean hasEnoughTimeElapsedSinceLastRefresh() {
+    if (getTokenFetchTime() == -1) {
+      return true;
+    }
+    boolean expiring = false;
+    long elapsedTimeSinceLastTokenRefreshInMillis =
+        System.currentTimeMillis() - getTokenFetchTime();
+    // In case token is not refreshed for 1 hr or any clock skew issues,
+    // refresh token.
+    expiring = elapsedTimeSinceLastTokenRefreshInMillis >= ONE_HOUR
+        || elapsedTimeSinceLastTokenRefreshInMillis < 0;
+    if (expiring) {
+      LOG.debug("JWTToken: token renewing. Time elapsed since last token fetch:"
+          + " {} milliseconds", elapsedTimeSinceLastTokenRefreshInMillis);
+    }
+

[jira] [Commented] (HADOOP-19156) ZooKeeper based state stores use different ZK address configs

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844852#comment-17844852
 ] 

ASF GitHub Bot commented on HADOOP-19156:
-

hadoop-yetus commented on PR #6767:
URL: https://github.com/apache/hadoop/pull/6767#issuecomment-2101905008

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 03s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 01s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 01s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 01s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m 01s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m 00s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m 00s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 27s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  93m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  43m 26s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   6m 11s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   4m 48s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6767/5/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  javadoc  |  30m 50s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 208m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 20s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  22m 02s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  38m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  38m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 00s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   6m 38s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   4m 47s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6767/5/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  javadoc  |  31m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 222m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   6m 09s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 646m 50s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6767 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | MINGW64_NT-10.0-17763 4260df6a58f1 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / 260ff827bf731de6129fe287e0007c1db35f61f1 |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6767/5/testReport/
 |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-hdfs-project/hadoop-hdfs-rbf U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6767/5/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> ZooKeeper based state stores use different ZK address configs
> -
>
> Key: HADOOP-19156
> URL: https://issues.apache.org/jira/browse/HADOOP-19156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: liu bin
>Priority: Major
>  Labels: pull-request-available
>
> Currently, the Zookeeper-based state stores of RM, YARN Federation, and HDFS 

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844848#comment-17844848
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

Hexiaoqiao commented on PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#issuecomment-2101893364

   Hi @vikaskr22 @ChenSammi, thanks for your work here. One nit concern: #6001 
tried to remove synchronization, then reverted and added a RWLock here; did you 
meet any issues while removing synchronization? Thanks.
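
   For readers following along, a minimal sketch of the read/write split under discussion, assuming token verification only reads shared state while password creation mutates it; the class and method bodies are illustrative, not the patch itself.

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the locking split: verifiers share a read lock and proceed in
// parallel; createPassword takes the write lock because it mutates state.
final class ApiLockSketch {
  private final ReentrantReadWriteLock apiLock = new ReentrantReadWriteLock();

  void verifyToken() {
    apiLock.readLock().lock();   // many verifiers may run concurrently
    try {
      // ... look up the token and compare passwords ...
    } finally {
      apiLock.readLock().unlock();
    }
  }

  void createPassword() {
    apiLock.writeLock().lock();  // exclusive: updates sequence numbers etc.
    try {
      // ... increment the delegation token sequence number, store token ...
    } finally {
      apiLock.writeLock().unlock();
    }
  }
}
{code}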




> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue and the following are our findings:
>  # Around 96% (196 out of 200) KMS container threads were in BLOCKED state at 
> following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then process crashed.
>  
> {code:java}
> http-nio-9292-exec-200PRIORITY : 5THREAD ID : 0X7F075C157800NATIVE ID : 
> 0X2C87FNATIVE ID (DECIMAL) : 182399STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> All 199 of the 200 threads were blocked at the above point.
> The lock they are waiting for is held by a thread that was trying to 
> createPassword and publish the same to ZK.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org.apache.curator.framework.recipes.shared.SharedCount.trySetCount(SharedCount.java:116)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrSharedCount(ZKDelegationTokenSecretManager.java:586)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrementDelegationTokenSeqNum(ZKDelegationTokenSecretManager.java:601)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:402)
> - locked <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:48)
> at org.apache.hadoop.security.token.Token.(Token.java:67)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.createToken(DelegationTokenManager.java:183)
>  

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844833#comment-17844833
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

ChenSammi commented on code in PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#discussion_r1594856131


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java:
##
@@ -169,21 +172,29 @@ public AbstractDelegationTokenSecretManager(long delegationKeyUpdateInterval,
   public void startThreads() throws IOException {
     Preconditions.checkState(!running);
     updateCurrentKey();
-    synchronized (this) {
+    this.apiLock.writeLock().lock();
+    try {
       running = true;
       tokenRemoverThread = new Daemon(new ExpiredTokenRemover());
       tokenRemoverThread.start();
+    }finally {

Review Comment:
   There needs to be a blank between "}" and "finally". Kindly check the other 
"finally" statements too. 
   
   Also, please check the checkstyle issues reported by the CI: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6803/2/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt
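
   The requested form, for clarity; the unlock call is assumed from context, since the quoted diff is truncated before it.

{code:java}
// Corrected excerpt: note the space between "}" and "finally".
this.apiLock.writeLock().lock();
try {
  running = true;
  tokenRemoverThread = new Daemon(new ExpiredTokenRemover());
  tokenRemoverThread.start();
} finally {
  this.apiLock.writeLock().unlock();
}
{code}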
 





> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue and the following are our findings:
>  # Around 96% (196 out of 200) KMS container threads were in BLOCKED state at 
> following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then process crashed.
>  
> {code:java}
> http-nio-9292-exec-200PRIORITY : 5THREAD ID : 0X7F075C157800NATIVE ID : 
> 0X2C87FNATIVE ID (DECIMAL) : 182399STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> All 199 of the 200 threads were blocked at the above point.
> The lock they are waiting for is held by a thread that was trying to 
> createPassword and publish the same to ZK.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org.apache.curator.framework.recipes.shared.SharedCount.trySetCount(SharedCount.java:116)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrSharedCount(ZKDelegationTokenSecretManager.java:586)
> at 
> org.apache.hadoop.security.token.delegation.ZKDeleg

[jira] [Commented] (HADOOP-19154) upgrade bouncy castle to 1.78.1 due to CVEs

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844816#comment-17844816
 ] 

ASF GitHub Bot commented on HADOOP-19154:
-

hadoop-yetus commented on PR #6755:
URL: https://github.com/apache/hadoop/pull/6755#issuecomment-2101690935

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 00s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m 00s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 00s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shellcheck  |   0m 00s |  |  Shellcheck was not available.  |
   | +0 :ok: |  shelldocs  |   0m 00s |  |  Shelldocs was not available.  |
   | +0 :ok: |  markdownlint  |   0m 01s |  |  markdownlint was not available.  
|
   | +0 :ok: |  xmllint  |   0m 00s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m 00s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m 00s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 40s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  87m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  38m 08s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |  21m 59s | 
[/branch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6755/5/artifact/out/branch-mvnsite-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  javadoc  |  14m 52s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 308m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  90m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  38m 04s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  38m 04s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 00s |  |  The patch has no blanks 
issues.  |
   | -1 :x: |  mvnsite  |  22m 17s | 
[/patch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6755/5/artifact/out/patch-mvnsite-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  javadoc  |  15m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 183m 04s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   6m 26s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 639m 59s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6755 |
   | Optional Tests | dupname asflicense codespell detsecrets shellcheck 
shelldocs mvnsite markdownlint compile javac javadoc mvninstall unit 
shadedclient xmllint |
   | uname | MINGW64_NT-10.0-17763 1a4babd1d2a8 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / c751d7db450412dbb5b56efc9e33fe953cf14a23 |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6755/5/testReport/
 |
   | modules | C: hadoop-project hadoop-cloud-storage-project/hadoop-cos . U: . 
|
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6755/5/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> upgrade bouncy castle to 1.78.1 due to CVEs
> ---
>
> Key: HADOOP-19154
> URL: https://issues.apache.org/jira/browse/HADOOP-19154
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0, 3.3.6
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> [https://www.bouncycastle.org/releasenotes.html#r1rv78]
> There is a v1.78.1 release but no notes for it yet.
> For v1.78
> h3. 2.1.5 Security Advisories.
> Release 1.78 deals with the following CVEs:
>  * CVE-2024-29857 - Importing an EC certificate with specially crafted F2m 
> parameters can cau

[jira] [Commented] (HADOOP-19137) [ABFS]:Extra getAcl call while calling the very first API of FileSystem

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844812#comment-17844812
 ] 

ASF GitHub Bot commented on HADOOP-19137:
-

hadoop-yetus commented on PR #6752:
URL: https://github.com/apache/hadoop/pull/6752#issuecomment-2101665324

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 02s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 01s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 01s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 01s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m 00s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m 00s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  | 108m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 55s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   5m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   6m 00s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   5m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 176m 06s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  | 179m 08s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 54s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 54s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 01s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 27s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 51s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 191m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   6m 25s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 503m 27s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6752 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | MINGW64_NT-10.0-17763 c17f46211cbb 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / 255b62dec2fba13f36beefde961d0059a65b0331 |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6752/4/testReport/
 |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6752/4/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> [ABFS]:Extra getAcl call while calling the very first API of FileSystem
> ---
>
> Key: HADOOP-19137
> URL: https://issues.apache.org/jira/browse/HADOOP-19137
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>
> The store doesn't flow the namespace information to the client. 
> In https://github.com/apache/hadoop/pull/6221, getIsNamespaceEnabled is added 
> to client methods; it checks whether the namespace information is present, and 
> if not, it makes a getAcl call and sets the field. Once the field is set, it 
> is used in future getIsNamespaceEnabled calls for a given AbfsClient.
> Since CPK (both global and encryptionContext) is only for HNS accounts, the 
> proposed fix is to fail fs init if the account is non-HNS and a CPK config is 
> given.
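
A sketch of the behaviour described above, not the actual AbfsClient code: the namespace flag is resolved once (the one extra getAcl) and then cached, and init fails fast when CPK is configured against a non-HNS account. All names here are illustrative.

{code:java}
import java.io.IOException;

// Sketch only: tri-state cache of the namespace flag plus the init-time check.
final class NamespaceCheckSketch {

  private Boolean namespaceEnabled; // null = not yet resolved

  synchronized boolean getIsNamespaceEnabled() throws IOException {
    if (namespaceEnabled == null) {
      namespaceEnabled = probeViaGetAcl(); // the one extra getAcl call
    }
    return namespaceEnabled;
  }

  void validateAtInit(boolean cpkConfigured) throws IOException {
    if (cpkConfigured && !getIsNamespaceEnabled()) {
      throw new IOException(
          "CPK is only supported on hierarchical-namespace (HNS) accounts");
    }
  }

  private boolean probeViaGetAcl() throws IOException {
    // Issue getAclStatus on the root path; a 400 response means non-HNS.
    return true; // placeholder for the real probe
  }
}
{code}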



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---

[jira] [Commented] (HADOOP-19013) fs.getXattrs(path) for S3FS doesn't have x-amz-server-side-encryption-aws-kms-key-id header.

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844809#comment-17844809
 ] 

ASF GitHub Bot commented on HADOOP-19013:
-

hadoop-yetus commented on PR #6646:
URL: https://github.com/apache/hadoop/pull/6646#issuecomment-2101656498

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6646/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 7 new + 2 unchanged - 0 fixed 
= 9 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 29s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 54s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 139m 49s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6646/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6646 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux afda44a1d9c2 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cabd13a910c861153a9e699a0a85d504e55d7be7 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6646/2/testReport/ |
   | Max. process+thread count | 559 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6646/2

[jira] [Commented] (HADOOP-19013) fs.getXattrs(path) for S3FS doesn't have x-amz-server-side-encryption-aws-kms-key-id header.

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844784#comment-17844784
 ] 

ASF GitHub Bot commented on HADOOP-19013:
-

mukund-thakur commented on code in PR #6646:
URL: https://github.com/apache/hadoop/pull/6646#discussion_r1594676316


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/EncryptionTestUtils.java:
##
@@ -111,4 +119,27 @@ public static void assertEncrypted(S3AFileSystem fs,
 }
   }
 
+  /**
+   * Assert that a path is encrypted with right encryption settings.
+   * @param fs filesystem.
+   * @param path path
+   * @param algorithm encryption algorithm.
+   * @param kmsKey full kms key if present.
+   * @throws IOException any IOE.
+   */
+  public static void validateEncryptionFileAttributes(S3AFileSystem fs,
+      Path path,
+      String algorithm,
+      Optional<String> kmsKey) throws IOException {
+    Map<String, byte[]> xAttrs = fs.getXAttrs(path);
+    Assertions.assertThat(HeaderProcessing.decodeBytes(xAttrs.get(XA_SERVER_SIDE_ENCRYPTION)))

Review Comment:
   It seems `extracting` is not implemented for a primitive type like byte[] 
here; it is only implemented for maps and objects. 
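
   A minimal sketch of the workaround this implies (assuming AssertJ's 
`Assertions` and S3A's `HeaderProcessing` from the diff above; the helper 
itself is illustrative, not part of the patch): decode the byte[] attribute 
to a String first, then assert on the decoded value.

{code:java}
import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.s3a.S3AFileSystem;
import org.apache.hadoop.fs.s3a.impl.HeaderProcessing;
import org.assertj.core.api.Assertions;

final class EncryptionXAttrAsserts {
  private EncryptionXAttrAsserts() {
  }

  // extracting() is not available for byte[] values, so decode the raw
  // attribute bytes to a String before asserting.
  static void assertEncryptionAttribute(S3AFileSystem fs, Path path,
      String xattrName, String expected) throws IOException {
    Map<String, byte[]> xAttrs = fs.getXAttrs(path);
    String actual = HeaderProcessing.decodeBytes(xAttrs.get(xattrName));
    Assertions.assertThat(actual)
        .describedAs("attribute %s of %s", xattrName, path)
        .isEqualTo(expected);
  }
}
{code}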





> fs.getXattrs(path) for S3FS doesn't have 
> x-amz-server-side-encryption-aws-kms-key-id header.
> 
>
> Key: HADOOP-19013
> URL: https://issues.apache.org/jira/browse/HADOOP-19013
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>
> Once a path has been encrypted with SSE-KMS and a key id during upload, 
> reading the attributes of the same file later does not return the key id 
> as an attribute. Should we add it?
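
A minimal probe of the gap described above. The xattr name here assumes 
S3A's "header." prefix convention for response headers; it is an assumption 
for illustration, not confirmed by this thread.

{code:java}
import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.s3a.S3AFileSystem;

final class KmsKeyXAttrProbe {
  private KmsKeyXAttrProbe() {
  }

  // Assumed xattr name: S3A exposes response headers under the "header." prefix.
  static final String XA_KMS_KEY_ID =
      "header.x-amz-server-side-encryption-aws-kms-key-id";

  static byte[] readKmsKeyId(S3AFileSystem fs, Path path) throws IOException {
    Map<String, byte[]> xAttrs = fs.getXAttrs(path);
    // Per this issue, the entry comes back missing (null) even for SSE-KMS
    // objects written with an explicit key id.
    return xAttrs.get(XA_KMS_KEY_ID);
  }
}
{code}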



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18786) Hadoop build depends on archives.apache.org

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844718#comment-17844718
 ] 

ASF GitHub Bot commented on HADOOP-18786:
-

ctubbsii commented on PR #5789:
URL: https://github.com/apache/hadoop/pull/5789#issuecomment-2101037371

   This was previously approved, and I've answered all the questions raised. I 
just resolved the merge conflicts from upstream, where some lines got moved 
around in the Dockerfile for Windows. Is anybody willing to merge this?




> Hadoop build depends on archives.apache.org
> ---
>
> Key: HADOOP-18786
> URL: https://issues.apache.org/jira/browse/HADOOP-18786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Christopher Tubbs
>Priority: Critical
>  Labels: pull-request-available
>
> Several times throughout Hadoop's source, the ASF archive is referenced, 
> including part of the build that downloads Yetus.
> Building a release from source should not require access to the ASF archives, 
> as that contributes to end users being subject to throttling and blocking by 
> INFRA, for "abuse" of the archives, even though they are merely building a 
> current ASF release from source. This is particularly problematic for 
> downstream packagers who must build from Hadoop's source, for CI/CD 
> setups that depend on Hadoop's source, and especially for end users behind a 
> NAT gateway, because even if Hadoop's use of the archive is modest, it adds 
> up across multiple users.
> The build should be modified so that it does not require access to fixed 
> versions in the archives (or should work with the upstream of those dependent 
> projects to publish their releases elsewhere, for routine consumption). In 
> the interim, the source could be updated to point to the current dependency 
> versions available on downloads.apache.org.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19156) ZooKeeper based state stores use different ZK address configs

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844664#comment-17844664
 ] 

ASF GitHub Bot commented on HADOOP-19156:
-

Hexiaoqiao commented on PR #6767:
URL: https://github.com/apache/hadoop/pull/6767#issuecomment-2100603374

   @liubin101 Almost looks good to me. We also need to update ResourceManagerHA.md 
/ Federation.md / ResourceManagerRestart.md with the ZK config guidance. Thanks.




> ZooKeeper based state stores use different ZK address configs
> -
>
> Key: HADOOP-19156
> URL: https://issues.apache.org/jira/browse/HADOOP-19156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: liu bin
>Priority: Major
>  Labels: pull-request-available
>
> Currently, the ZooKeeper-based state stores of RM, YARN Federation, and HDFS 
> Federation all use the same ZK address config {{hadoop.zk.address}}. But in 
> our production environment, we want different services to be able to use 
> different ZK ensembles to avoid mutual interference.
> This jira adds separate ZK address configs for each service.
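
A minimal sketch of the fallback pattern such a change implies (the 
per-service key name below is purely illustrative; only hadoop.zk.address 
is an existing config key):

{code:java}
import org.apache.hadoop.conf.Configuration;

final class ZkAddressResolution {
  private ZkAddressResolution() {
  }

  // Hypothetical per-service key; the actual names are defined by the patch.
  static final String RM_ZK_ADDRESS =
      "yarn.resourcemanager.zk-state-store.address";

  static String resolveRmZkAddress(Configuration conf) {
    // Prefer the service-specific address, falling back to the shared one
    // so existing deployments keep working unchanged.
    return conf.get(RM_ZK_ADDRESS, conf.get("hadoop.zk.address"));
  }
}
{code}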



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18610) ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844617#comment-17844617
 ] 

ASF GitHub Bot commented on HADOOP-18610:
-

hadoop-yetus commented on PR #6787:
URL: https://github.com/apache/hadoop/pull/6787#issuecomment-2100265963

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 02s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 00s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 00s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 00s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m 00s |  |  markdownlint was not available.  
|
   | +0 :ok: |  yamllint  |   0m 00s |  |  yamllint was not available.  |
   | +1 :green_heart: |  @author  |   0m 01s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m 00s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  86m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 47s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   4m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 33s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 144m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 09s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 09s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 00s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 04s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 03s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 149m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   5m 15s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 403m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6787 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
yamllint |
   | uname | MINGW64_NT-10.0-17763 710935166a77 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / 20a660a9d8d870f2efaba5d732d04cb3aaa1146b |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6787/1/testReport/
 |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6787/1/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS
> -
>
> Key: HADOOP-18610
> URL: https://issues.apache.org/jira/browse/HADOOP-18610
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.3.4
>Reporter: Haifeng Chen
>Assignee: Anuj Modi
>Priority: Critical
>  Labels: pull-request-available
> Attachments: HADOOP-18610-preview.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In Jan 2023, Microsoft Azure AKS replaced its original pod-managed identity 
> with [Azure Active Directory (Azure AD) workload 
> identities|https://learn.microsoft.com/en-us/azure/active-directory/develop/workload-identities-overview]
>  (preview), which integrate with the Kubernetes native capabilities to 
> federate with any external identity providers. This approach is simpler to 
> use and deploy.
> Refer to 
> [https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview|https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview.]
>  and [ht

[jira] [Commented] (HADOOP-19167) Change of Codec configuration does not work

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844616#comment-17844616
 ] 

ASF GitHub Bot commented on HADOOP-19167:
-

hadoop-yetus commented on PR #6798:
URL: https://github.com/apache/hadoop/pull/6798#issuecomment-2100264516

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   8m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  11m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  11m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  10m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  10m 42s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  29m  2s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  18m 30s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6798/3/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 159m 52s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.metrics2.source.TestJvmMetrics |
   |   | hadoop.net.TestSocketIOWithTimeout |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6798/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6798 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux ee36b7172d7f 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0c231ab8661c638138d9c402f2dffc493cde1aef |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6798/3/testReport/ |
   | Max. process+thread count | 1269 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844615#comment-17844615
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

hadoop-yetus commented on PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#issuecomment-2100256835

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   8m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 42s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   8m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 44s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6803/2/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 13 new + 23 
unchanged - 1 fixed = 36 total (was 24)  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 57s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 139m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6803/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6803 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux abe3780c86f4 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4c3f83ae8ab0bbf066008f320acf05223bfa2bbb |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6803/2/testReport/ |
   | Max. process+thread count | 1272 (vs. ulimit

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844613#comment-17844613
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

vikaskr22 commented on code in PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#discussion_r1593797928


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java:
##
@@ -441,18 +496,14 @@ private void updateCurrentKey() throws IOException {
 LOG.info("Updating the current master key for generating delegation 
tokens");
 /* Create a new currentKey with an estimated expiry date. */
 int newCurrentId;
-synchronized (this) {
-  newCurrentId = incrementCurrentKeyId();
-}
+newCurrentId = incrementCurrentKeyId();
 DelegationKey newKey = new DelegationKey(newCurrentId, System
 .currentTimeMillis()
 + keyUpdateInterval + tokenMaxLifetime, generateSecret());
 //Log must be invoked outside the lock on 'this'
 logUpdateMasterKey(newKey);
-synchronized (this) {
-  currentKey = newKey;
-  storeDelegationKey(currentKey);
-}
+currentKey = newKey;

Review Comment:
   Thanks. It has been incorporated.





> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and the following are our findings:
>  # Around 96% (196 out of 200) of the KMS container threads were in BLOCKED 
> state at the following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then the process crashed.
>  
> {code:java}
> http-nio-9292-exec-200PRIORITY : 5THREAD ID : 0X7F075C157800NATIVE ID : 
> 0X2C87FNATIVE ID (DECIMAL) : 182399STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> All 199 of the 200 threads were blocked at the above point.
> The lock they were waiting for was held by a thread that was trying to 
> createPassword and publish the same on ZK.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org.apache.curator.framework.recipes.shared.SharedCount.trySetCount(SharedCount.java:116)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrSharedCount(ZKDelegationTokenSecretManager.java:586)
> at 
> org

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844612#comment-17844612
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

vikaskr22 commented on code in PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#discussion_r1593797668


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java:
##
@@ -169,21 +172,29 @@ public AbstractDelegationTokenSecretManager(long 
delegationKeyUpdateInterval,
   public void startThreads() throws IOException {
 Preconditions.checkState(!running);
 updateCurrentKey();
-synchronized (this) {
+this.apiLock.writeLock().lock();
+try{

Review Comment:
   Thanks. It has been incorporated.





> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and the following are our findings:
>  # Around 96% (196 out of 200) of the KMS container threads were in BLOCKED 
> state at the following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then the process crashed.
>  
> {code:java}
> http-nio-9292-exec-200PRIORITY : 5THREAD ID : 0X7F075C157800NATIVE ID : 
> 0X2C87FNATIVE ID (DECIMAL) : 182399STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> All 199 of the 200 threads were blocked at the above point.
> The lock they were waiting for was held by a thread that was trying to 
> createPassword and publish the same on ZK.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org.apache.curator.framework.recipes.shared.SharedCount.trySetCount(SharedCount.java:116)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrSharedCount(ZKDelegationTokenSecretManager.java:586)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrementDelegationTokenSeqNum(ZKDelegationTokenSecretManager.java:601)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:402)
> - locked <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.toke

[jira] [Commented] (HADOOP-19161) S3A: option "fs.s3a.performance.flags" to take list of performance flags

2024-05-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844610#comment-17844610
 ] 

ASF GitHub Bot commented on HADOOP-19161:
-

hadoop-yetus commented on PR #6789:
URL: https://github.com/apache/hadoop/pull/6789#issuecomment-2100244716

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 01s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 00s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 00s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 00s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m 00s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m 00s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  93m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 19s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   4m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 13s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 57s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 148m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 00s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 00s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 09s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 163m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   5m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 429m 42s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6789 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | MINGW64_NT-10.0-17763 82fc63c5dfa5 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / feb384f7a19b454c0517f8cadc4a9f6865afcab4 |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6789/1/testReport/
 |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6789/1/console
 |
   | versions | git=2.45.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> S3A: option "fs.s3a.performance.flags" to take list of performance flags
> 
>
> Key: HADOOP-19161
> URL: https://issues.apache.org/jira/browse/HADOOP-19161
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> HADOOP-19072 shows we want to add more optimisations than those of 
> HADOOP-18930.
> * Extending the new optimisations to the existing option is brittle.
> * Adding explicit options for each feature gets complex fast.
> Proposed:
> * A new class S3APerformanceFlags keeps all the flags.
> * It is built from a string[] of values, which can be extracted from 
> getConf().
> * It can also support a "*" option to mean "everything".
> * This class can also be handed off to hasPathCapability() and do the right 
> thing.
> Proposed optimisations
> * create file (we will hook up HADOOP-18930)
> * mkdir (HADOOP-19072)
> * delete (probe for parent path)
> * rename (probe for source path)
>
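
A minimal sketch of the proposed flag holder, based only on the description 
above (the class and flag names mirror the proposal; the parsing details are 
illustrative, not the actual patch):

{code:java}
import java.util.EnumSet;
import java.util.Locale;
import java.util.Set;

final class S3APerformanceFlags {

  // One flag per proposed optimisation listed above.
  enum Flag { CREATE, MKDIR, DELETE, RENAME }

  private final Set<Flag> flags;

  private S3APerformanceFlags(Set<Flag> flags) {
    this.flags = flags;
  }

  /** Build from e.g. conf.getTrimmedStrings("fs.s3a.performance.flags"). */
  static S3APerformanceFlags parse(String... values) {
    EnumSet<Flag> set = EnumSet.noneOf(Flag.class);
    for (String value : values) {
      String v = value.trim();
      if ("*".equals(v)) {
        // "*" means "everything".
        return new S3APerformanceFlags(EnumSet.allOf(Flag.class));
      }
      set.add(Flag.valueOf(v.toUpperCase(Locale.ROOT)));
    }
    return new S3APerformanceFlags(set);
  }

  /** Backs hasPathCapability()-style probes. */
  boolean enabled(Flag flag) {
    return flags.contains(flag);
  }
}
{code}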

[jira] [Moved] (HADOOP-19167) Change of Codec configuration does not work

2024-05-08 Thread Yiqun Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin moved HDFS-17510 to HADOOP-19167:
---

Component/s: compress
 (was: compress)
Key: HADOOP-19167  (was: HDFS-17510)
Project: Hadoop Common  (was: Hadoop HDFS)

> Change of Codec configuration does not work
> ---
>
> Key: HADOOP-19167
> URL: https://issues.apache.org/jira/browse/HADOOP-19167
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: compress
>Reporter: Zhikai Hu
>Priority: Minor
>  Labels: pull-request-available
>
> In one of my projects, I need to dynamically adjust the compression level for 
> different files. 
> However, I found that in most cases the new compression level does not take 
> effect as expected; the old compression level continues to be used.
> Here is the relevant code snippet:
> {code:java}
> ZStandardCodec zStandardCodec = new ZStandardCodec();
> zStandardCodec.setConf(conf);
> conf.set("io.compression.codec.zstd.level", "5"); // level may change dynamically
> conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
> writer = SequenceFile.createWriter(conf,
>     SequenceFile.Writer.file(sequenceFilePath),
>     SequenceFile.Writer.keyClass(LongWritable.class),
>     SequenceFile.Writer.valueClass(BytesWritable.class),
>     SequenceFile.Writer.compression(CompressionType.BLOCK));
> {code}
> The reason is that the SequenceFile.Writer.init() method calls 
> CodecPool.getCompressor(codec, null) to get a compressor. 
> If the compressor is a reused instance, the conf is not applied because it is 
> passed as null:
> {code:java}
> public static Compressor getCompressor(CompressionCodec codec, Configuration conf) {
>   Compressor compressor = borrow(compressorPool, codec.getCompressorType());
>   if (compressor == null) {
>     compressor = codec.createCompressor();
>     LOG.info("Got brand-new compressor [" + codec.getDefaultExtension() + "]");
>   } else {
>     compressor.reinit(conf);   // conf is null here
>     ..
> {code}
> Please also refer to my unit test to reproduce the bug. 
> To address this bug, I modified the code to ensure that the configuration is 
> read back from the codec when a compressor is reused.
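
A minimal sketch of the fix direction the reporter describes, i.e. reading 
the configuration back from the codec when a pooled compressor is reused 
(the helper class is hypothetical; the real change would live in CodecPool):

{code:java}
import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.Compressor;

final class CompressorReuse {
  private CompressorReuse() {
  }

  // Hypothetical helper: when a pooled compressor is reused and no conf was
  // passed in, fall back to the codec's own configuration so dynamic settings
  // such as io.compression.codec.zstd.level take effect.
  static void reinitOnReuse(Compressor compressor, CompressionCodec codec,
      Configuration conf) {
    if (conf == null && codec instanceof Configurable) {
      conf = ((Configurable) codec).getConf();
    }
    compressor.reinit(conf);
  }
}
{code}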



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844534#comment-17844534
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

hadoop-yetus commented on PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#issuecomment-2099774430

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 00s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 01s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 01s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 01s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m 00s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m 00s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  | 112m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  48m 25s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   5m 17s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   5m 00s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6803/1/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  javadoc  |   5m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 172m 01s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 06s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  44m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  44m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 00s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   5m 27s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   5m 04s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6803/1/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   5m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 190m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   7m 16s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 591m 12s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6803 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | MINGW64_NT-10.0-17763 94506f5a0dcd 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / cd01f45cfe7f9941b3d082f7d6cc26840edd2e09 |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6803/1/testReport/
 |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6803/1/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and the following are our findings:
>  # Around 96% (196

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844523#comment-17844523
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

ChenSammi commented on code in PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#discussion_r1593369308


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java:
##
@@ -169,21 +172,29 @@ public AbstractDelegationTokenSecretManager(long 
delegationKeyUpdateInterval,
   public void startThreads() throws IOException {
 Preconditions.checkState(!running);
 updateCurrentKey();
-synchronized (this) {
+this.apiLock.writeLock().lock();
+try{

Review Comment:
   A blank space is required between try and "{", and between "}" and finally. 
   
   Kindly check for this checkstyle issue in all the new code. 





> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and the following are our findings:
>  # Around 96% (196 out of 200) of the KMS container threads were in BLOCKED 
> state at the following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then the process crashed.
>  
> {code:java}
> http-nio-9292-exec-200PRIORITY : 5THREAD ID : 0X7F075C157800NATIVE ID : 
> 0X2C87FNATIVE ID (DECIMAL) : 182399STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> All 199 of the 200 threads were blocked at the above point.
> The lock they were waiting for was held by a thread that was trying to 
> createPassword and publish the same on ZK.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org.apache.curator.framework.recipes.shared.SharedCount.trySetCount(SharedCount.java:116)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrSharedCount(ZKDelegationTokenSecretManager.java:586)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrementDelegationTokenSeqNum(ZKDelegationTokenSecretManager.java:601)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:402)
> - locked <0x0005f2f545e8> (a 
> org.apache.hadoop.sec

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844522#comment-17844522
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

ChenSammi commented on code in PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#discussion_r1593368398


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java:
##
@@ -120,12 +121,12 @@ private String formatTokenId(TokenIdent id) {
   /**
* Access to currentKey is protected by this object lock
*/
-  private DelegationKey currentKey;
+  private volatile  DelegationKey currentKey;

Review Comment:
   This volatile keyword can be removed, since all operations on currentKey 
will be protected by the lock. 





> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and the following are our findings:
>  # Around 96% (196 out of 200) of the KMS container threads were in BLOCKED 
> state at the following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then the process crashed.
>  
> {code:java}
> http-nio-9292-exec-200PRIORITY : 5THREAD ID : 0X7F075C157800NATIVE ID : 
> 0X2C87FNATIVE ID (DECIMAL) : 182399STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> All 199 of the 200 threads were blocked at the above point.
> The lock they were waiting for was held by a thread that was trying to 
> createPassword and publish the same on ZK.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org.apache.curator.framework.recipes.shared.SharedCount.trySetCount(SharedCount.java:116)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrSharedCount(ZKDelegationTokenSecretManager.java:586)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrementDelegationTokenSeqNum(ZKDelegationTokenSecretManager.java:601)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:402)
> - locked <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegati

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844521#comment-17844521
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

ChenSammi commented on code in PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#discussion_r1593339527


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java:
##
@@ -441,18 +496,14 @@ private void updateCurrentKey() throws IOException {
 LOG.info("Updating the current master key for generating delegation 
tokens");
 /* Create a new currentKey with an estimated expiry date. */
 int newCurrentId;
-synchronized (this) {
-  newCurrentId = incrementCurrentKeyId();
-}
+newCurrentId = incrementCurrentKeyId();
 DelegationKey newKey = new DelegationKey(newCurrentId, System
 .currentTimeMillis()
 + keyUpdateInterval + tokenMaxLifetime, generateSecret());
 //Log must be invoked outside the lock on 'this'
 logUpdateMasterKey(newKey);
-synchronized (this) {
-  currentKey = newKey;
-  storeDelegationKey(currentKey);
-}
+currentKey = newKey;

Review Comment:
   We need to keep this synchronized (or use the Lock), to protect the 
currentKey change and storeDelegationKey as one atomic operation. 
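
   A minimal sketch of what that suggestion implies under the PR's 
read/write-lock scheme (field and method names are illustrative, not the 
final patch):

{code:java}
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

final class KeyRotationSketch {
  private final ReadWriteLock apiLock = new ReentrantReadWriteLock();
  private volatile Object currentKey;

  void rotate(Object newKey) {
    // Take the write lock so publishing the new key and persisting it
    // are observed as one atomic step by readers of currentKey.
    apiLock.writeLock().lock();
    try {
      currentKey = newKey;
      storeDelegationKey(newKey);
    } finally {
      apiLock.writeLock().unlock();
    }
  }

  private void storeDelegationKey(Object key) {
    // Persist to the backing store (ZK, etc.) in the real implementation.
  }
}
{code}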





> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and the following are our findings:
>  # Around 96% (196 out of 200) of the KMS container threads were in BLOCKED 
> state at the following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then the process crashed.
>  
> {code:java}
> http-nio-9292-exec-200PRIORITY : 5THREAD ID : 0X7F075C157800NATIVE ID : 
> 0X2C87FNATIVE ID (DECIMAL) : 182399STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> All 199 of the 200 threads were blocked at the above point.
> The lock they were waiting for was held by a thread that was trying to 
> createPassword and publish the same on ZK.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org.apache.curator.framework.recipes.shared.SharedCount.trySetCount(SharedCount.java:116)
> at 
> org.apache.hadoop.security.token.delegation.

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844520#comment-17844520
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

ChenSammi commented on code in PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#discussion_r1593339527


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java:
##
@@ -441,18 +496,14 @@ private void updateCurrentKey() throws IOException {
 LOG.info("Updating the current master key for generating delegation 
tokens");
 /* Create a new currentKey with an estimated expiry date. */
 int newCurrentId;
-synchronized (this) {
-  newCurrentId = incrementCurrentKeyId();
-}
+newCurrentId = incrementCurrentKeyId();
 DelegationKey newKey = new DelegationKey(newCurrentId, System
 .currentTimeMillis()
 + keyUpdateInterval + tokenMaxLifetime, generateSecret());
 //Log must be invoked outside the lock on 'this'
 logUpdateMasterKey(newKey);
-synchronized (this) {
-  currentKey = newKey;
-  storeDelegationKey(currentKey);
-}
+currentKey = newKey;

Review Comment:
   We need to keep this synchronized (or use the Lock), to protect the 
currentKey change and storeDelegationKey as one atomic operation.





> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and the following are our findings:
>  # Around 96% (196 out of 200) of the KMS container threads were in BLOCKED 
> state at the following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then the process crashed.
>  
> {code:java}
> http-nio-9292-exec-200PRIORITY : 5THREAD ID : 0X7F075C157800NATIVE ID : 
> 0X2C87FNATIVE ID (DECIMAL) : 182399STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> All of the remaining 199 out of 200 threads were blocked at the above point,
> and the lock they were waiting for was held by a thread that was trying to 
> createPassword and publish the same to ZK.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org.apache.curator.framework.recipes.shared.SharedCount.trySetCount(SharedCount.java:116)
> at 
> org.apache.hadoop.security.token.delegation.

[jira] [Commented] (HADOOP-19156) ZooKeeper based state stores use different ZK address configs

2024-05-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844511#comment-17844511
 ] 

ASF GitHub Bot commented on HADOOP-19156:
-

hadoop-yetus commented on PR #6767:
URL: https://github.com/apache/hadoop/pull/6767#issuecomment-2099680632

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |   0m 22s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6767/5/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/branch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6767/5/artifact/out/branch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1. 
 |
   | -1 :x: |  compile  |   0m 22s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6767/5/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  root in trunk failed with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/buildtool-branch-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6767/5/artifact/out/buildtool-branch-checkstyle-root.txt)
 |  The patch fails to run checkstyle in root  |
   | -1 :x: |  mvnsite  |   0m 23s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6767/5/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 22s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6767/5/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 22s | 
[/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6767/5/artifact/out/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt)
 |  hadoop-yarn-api in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 22s | 
[/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6767/5/artifact/out/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt)
 |  hadoop-yarn-common in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 23s | 
[/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6767/5/artifact/out/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt)
 |  hadoop-yarn-server-common in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 22s | 
[/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6767/5/artifact/out/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6767/5/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6767/5/artifact/out/branch-javadoc-hadoop

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844509#comment-17844509
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

ChenSammi commented on code in PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#discussion_r1593339527


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java:
##
@@ -441,18 +496,14 @@ private void updateCurrentKey() throws IOException {
 LOG.info("Updating the current master key for generating delegation 
tokens");
 /* Create a new currentKey with an estimated expiry date. */
 int newCurrentId;
-synchronized (this) {
-  newCurrentId = incrementCurrentKeyId();
-}
+newCurrentId = incrementCurrentKeyId();
 DelegationKey newKey = new DelegationKey(newCurrentId, System
 .currentTimeMillis()
 + keyUpdateInterval + tokenMaxLifetime, generateSecret());
 //Log must be invoked outside the lock on 'this'
 logUpdateMasterKey(newKey);
-synchronized (this) {
-  currentKey = newKey;
-  storeDelegationKey(currentKey);
-}
+currentKey = newKey;

Review Comment:
   We need to keep this synchronized, or use a Lock, to protect the currentKey 
change and storeKey as one operation.





> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and the following are our findings:
>  # Around 96% (196 out of 200) KMS container threads were in BLOCKED state at 
> following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then process crashed.
>  
> {code:java}
> http-nio-9292-exec-200
> PRIORITY : 5
> THREAD ID : 0X7F075C157800
> NATIVE ID : 0X2C87F
> NATIVE ID (DECIMAL) : 182399
> STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> All of the remaining 199 out of 200 threads were blocked at the above point,
> and the lock they were waiting for was held by a thread that was trying to 
> createPassword and publish the same to ZK.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org.apache.curator.framework.recipes.shared.SharedCount.trySetCount(SharedCount.java:116)
> at 
> org.apache.hadoop.security.token.delegation.

[jira] [Commented] (HADOOP-19156) ZooKeeper based state stores use different ZK address configs

2024-05-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844493#comment-17844493
 ] 

ASF GitHub Bot commented on HADOOP-19156:
-

liubin101 commented on code in PR #6767:
URL: https://github.com/apache/hadoop/pull/6767#discussion_r1593306172


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/curator/ZKCuratorManager.java:
##
@@ -135,29 +135,51 @@ public void start() throws IOException{
* @param authInfos List of authentication keys.
* @throws IOException If the connection cannot be started.
*/
-  public void start(List<AuthInfo> authInfos) throws IOException {
+  public void start(List<AuthInfo> authInfos) throws IOException{

Review Comment:
   Thank you very much for your help, I will fix it.





> ZooKeeper based state stores use different ZK address configs
> -
>
> Key: HADOOP-19156
> URL: https://issues.apache.org/jira/browse/HADOOP-19156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: liu bin
>Priority: Major
>  Labels: pull-request-available
>
> Currently, the ZooKeeper-based state stores of RM, YARN Federation, and HDFS 
> Federation use the same ZK address config {{hadoop.zk.address}}. But in 
> our production environment, we want different services to be able to use 
> different ZK clusters so that they do not affect each other.
> This jira adds a separate ZK address config for each service.
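
A sketch of the fallback pattern this implies. Only {{hadoop.zk.address}} is 
taken from the issue; the per-service key name below is hypothetical:

{code:java}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Hypothetical service-specific key, falling back to the shared address.
String rmZkAddress = conf.get("yarn.resourcemanager.zk-address",
    conf.get("hadoop.zk.address"));
{code}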



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19166) [DOC] Drop Migrating from Apache Hadoop 1.x to Apache Hadoop 2.x

2024-05-07 Thread Ayush Saxena (Jira)
Ayush Saxena created HADOOP-19166:
-

 Summary: [DOC] Drop Migrating from Apache Hadoop 1.x to Apache 
Hadoop 2.x
 Key: HADOOP-19166
 URL: https://issues.apache.org/jira/browse/HADOOP-19166
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ayush Saxena


Reading the docs, I found this page, which is pretty irrelevant in the current 
context and for upcoming 3.x releases; we can explore dropping it:

https://apache.github.io/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19165) Explore dropping protobuf 2.5.0 from the distro

2024-05-07 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-19165:
--
Description: 
Explore if protobuf-2.5.0 can be dropped from the distro; it is a transitive 
dependency from HBase, but HBase doesn't use it in the code.

Check if HBase is the only thing pulling it into the distro & whether something 
will break if we exclude it; if nothing does, let's get rid of it.

  was:
explore if protobuf-2.5.0 can be dropped from distro, it is a transitive 
dependency from HBase, but HBase doesn't use it in the code.

Check if it is the only one pulling it into the distro & if things break we 
exclude that, if none let get rid of it


> Explore dropping protobuf 2.5.0 from the distro
> ---
>
> Key: HADOOP-19165
> URL: https://issues.apache.org/jira/browse/HADOOP-19165
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Priority: Major
>
> Explore if protobuf-2.5.0 can be dropped from the distro; it is a transitive 
> dependency from HBase, but HBase doesn't use it in the code.
> Check if HBase is the only thing pulling it into the distro & whether something 
> will break if we exclude it; if nothing does, let's get rid of it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19165) Explore dropping protobuf 2.5.0 from the distro

2024-05-07 Thread Ayush Saxena (Jira)
Ayush Saxena created HADOOP-19165:
-

 Summary: Explore dropping protobuf 2.5.0 from the distro
 Key: HADOOP-19165
 URL: https://issues.apache.org/jira/browse/HADOOP-19165
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ayush Saxena


explore if protobuf-2.5.0 can be dropped from distro, it is a transitive 
dependency from HBase, but HBase doesn't use it in the code.

Check if it is the only one pulling it into the distro & if things break we 
exclude that, if none let get rid of it



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18679) Add API for bulk/paged object deletion

2024-05-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844403#comment-17844403
 ] 

ASF GitHub Bot commented on HADOOP-18679:
-

steveloughran commented on code in PR #6726:
URL: https://github.com/apache/hadoop/pull/6726#discussion_r1592834729


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java:
##
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.wrappedio;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.fs.BulkDelete;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+/**
+ * Reflection-friendly access to APIs which are not available in
+ * some of the older Hadoop versions which libraries still
+ * compile against.
+ * <p>
+ * The intent is to avoid the need for complex reflection operations
+ * including wrapping of parameter classes, direct instantiation of
+ * new classes etc.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public final class WrappedIO {
+
+  private WrappedIO() {
+  }
+
+  /**
+   * Get the maximum number of objects/files to delete in a single request.
+   * @param fs filesystem
+   * @param path path to delete under.
+   * @return a number greater than or equal to zero.
+   * @throws UnsupportedOperationException bulk delete under that path is not 
supported.
+   * @throws IllegalArgumentException path not valid.
+   * @throws IOException problems resolving paths
+   */
+  public static int bulkDeletePageSize(FileSystem fs, Path path) throws 
IOException {
+try (BulkDelete bulk = fs.createBulkDelete(path)) {
+  return bulk.pageSize();
+}
+  }
+
+  /**
+   * Delete a list of files/objects.
+   * <ul>
+   *   <li>Files must be under the path provided in {@code base}.</li>
+   *   <li>The size of the list must be equal to or less than the page size.</li>
+   *   <li>Directories are not supported; the outcome of attempting to delete
+   *   directories is undefined (ignored; undetected, listed as failures...).</li>
+   *   <li>The operation is not atomic.</li>
+   *   <li>The operation is treated as idempotent: network failures may
+   *   trigger resubmission of the request -any new objects created under a
+   *   path in the list may then be deleted.</li>
+   *   <li>There is no guarantee that any parent directories exist after this call.</li>
+   * </ul>
+   * @param fs filesystem
+   * @param base path to delete under.
+   * @param paths list of paths which must be absolute and under the base path.
+   * @return a list of all the paths which couldn't be deleted for a reason 
other than "not found" and any associated error message.
+   * @throws UnsupportedOperationException bulk delete under that path is not 
supported.
+   * @throws IOException IO problems including networking, authentication and 
more.
+   * @throws IllegalArgumentException if a path argument is invalid.
+   */
+  public static List<Map.Entry<Path, String>> bulkDelete(FileSystem fs,

Review Comment:
   rename bulkDelete_delete
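
   For reference, a hedged caller-side sketch of the API quoted above; the 
bucket and file names are illustrative, and the method may yet be renamed as 
suggested in the comment:

{code:java}
// Sketch only. Assumes the usual Hadoop imports (FileSystem, Path,
// Configuration) plus java.util.* and java.net.URI.
FileSystem fs = FileSystem.get(URI.create("s3a://bucket/"), new Configuration());
Path base = new Path("s3a://bucket/table");
int pageSize = WrappedIO.bulkDeletePageSize(fs, base);
List<Path> paths = Arrays.asList(
    new Path(base, "data/part-0000.parquet"),
    new Path(base, "data/part-0001.parquet"));
// Each request must stay within the page size; failures come back as
// (path, error message) entries.
List<Map.Entry<Path, String>> failures =
    WrappedIO.bulkDelete(fs, base, paths.subList(0, Math.min(pageSize, paths.size())));
{code}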



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/wrappedio/WrappedIO.java:
##
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the Li

[jira] [Commented] (HADOOP-19161) S3A: option "fs.s3a.performance.flags" to take list of performance flags

2024-05-07 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844398#comment-17844398
 ] 

Steve Loughran commented on HADOOP-19161:
-

HADOOP-18544 is the delete optimisation. We know this is very brittle, so we 
could maybe split out

delete-no-parent-recreate

to distinguish it from any future delete optimisations, such as skipping a LIST 
for delete(recursive=false).

> S3A: option "fs.s3a.performance.flags" to take list of performance flags
> 
>
> Key: HADOOP-19161
> URL: https://issues.apache.org/jira/browse/HADOOP-19161
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> HADOOP-19072 shows we want to add more optimisations than those of 
> HADOOP-18930.
> * Extending the new optimisations to the existing option is brittle.
> * Adding explicit options for each feature gets complex fast.
> Proposed:
> * A new class S3APerformanceFlags keeps all the flags.
> * It builds this from a string[] of values, which can be extracted from 
> getConf(),
> * and it can also support a "*" option to mean "everything".
> * This class can also be handed off to hasPathCapability() and do the right 
> thing.
> Proposed optimisations:
> * create file (we will hook up HADOOP-18930)
> * mkdir (HADOOP-19072)
> * delete (probe for parent path)
> * rename (probe for source path)
> We could think of more, with different names, later.
> The goal is to make it possible to strip out every HTTP request we do for 
> safety/POSIX compliance, so applications have the option of turning off what 
> they don't need.
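
A hedged sketch of the proposed flag parsing; the option name and flag values 
come from the proposal above, while the variable names and parsing style are 
illustrative, not the final S3APerformanceFlags implementation:

{code:java}
// Assumes an org.apache.hadoop.conf.Configuration 'conf' and java.util.* imports.
Set<String> flags = new HashSet<>(
    Arrays.asList(conf.getTrimmedStrings("fs.s3a.performance.flags")));
boolean everything = flags.contains("*");   // "*" means "everything"
boolean createFast = everything || flags.contains("create");
boolean mkdirFast  = everything || flags.contains("mkdir");
boolean deleteFast = everything || flags.contains("delete");
boolean renameFast = everything || flags.contains("rename");
{code}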



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844282#comment-17844282
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

hadoop-yetus commented on PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#issuecomment-2098357532

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   8m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   8m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 39s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6803/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 13 new + 23 
unchanged - 1 fixed = 36 total (was 24)  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 52s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 139m 29s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6803/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6803 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 963820cfbb47 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cd01f45cfe7f9941b3d082f7d6cc26840edd2e09 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6803/1/testReport/ |
   | Max. process+thread count | 3152 (vs. ulimit

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844244#comment-17844244
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

vikaskr22 commented on PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#issuecomment-2098067781

   @ChenSammi , Can you review the changes and provide your input ? Thanks.




> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and the following are our findings:
>  # Around 96% (196 out of 200) KMS container threads were in BLOCKED state at 
> following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then process crashed.
>  
> {code:java}
> http-nio-9292-exec-200
> PRIORITY : 5
> THREAD ID : 0X7F075C157800
> NATIVE ID : 0X2C87F
> NATIVE ID (DECIMAL) : 182399
> STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> All of the remaining 199 out of 200 threads were blocked at the above point,
> and the lock they were waiting for was held by a thread that was trying to 
> createPassword and publish the same to ZK.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org.apache.curator.framework.recipes.shared.SharedCount.trySetCount(SharedCount.java:116)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrSharedCount(ZKDelegationTokenSecretManager.java:586)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrementDelegationTokenSeqNum(ZKDelegationTokenSecretManager.java:601)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:402)
> - locked <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:48)
> at org.apache.hadoop.security.token.Token.<init>(Token.java:67)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.createToken(DelegationTokenManager.java:183)
>  {code}
> We can say that this thread was slow and blocked all the rest, but the 
> following is my observation:
>  
>

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844239#comment-17844239
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

vikaskr22 opened a new pull request, #6803:
URL: https://github.com/apache/hadoop/pull/6803

   
   
   ### Description of PR
   AbstractDelegationTokenSecretManager's methods are all synchronized and 
block each other; even multiple reader threads block one another. 
This PR is an effort towards optimising the synchronization contexts.
   For detailed description, please go through the discussion on 
https://issues.apache.org/jira/browse/HADOOP-18851
   
   ### How was this patch tested?
   The build is working fine. This is more of a logical change (where to 
acquire or release a lock), so it requires careful review. There is no 
functional logic change.
   
   ### For code changes:
   
   - [ Y] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [N ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and the following are our findings:
>  # Around 96% (196 out of 200) KMS container threads were in BLOCKED state at 
> following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then process crashed.
>  
> {code:java}
> http-nio-9292-exec-200
> PRIORITY : 5
> THREAD ID : 0X7F075C157800
> NATIVE ID : 0X2C87F
> NATIVE ID (DECIMAL) : 182399
> STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> All of the remaining 199 out of 200 threads were blocked at the above point,
> and the lock they were waiting for was held by a thread that was trying to 
> createPassword and publish the same to ZK.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org

[jira] [Commented] (HADOOP-18958) UserGroupInformation debug log improve

2024-05-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844152#comment-17844152
 ] 

ASF GitHub Bot commented on HADOOP-18958:
-

hadoop-yetus commented on PR #6255:
URL: https://github.com/apache/hadoop/pull/6255#issuecomment-2097535663

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  17m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  18m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  17m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 16s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 243m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6255/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6255 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9032ae8deda5 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 090058953780461aff5f7f604d9af538e37b355b |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6255/5/testReport/ |
   | Max. process+thread count | 1237 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6255/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> UserGroupInformat

[jira] [Reopened] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-07 Thread Sammi Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen reopened HADOOP-18851:
-

Revert the previous commit which removes the synchronized keywords. Will have a 
new implementation using ReentrantReadWriteLock. 
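
A hedged sketch of that direction; the method signatures follow the class under 
discussion, but the lock field and the bodies are illustrative only:

{code:java}
// Sketch only (java.util.concurrent.locks.ReentrantReadWriteLock).
private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

// Verification only reads state, so many threads can proceed concurrently
// under the shared read lock instead of serializing on 'this'.
public void verifyToken(TokenIdent identifier, byte[] password) throws InvalidToken {
  rwLock.readLock().lock();
  try {
    // look up the identifier in the token map and compare passwords
  } finally {
    rwLock.readLock().unlock();
  }
}

// Mutations (new master keys, new tokens) take the exclusive write lock.
protected byte[] createPassword(TokenIdent identifier) {
  rwLock.writeLock().lock();
  try {
    // allocate a sequence number, build the password, store the token
    return new byte[0]; // placeholder in this sketch
  } finally {
    rwLock.writeLock().unlock();
  }
}
{code}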

> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and the following are our findings:
>  # Around 96% (196 out of 200) KMS container threads were in BLOCKED state at 
> following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then process crashed.
>  
> {code:java}
> http-nio-9292-exec-200
> PRIORITY : 5
> THREAD ID : 0X7F075C157800
> NATIVE ID : 0X2C87F
> NATIVE ID (DECIMAL) : 182399
> STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> All of the remaining 199 out of 200 threads were blocked at the above point,
> and the lock they were waiting for was held by a thread that was trying to 
> createPassword and publish the same to ZK.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org.apache.curator.framework.recipes.shared.SharedCount.trySetCount(SharedCount.java:116)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrSharedCount(ZKDelegationTokenSecretManager.java:586)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrementDelegationTokenSeqNum(ZKDelegationTokenSecretManager.java:601)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:402)
> - locked <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:48)
> at org.apache.hadoop.security.token.Token.<init>(Token.java:67)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.createToken(DelegationTokenManager.java:183)
>  {code}
> We can say that this thread was slow and blocked all the rest, but the 
> following is my observation:
>  
>  # verifyToken() and createPassword() have been synchronized because one is 
> reading the tokenMap

[jira] [Comment Edited] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-07 Thread Sammi Chen (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844149#comment-17844149
 ] 

Sammi Chen edited comment on HADOOP-18851 at 5/7/24 6:12 AM:
-

Reverted the previous commit which removes the synchronized keywords. Will have 
a new implementation using ReentrantReadWriteLock. 


was (Author: sammi):
Revert the previous commit which removes the synchronized keywords. Will have a 
new implementation using ReentrantReadWriteLock. 

> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and the following are our findings:
>  # Around 96% (196 out of 200) KMS container threads were in BLOCKED state at 
> following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then process crashed.
>  
> {code:java}
> http-nio-9292-exec-200
> PRIORITY : 5
> THREAD ID : 0X7F075C157800
> NATIVE ID : 0X2C87F
> NATIVE ID (DECIMAL) : 182399
> STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> All of the remaining 199 out of 200 threads were blocked at the above point,
> and the lock they were waiting for was held by a thread that was trying to 
> createPassword and publish the same to ZK.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org.apache.curator.framework.recipes.shared.SharedCount.trySetCount(SharedCount.java:116)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrSharedCount(ZKDelegationTokenSecretManager.java:586)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrementDelegationTokenSeqNum(ZKDelegationTokenSecretManager.java:601)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:402)
> - locked <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:48)
> at org.apache.hadoop.security.token.Token.<init>(Token.java:67)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.createToken(DelegationTokenManager.java:

[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-05-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844146#comment-17844146
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

hadoop-yetus commented on PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#issuecomment-2097509435

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 24s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 10s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 128m 21s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6552/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6552 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 300cd269972c 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 820e2e0d0c98b3441a71f0101915990911052959 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6552/16/testReport/ |
   | Max. process+thread count | 705 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci

[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844108#comment-17844108
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

anujmodi2021 commented on PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#issuecomment-2097394199

   > all of which are trivial
   
   Done.
   Thanks




> [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider 
> Implementation
> 
>
> Key: HADOOP-18516
> URL: https://issues.apache.org/jira/browse/HADOOP-18516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
>
> This PR introduces a new configuration for Fixed SAS Tokens: 
> *"fs.azure.sas.fixed.token"*
> Using this new configuration, users can configure a fixed SAS Token in the 
> account settings file itself. Ideally, this should be used with SAS Tokens 
> that are scoped at a container or account level (Service or Account SAS), 
> which can be considered a constant for one account or container over 
> multiple operations.
> The other method of using a SAS Token remains valid as well, where a user 
> provides a custom implementation of the SASTokenProvider interface, through 
> which a SAS Token is obtained.
> When an Account SAS Token is configured as the fixed SAS Token and is used, 
> it is ensured that operations stay within the scope of the SAS Token.
> The code checks whether the fixed token and the token provider class 
> implementation are configured. If both are set, preference is 
> given to the custom SASTokenProvider implementation. It must be noted that if 
> such an implementation provides a SAS Token which has a lower scope than 
> Account SAS, some filesystem and service level operations might be out of 
> scope and may not succeed.
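For illustration, the two wiring routes described above, sketched against the public Hadoop Configuration API. The keys "fs.azure.account.auth.type", "fs.azure.sas.fixed.token" and "fs.azure.sas.token.provider.type" are quoted from this work; com.example.MySasTokenProvider is a hypothetical class:

```java
// Sketch of the two SAS configuration routes described above.
// com.example.MySasTokenProvider is a hypothetical custom provider.
import org.apache.hadoop.conf.Configuration;

final class SasConfigSketch {
  static Configuration fixedTokenConf() {
    Configuration conf = new Configuration();
    conf.set("fs.azure.account.auth.type", "SAS");
    // Route 1: a container- or account-scoped SAS kept in configuration.
    conf.set("fs.azure.sas.fixed.token", "<service-or-account-sas>");
    return conf;
  }

  static Configuration customProviderConf() {
    Configuration conf = new Configuration();
    conf.set("fs.azure.account.auth.type", "SAS");
    // Route 2: a custom provider; it takes precedence if both are set.
    conf.set("fs.azure.sas.token.provider.type",
        "com.example.MySasTokenProvider");
    return conf;
  }
}
```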



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18679) Add API for bulk/paged object deletion

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844057#comment-17844057
 ] 

ASF GitHub Bot commented on HADOOP-18679:
-

mukund-thakur commented on PR #6726:
URL: https://github.com/apache/hadoop/pull/6726#issuecomment-2097047455

   > can you do the same here? some style checker will complain but it will 
help us to separate the methods in the new class.
   
   I don't understand what to do here. 
   




> Add API for bulk/paged object deletion
> --
>
> Key: HADOOP-18679
> URL: https://issues.apache.org/jira/browse/HADOOP-18679
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> Iceberg and HBase could benefit from being able to give a list of individual 
> files to delete, files which may be scattered around the bucket, for better 
> read performance.
> Add a new optional interface for an object store which allows a caller to 
> submit a list of paths to files to delete, where the expectation is:
> * if a path is a file: delete it
> * if a path is a dir: outcome undefined
> For S3 that'd let us build these into DeleteRequest objects and submit them 
> without any probes first.
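A minimal sketch of what such an optional interface could look like; every name and signature here is illustrative, not the API that eventually landed:

```java
// Illustrative sketch only: names and signatures are assumptions, not the
// bulk delete API that was eventually committed to hadoop-common.
import java.io.IOException;
import java.util.Collection;
import java.util.Map;
import org.apache.hadoop.fs.Path;

public interface BulkObjectDelete {
  /**
   * Delete a batch of files in one store round trip where possible.
   * If a path is a file it is deleted; if it is a directory the outcome
   * is undefined, matching the contract sketched above.
   *
   * @param paths paths to delete; they may be scattered around the bucket
   * @return per-path failures; empty when every delete succeeded
   */
  Map<Path, IOException> bulkDelete(Collection<Path> paths) throws IOException;
}
```

For S3, an implementation of this shape could pack the paths straight into DeleteObjects requests without issuing any existence probes first.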



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18610) ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843877#comment-17843877
 ] 

ASF GitHub Bot commented on HADOOP-18610:
-

mukund-thakur commented on code in PR #6787:
URL: https://github.com/apache/hadoop/pull/6787#discussion_r1591518494


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/WorkloadIdentityTokenProvider.java:
##
@@ -0,0 +1,144 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.oauth2;
+
+import java.io.File;
+import java.io.IOException;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.thirdparty.com.google.common.base.Strings;
+import org.apache.hadoop.util.Preconditions;
+
+/**
+ * Provides tokens based on Azure AD Workload Identity.
+ */
+public class WorkloadIdentityTokenProvider extends AccessTokenProvider {
+
+  private static final String OAUTH2_TOKEN_PATH = "/oauth2/v2.0/token";
+  private static final long ONE_HOUR = 3600 * 1000;
+  private static final Logger LOG = 
LoggerFactory.getLogger(AccessTokenProvider.class);
+
+  private final String authEndpoint;
+  private final String clientId;
+  private final String tokenFile;
+  private long tokenFetchTime = -1;
+
+  public WorkloadIdentityTokenProvider(final String authority, final String 
tenantId,
+  final String clientId, final String tokenFile) {
+Preconditions.checkNotNull(authority, "authority");
+Preconditions.checkNotNull(tenantId, "tenantId");
+Preconditions.checkNotNull(clientId, "clientId");
+Preconditions.checkNotNull(tokenFile, "tokenFile");
+
+this.authEndpoint = authority + tenantId + OAUTH2_TOKEN_PATH;
+this.clientId = clientId;
+this.tokenFile = tokenFile;
+  }
+
+  @Override
+  protected AzureADToken refreshToken() throws IOException {
+LOG.debug("AADToken: refreshing token from JWT Assertion");
+String clientAssertion = getClientAssertion();
+AzureADToken token = getTokenUsingJWTAssertion(clientAssertion);
+tokenFetchTime = System.currentTimeMillis();
+return token;
+  }
+
+  /**
+   * Gets the Azure AD token from a client assertion in JWT format.
+   * This method exists to make unit testing possible.
+   *
+   * @param clientAssertion the client assertion.
+   * @return the Azure AD token.
+   * @throws IOException if there is a failure in connecting to Azure AD.
+   */
+  @VisibleForTesting
+  AzureADToken getTokenUsingJWTAssertion(String clientAssertion) throws 
IOException {
+return AzureADAuthenticator
+.getTokenUsingJWTAssertion(authEndpoint, clientId, clientAssertion);
+  }
+
+  /**
+   * Checks if the token is about to expire as per base expiry logic.
+   * Otherwise, try to expire if enough time has elapsed since the last 
refresh.
+   *
+   * @return true if the token is expiring in next 1 hour or if a token has
+   * never been fetched
+   */
+  @Override
+  protected boolean isTokenAboutToExpire() {
+return super.isTokenAboutToExpire() || 
hasEnoughTimeElapsedSinceLastRefresh();
+  }
+
+  /**
+   * Checks to see if enough time has elapsed since the last token refresh.
+   *
+   * @return true if the token was last refreshed more than an hour ago.
+   */
+  protected boolean hasEnoughTimeElapsedSinceLastRefresh() {
+if (getTokenFetchTime() == -1) {
+  return true;
+}
+boolean expiring = false;
+long elapsedTimeSinceLastTokenRefreshInMillis =
+System.currentTimeMillis() - getTokenFetchTime();
+// In case token is not refreshed for 1 hr or any clock skew issues,
+// refresh token.
+expiring = elapsedTimeSinceLastTokenRefreshInMillis >= ONE_HOUR

Review Comment:
   Any specific reason for choosing 1 hour refresh interval?



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/WorkloadIdentityTokenProvider.java:
##
@@ -0,0 +1,144 @@
+/**
+ * Licensed to the Apache S

[jira] [Commented] (HADOOP-18958) UserGroupInformation debug log improve

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843831#comment-17843831
 ] 

ASF GitHub Bot commented on HADOOP-18958:
-

hadoop-yetus commented on PR #6255:
URL: https://github.com/apache/hadoop/pull/6255#issuecomment-209954

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  18m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  42m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  19m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  19m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 38s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m  3s | 
[/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6255/4/artifact/out/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  
hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  42m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 10s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 253m 34s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6255/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6255 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9429606c9a88 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ac6b742847623e6701c570cafeb5d672c5f14009 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6255/4/testReport/ |
   | Max. process+thread count | 1234 (vs

[jira] [Commented] (HADOOP-18958) UserGroupInformation debug log improve

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843758#comment-17843758
 ] 

ASF GitHub Bot commented on HADOOP-18958:
-

hiwangzhihui commented on PR #6255:
URL: https://github.com/apache/hadoop/pull/6255#issuecomment-2096162093

   This is the formatted log.
   
![1715005316907](https://github.com/apache/hadoop/assets/146961954/cb0a3070-45ea-4605-99f9-f21db35b71ea)
   
   




> UserGroupInformation debug log improve
> --
>
> Key: HADOOP-18958
> URL: https://issues.apache.org/jira/browse/HADOOP-18958
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0, 3.3.5
>Reporter: wangzhihui
>Priority: Minor
>  Labels: pull-request-available
> Attachments: 20231029-122825-1.jpeg, 20231029-122825.jpeg, 
> 20231030-143525.jpeg, image-2023-10-29-09-47-56-489.png, 
> image-2023-10-30-14-35-11-161.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
>       The UserGroupInformation class uses “new Exception()” to print the call 
> stack of the doAs method. This prints a meaningless Exception message and too 
> many call stack frames, which is not conducive to troubleshooting.
> *example:*
> !20231029-122825.jpeg|width=991,height=548!
>  
> *improved result* :
>  
> !image-2023-10-29-09-47-56-489.png|width=1099,height=156!
> !20231030-143525.jpeg|width=572,height=674!
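For illustration, a sketch of the direction proposed here, assuming the goal is a labelled, depth-limited call-stack log at debug level instead of a raw new Exception() printout (the label text and frame cap are assumptions, not the actual patch):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only: logs a labelled, trimmed call stack at debug level rather
// than constructing a throwaway Exception.
final class DoAsStackLogger {
  private static final Logger LOG =
      LoggerFactory.getLogger(DoAsStackLogger.class);

  static void logDoAsCallStack(String user) {
    if (!LOG.isDebugEnabled()) {
      return; // skip the cost of capturing a stack when debug logging is off
    }
    StackTraceElement[] frames = Thread.currentThread().getStackTrace();
    StringBuilder sb =
        new StringBuilder("PrivilegedAction as:").append(user).append(" from:");
    int depth = Math.min(frames.length, 12); // cap depth to keep the log readable
    for (int i = 2; i < depth; i++) { // frames 0-1 are getStackTrace() and this method
      sb.append(System.lineSeparator()).append("\tat ").append(frames[i]);
    }
    LOG.debug(sb.toString());
  }
}
```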



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18508) support multiple s3a integration test runs on same bucket in parallel

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843731#comment-17843731
 ] 

ASF GitHub Bot commented on HADOOP-18508:
-

steveloughran commented on PR #5081:
URL: https://github.com/apache/hadoop/pull/5081#issuecomment-2095930553

   Checkstyle warnings where I changed the base contract tests:
   ```
   
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFSMainOperationsLocalFileSystem.java:25:import
 org.junit.Assert;:8: Unused import - org.junit.Assert. [UnusedImports]
   
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFSMainOperationsLocalFileSystem.java:27:import
 org.junit.Test;:8: Unused import - org.junit.Test. [UnusedImports]
   
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestFSMainOperationsLocalFileSystem.java:22:import
 java.io.IOException;:8: Unused import - java.io.IOException. [UnusedImports]
   
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestFSMainOperationsLocalFileSystem.java:27:import
 org.apache.hadoop.fs.Path;:8: Unused import - org.apache.hadoop.fs.Path. 
[UnusedImports]
   
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestFSMainOperationsLocalFileSystem.java:29:import
 org.junit.Assert;:8: Unused import - org.junit.Assert. [UnusedImports]
   
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestFSMainOperationsLocalFileSystem.java:31:import
 org.junit.Test;:8: Unused import - org.junit.Test. [UnusedImports]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFSMainOperations.java:24:import
 org.junit.Assert;:8: Unused import - org.junit.Assert. [UnusedImports]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFSMainOperations.java:26:import
 org.junit.Test;:8: Unused import - org.junit.Test. [UnusedImports]
   ```
   




> support multiple s3a integration test runs on same bucket in parallel
> -
>
> Key: HADOOP-18508
> URL: https://issues.apache.org/jira/browse/HADOOP-18508
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.9
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>
> To have (internal, sorry) jenkins test runs work in parallel, they need to 
> share the same bucket, so:
> # each run must have a job id prefix which is passed in to the path used for forks
> # root tests must be disableable so runs don't stamp on each other
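Assuming the knobs keep the names discussed on this work (treat job.id and root.tests.enabled as assumptions here), one of several concurrent runs against a shared bucket would be launched roughly like this:

```
# Sketch: one of several concurrent runs sharing a bucket; the property
# names are assumptions based on the description above.
mvn verify -Dparallel-tests -DtestsThreadCount=8 \
    -Droot.tests.enabled=false -Djob.id=001
```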



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19163) Upgrade protobuf version to 3.24.4

2024-05-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843730#comment-17843730
 ] 

Steve Loughran commented on HADOOP-19163:
-

i'm on vacation this week.

> Upgrade protobuf version to 3.24.4
> --
>
> Key: HADOOP-19163
> URL: https://issues.apache.org/jira/browse/HADOOP-19163
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hadoop-thirdparty
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843694#comment-17843694
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

hadoop-yetus commented on PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#issuecomment-2095588680

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 13 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  7s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  18m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 54s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  39m 46s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  18m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  18m 40s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 45s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m  0s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 56s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 38s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  4s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 290m 29s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6699/59/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6699 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux cc9ea136f2ea 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 20f4f2b3efb1e27b37011f08dfdb0959d9626f10 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci

[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843653#comment-17843653
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

hadoop-yetus commented on PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#issuecomment-2095321708

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  2s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 16s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 52s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6552/15/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 3 new + 9 unchanged - 0 
fixed = 12 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  9s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 129m  3s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6552/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6552 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 24be86ac3909 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ecd151a7fac10629fb93f8f25b3a8567caf172ac |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https

[jira] [Commented] (HADOOP-19137) [ABFS]:Extra getAcl call while calling the very first API of FileSystem

2024-05-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843646#comment-17843646
 ] 

ASF GitHub Bot commented on HADOOP-19137:
-

hadoop-yetus commented on PR #6752:
URL: https://github.com/apache/hadoop/pull/6752#issuecomment-2095271620

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  20m 20s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 46s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 25s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  83m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6752/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6752 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5ea9568d47aa 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 255b62dec2fba13f36beefde961d0059a65b0331 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6752/11/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6752/11/console |
   | versions | git=2.25.1 maven=3.6.3

[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-05-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843640#comment-17843640
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

anujmodi2021 commented on PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#issuecomment-2095234333

   > I'm afraid you are very unlucky in that I have been staring at the ABFS 
auth code for diagnostics purposes today and so these lines of code are now 
very familiar.
   > 
   > 1. getPasswordString/getTrimmedPasswordString() must be used for lookup. 
that way you can keep the secret in a JCEKS file.
   > 2. proposed a way to avoid double wrapping exceptions
   
   Thanks for the pointers here.
   Please let me know if the changes made look good.
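   For context on the getPasswordString point: Hadoop's Configuration.getPassword() resolves a key through any configured credential providers before falling back to the plain configuration, which is what lets the SAS token live in a JCEKS file. A sketch, with an illustrative store path:

```java
// Sketch of credential-provider-backed lookup: the secret can live in a
// JCEKS store instead of plain text. The store path is illustrative.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

final class JceksLookupSketch {
  static String fixedSasToken() throws IOException {
    Configuration conf = new Configuration();
    // Store populated beforehand with:
    //   hadoop credential create fs.azure.sas.fixed.token \
    //       -provider jceks://file/etc/hadoop/abfs.jceks
    conf.set("hadoop.security.credential.provider.path",
        "jceks://file/etc/hadoop/abfs.jceks");
    char[] secret = conf.getPassword("fs.azure.sas.fixed.token");
    return secret == null ? null : new String(secret).trim();
  }
}
```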
   




> [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider 
> Implementation
> 
>
> Key: HADOOP-18516
> URL: https://issues.apache.org/jira/browse/HADOOP-18516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
>
> This PR introduces a new configuration for Fixed SAS Tokens: 
> *"fs.azure.sas.fixed.token"*
> Using this new configuration, users can configure a fixed SAS Token in the 
> account settings file itself. Ideally, this should be used with SAS Tokens 
> that are scoped at a container or account level (Service or Account SAS), 
> which can be considered a constant for one account or container over 
> multiple operations.
> The other method of using a SAS Token remains valid as well, where a user 
> provides a custom implementation of the SASTokenProvider interface, through 
> which a SAS Token is obtained.
> When an Account SAS Token is configured as the fixed SAS Token and is used, 
> it is ensured that operations stay within the scope of the SAS Token.
> The code checks whether the fixed token and the token provider class 
> implementation are configured. If both are set, preference is 
> given to the custom SASTokenProvider implementation. It must be noted that if 
> such an implementation provides a SAS Token which has a lower scope than 
> Account SAS, some filesystem and service level operations might be out of 
> scope and may not succeed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-05-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843637#comment-17843637
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

anujmodi2021 commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1590561513


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -980,33 +981,59 @@ public AccessTokenProvider getTokenProvider() throws 
TokenAccessProviderExceptio
 }
   }
 
+  /**
+   * Returns the SASTokenProvider implementation to be used to generate SAS 
token.
+   * Users can choose between a custom implementation of {@link 
SASTokenProvider}
+   * or an in house implementation {@link FixedSASTokenProvider}.
+   * For Custom implementation "fs.azure.sas.token.provider.type" needs to be 
provided.
+   * For Fixed SAS Token use "fs.azure.sas.fixed.token" needs to be 
provided.
+   * In case both are provided, Preference will be given to Custom 
implementation.
+   * Avoid using a custom tokenProvider implementation just to read the 
configured
+   * fixed token, as this could create confusion. Also,implementing the 
SASTokenProvider
+   * requires relying on the raw configurations. It is more stable to depend on
+   * the AbfsConfiguration with which a filesystem is initialized, and 
eliminate
+   * chances of dynamic modifications and spurious situations.
+   * @return sasTokenProvider object based on configurations provided
+   * @throws AzureBlobFileSystemException
+   */
   public SASTokenProvider getSASTokenProvider() throws 
AzureBlobFileSystemException {
 AuthType authType = getEnum(FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME, 
AuthType.SharedKey);
 if (authType != AuthType.SAS) {
   throw new SASTokenProviderException(String.format(
-"Invalid auth type: %s is being used, expecting SAS", authType));
+  "Invalid auth type: %s is being used, expecting SAS.", authType));
 }
 
 try {
-  String configKey = FS_AZURE_SAS_TOKEN_PROVIDER_TYPE;
-  Class sasTokenProviderClass =
-  getTokenProviderClass(authType, configKey, null,
-  SASTokenProvider.class);
-
-  Preconditions.checkArgument(sasTokenProviderClass != null,
-  String.format("The configuration value for \"%s\" is invalid.", 
configKey));
-
-  SASTokenProvider sasTokenProvider = ReflectionUtils
-  .newInstance(sasTokenProviderClass, rawConfig);
-  Preconditions.checkArgument(sasTokenProvider != null,
-  String.format("Failed to initialize %s", sasTokenProviderClass));
-
-  LOG.trace("Initializing {}", sasTokenProviderClass.getName());
-  sasTokenProvider.initialize(rawConfig, accountName);
-  LOG.trace("{} init complete", sasTokenProviderClass.getName());
-  return sasTokenProvider;
+  Class customSasTokenProviderImplementation =
+  getTokenProviderClass(authType, FS_AZURE_SAS_TOKEN_PROVIDER_TYPE,
+  null, SASTokenProvider.class);
+  String configuredFixedToken = this.getString(FS_AZURE_SAS_FIXED_TOKEN, 
null);
+
+  Preconditions.checkArgument(
+  customSasTokenProviderImplementation != null || configuredFixedToken 
!= null,
+  "At least one of the \"%s\" and \"%s\" must be set.",
+  FS_AZURE_SAS_TOKEN_PROVIDER_TYPE, FS_AZURE_SAS_FIXED_TOKEN);
+
+  // Prefer Custom SASTokenProvider Implementation if configured.
+  if (customSasTokenProviderImplementation != null) {
+LOG.trace("Using Custom SASTokenProvider implementation because it is 
given precedence when it is set.");
+SASTokenProvider sasTokenProvider = ReflectionUtils.newInstance(
+customSasTokenProviderImplementation, rawConfig);
+Preconditions.checkArgument(sasTokenProvider != null,

Review Comment:
   Yes. 
   I realized we do have a dedicated exception type for SAS Token related 
issues. 
   Using SASTokenProviderException instead.





> [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider 
> Implementation
> ----
>
> Key: HADOOP-18516
> URL: https://issues.apache.org/jira/browse/HADOOP-18516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
>
> This PR introduces a new configuration for Fixed SAS Tokens: 
> *"fs.azure.sas.fixed.token"*
> Using this new configuration, users can configure a fixed SAS T

[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-05-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843638#comment-17843638
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

anujmodi2021 commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1590561673


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -980,33 +981,59 @@ public AccessTokenProvider getTokenProvider() throws 
TokenAccessProviderExceptio
 }
   }
 
+  /**
+   * Returns the SASTokenProvider implementation to be used to generate SAS 
token.
+   * Users can choose between a custom implementation of {@link 
SASTokenProvider}
+   * or an in house implementation {@link FixedSASTokenProvider}.
+   * For Custom implementation "fs.azure.sas.token.provider.type" needs to be 
provided.
+   * For Fixed SAS Token use "fs.azure.sas.fixed.token" needs to be 
provided.
+   * In case both are provided, Preference will be given to Custom 
implementation.
+   * Avoid using a custom tokenProvider implementation just to read the 
configured
+   * fixed token, as this could create confusion. Also,implementing the 
SASTokenProvider
+   * requires relying on the raw configurations. It is more stable to depend on
+   * the AbfsConfiguration with which a filesystem is initialized, and 
eliminate
+   * chances of dynamic modifications and spurious situations.
+   * @return sasTokenProvider object based on configurations provided
+   * @throws AzureBlobFileSystemException
+   */
   public SASTokenProvider getSASTokenProvider() throws 
AzureBlobFileSystemException {
 AuthType authType = getEnum(FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME, 
AuthType.SharedKey);
 if (authType != AuthType.SAS) {
   throw new SASTokenProviderException(String.format(
-"Invalid auth type: %s is being used, expecting SAS", authType));
+  "Invalid auth type: %s is being used, expecting SAS.", authType));
 }
 
 try {
-  String configKey = FS_AZURE_SAS_TOKEN_PROVIDER_TYPE;
-  Class sasTokenProviderClass =
-  getTokenProviderClass(authType, configKey, null,
-  SASTokenProvider.class);
-
-  Preconditions.checkArgument(sasTokenProviderClass != null,
-  String.format("The configuration value for \"%s\" is invalid.", 
configKey));
-
-  SASTokenProvider sasTokenProvider = ReflectionUtils
-  .newInstance(sasTokenProviderClass, rawConfig);
-  Preconditions.checkArgument(sasTokenProvider != null,
-  String.format("Failed to initialize %s", sasTokenProviderClass));
-
-  LOG.trace("Initializing {}", sasTokenProviderClass.getName());
-  sasTokenProvider.initialize(rawConfig, accountName);
-  LOG.trace("{} init complete", sasTokenProviderClass.getName());
-  return sasTokenProvider;
+  Class customSasTokenProviderImplementation =
+  getTokenProviderClass(authType, FS_AZURE_SAS_TOKEN_PROVIDER_TYPE,
+  null, SASTokenProvider.class);
+  String configuredFixedToken = this.getString(FS_AZURE_SAS_FIXED_TOKEN, 
null);
+
+  Preconditions.checkArgument(
+  customSasTokenProviderImplementation != null || configuredFixedToken 
!= null,
+  "At least one of the \"%s\" and \"%s\" must be set.",
+  FS_AZURE_SAS_TOKEN_PROVIDER_TYPE, FS_AZURE_SAS_FIXED_TOKEN);
+
+  // Prefer Custom SASTokenProvider Implementation if configured.
+  if (customSasTokenProviderImplementation != null) {
+LOG.trace("Using Custom SASTokenProvider implementation because it is 
given precedence when it is set.");
+SASTokenProvider sasTokenProvider = ReflectionUtils.newInstance(
+customSasTokenProviderImplementation, rawConfig);
+Preconditions.checkArgument(sasTokenProvider != null,
+"Failed to initialize %s", customSasTokenProviderImplementation);
+
+LOG.trace("Initializing {}", 
customSasTokenProviderImplementation.getName());
+sasTokenProvider.initialize(rawConfig, accountName);
+LOG.trace("{} init complete", 
customSasTokenProviderImplementation.getName());
+return sasTokenProvider;
+  } else {
+LOG.trace("Using FixedSASTokenProvider implementation");
+FixedSASTokenProvider fixedSASTokenProvider = new 
FixedSASTokenProvider(configuredFixedToken);
+return fixedSASTokenProvider;
+  }
 } catch (Exception e) {

Review Comment:
   Taken but with SasTokenProviderException



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/FixedSASTokenProvider.java:
##
@@ -0,0 +1,63 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license 

[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-05-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843636#comment-17843636
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

anujmodi2021 commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1590561164


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -980,33 +981,59 @@ public AccessTokenProvider getTokenProvider() throws 
TokenAccessProviderExceptio
 }
   }
 
+  /**
+   * Returns the SASTokenProvider implementation to be used to generate SAS 
token.
+   * Users can choose between a custom implementation of {@link 
SASTokenProvider}
+   * or an in house implementation {@link FixedSASTokenProvider}.
+   * For Custom implementation "fs.azure.sas.token.provider.type" needs to be 
provided.
+   * For Fixed SAS Token use "fs.azure.sas.fixed.token" needs to be 
provided.
+   * In case both are provided, Preference will be given to Custom 
implementation.
+   * Avoid using a custom tokenProvider implementation just to read the 
configured
+   * fixed token, as this could create confusion. Also,implementing the 
SASTokenProvider
+   * requires relying on the raw configurations. It is more stable to depend on
+   * the AbfsConfiguration with which a filesystem is initialized, and 
eliminate
+   * chances of dynamic modifications and spurious situations.
+   * @return sasTokenProvider object based on configurations provided
+   * @throws AzureBlobFileSystemException
+   */
   public SASTokenProvider getSASTokenProvider() throws 
AzureBlobFileSystemException {
 AuthType authType = getEnum(FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME, 
AuthType.SharedKey);
 if (authType != AuthType.SAS) {
   throw new SASTokenProviderException(String.format(
-"Invalid auth type: %s is being used, expecting SAS", authType));
+  "Invalid auth type: %s is being used, expecting SAS.", authType));
 }
 
 try {
-  String configKey = FS_AZURE_SAS_TOKEN_PROVIDER_TYPE;
-  Class sasTokenProviderClass =
-  getTokenProviderClass(authType, configKey, null,
-  SASTokenProvider.class);
-
-  Preconditions.checkArgument(sasTokenProviderClass != null,
-  String.format("The configuration value for \"%s\" is invalid.", 
configKey));
-
-  SASTokenProvider sasTokenProvider = ReflectionUtils
-  .newInstance(sasTokenProviderClass, rawConfig);
-  Preconditions.checkArgument(sasTokenProvider != null,
-  String.format("Failed to initialize %s", sasTokenProviderClass));
-
-  LOG.trace("Initializing {}", sasTokenProviderClass.getName());
-  sasTokenProvider.initialize(rawConfig, accountName);
-  LOG.trace("{} init complete", sasTokenProviderClass.getName());
-  return sasTokenProvider;
+  Class customSasTokenProviderImplementation =
+  getTokenProviderClass(authType, FS_AZURE_SAS_TOKEN_PROVIDER_TYPE,
+  null, SASTokenProvider.class);
+  String configuredFixedToken = this.getString(FS_AZURE_SAS_FIXED_TOKEN, 
null);

Review Comment:
   makes sense.
   Taken





> [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider 
> Implementation
> ----
>
> Key: HADOOP-18516
> URL: https://issues.apache.org/jira/browse/HADOOP-18516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
>
> This PR introduces a new configuration for Fixed SAS Tokens: 
> *"fs.azure.sas.fixed.token"*
> Using this new configuration, users can configure a fixed SAS Token in the 
> account settings file itself. Ideally, this should be used with SAS Tokens 
> that are scoped at a container or account level (Service or Account SAS), 
> which can be considered a constant for one account or container over 
> multiple operations.
> The other method of using a SAS Token remains valid as well, where a user 
> provides a custom implementation of the SASTokenProvider interface, through 
> which a SAS Token is obtained.
> When an Account SAS Token is configured as the fixed SAS Token and is used, 
> it is ensured that operations stay within the scope of the SAS Token.
> The code checks whether the fixed token and the token provider class 
> implementation are configured. If both are set, preference is 
> given to the custom SASTokenProvider implementation. It m

[jira] [Commented] (HADOOP-18610) ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS

2024-05-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843634#comment-17843634
 ] 

ASF GitHub Bot commented on HADOOP-18610:
-

anujmodi2021 commented on PR #6787:
URL: https://github.com/apache/hadoop/pull/6787#issuecomment-2095232451

   Hi @steveloughran @mukund-thakur 
   Kindly requesting you to review this PR.




> ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS
> -
>
> Key: HADOOP-18610
> URL: https://issues.apache.org/jira/browse/HADOOP-18610
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.3.4
>Reporter: Haifeng Chen
>Assignee: Anuj Modi
>Priority: Critical
>  Labels: pull-request-available
> Attachments: HADOOP-18610-preview.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In Jan 2023, Microsoft Azure AKS replaced its original pod-managed identity 
> with [Azure Active Directory (Azure AD) workload 
> identities|https://learn.microsoft.com/en-us/azure/active-directory/develop/workload-identities-overview]
>  (preview), which integrate with Kubernetes native capabilities to 
> federate with any external identity provider. This approach is simpler to 
> use and deploy.
> Refer to 
> [https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview|https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview.]
>  and [https://azure.github.io/azure-workload-identity/docs/introduction.html] 
> for more details.
> The basic use scenario is to access Azure cloud resources (such as cloud 
> storage) from a Kubernetes (such as AKS) workload using an Azure managed 
> identity federated with a Kubernetes service account. The credential 
> environment variables projected into the pod by Azure AD workload identity 
> are like the following:
> AZURE_AUTHORITY_HOST: (Injected by the webhook, 
> [https://login.microsoftonline.com/])
> AZURE_CLIENT_ID: (Injected by the webhook)
> AZURE_TENANT_ID: (Injected by the webhook)
> AZURE_FEDERATED_TOKEN_FILE: (Injected by the webhook, 
> /var/run/secrets/azure/tokens/azure-identity-token)
> The token in the file pointed to by AZURE_FEDERATED_TOKEN_FILE is a JWT (JSON 
> Web Token) client assertion which we can send to 
> AZURE_AUTHORITY_HOST (URL: AZURE_AUTHORITY_HOST + tenantId + 
> "/oauth2/v2.0/token") to request an AD token which can be used to directly 
> access the Azure cloud resources.
> This approach is very common and similar among cloud providers such as AWS 
> and GCP. Hadoop AWS integration has WebIdentityTokenCredentialProvider to 
> handle the same case.
> The existing MsiTokenProvider can only handle the managed identity associated 
> with an Azure VM instance. We need to implement a WorkloadIdentityTokenProvider 
> which handles the Azure Workload Identity case. For this, we need to add one 
> method (getTokenUsingJWTAssertion) to AzureADAuthenticator which will be used 
> by WorkloadIdentityTokenProvider.
>  
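For reference, the exchange getTokenUsingJWTAssertion has to perform is a standard OAuth2 client-credentials grant carrying a JWT bearer client assertion (RFC 7523). A self-contained sketch built only from the environment variables above; the storage scope is an assumption and error handling is omitted:

```java
// Sketch of the standard OAuth2 client-assertion exchange (RFC 7523).
// This is not the AzureADAuthenticator implementation; the endpoint,
// client id and assertion come from the injected environment variables,
// and the storage scope is an assumption.
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

final class ClientAssertionSketch {
  static HttpURLConnection requestAdToken() throws IOException {
    String endpoint = System.getenv("AZURE_AUTHORITY_HOST")
        + System.getenv("AZURE_TENANT_ID") + "/oauth2/v2.0/token";
    String assertion = new String(
        Files.readAllBytes(Paths.get(System.getenv("AZURE_FEDERATED_TOKEN_FILE"))),
        StandardCharsets.UTF_8).trim();
    String body = "grant_type=client_credentials"
        + "&client_id=" + enc(System.getenv("AZURE_CLIENT_ID"))
        + "&scope=" + enc("https://storage.azure.com/.default")
        + "&client_assertion_type="
        + enc("urn:ietf:params:oauth:client-assertion-type:jwt-bearer")
        + "&client_assertion=" + enc(assertion);
    HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
    conn.setDoOutput(true);
    conn.getOutputStream().write(body.getBytes(StandardCharsets.UTF_8));
    return conn; // caller reads the JSON response containing access_token
  }

  private static String enc(String s) throws IOException {
    return URLEncoder.encode(s, StandardCharsets.UTF_8.name());
  }
}
```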



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19163) Upgrade protobuf version to 3.24.4

2024-05-05 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843630#comment-17843630
 ] 

Bilwa S T commented on HADOOP-19163:


cc [~ayushsaxena] [~ste...@apache.org]

> Upgrade protobuf version to 3.24.4
> --
>
> Key: HADOOP-19163
> URL: https://issues.apache.org/jira/browse/HADOOP-19163
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hadoop-thirdparty
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-05-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843626#comment-17843626
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

saxenapranav commented on PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#issuecomment-2095196728

   Hi @steveloughran , @mukund-thakur , @mehakmeet , requesting you to kindly 
review please. Thanks!




> [ABFS]: No GetPathStatus call for opening AbfsInputStream
> -
>
> Key: HADOOP-19139
> URL: https://issues.apache.org/jira/browse/HADOOP-19139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>
> The read API itself returns the contentLength and eTag of the path. That 
> information can be used by future calls on the inputStream, so fetching the 
> eTag in advance is of little importance.
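Related context rather than this patch itself: on the application side, the standard openFile() builder in hadoop-common already lets a caller that holds a FileStatus hand it to the store so the connector can skip its own GetPathStatus. A sketch:

```java
// Sketch using the standard openFile() builder: a caller that already holds
// a FileStatus (e.g. from a listing) passes it in so the connector need not
// issue its own HEAD/GetPathStatus. Illustrative only.
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;

final class OpenWithStatusSketch {
  static FSDataInputStream open(FileSystem fs, FileStatus st) throws Exception {
    return fs.openFile(st.getPath())
        .withFileStatus(st) // length/etag come from the supplied status
        .build()            // CompletableFuture<FSDataInputStream>
        .get();
  }
}
```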



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-05-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843621#comment-17843621
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

anujmodi2021 commented on PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#issuecomment-2095179355

   > +1 pending that change to abfs.md that @anmolanmol1234 spotted
   
   Which change are you referring to?
   I can see a comment regarding the typo which anmol spotted, and I have 
already fixed that.
   Am I missing any other comment?
   Thanks




> [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider 
> Implementation
> 
>
> Key: HADOOP-18516
> URL: https://issues.apache.org/jira/browse/HADOOP-18516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
>
> This PR introduces a new configuration for Fixed SAS Tokens: 
> *"fs.azure.sas.fixed.token"*
> Using this new configuration, users can configure a fixed SAS Token in the 
> account settings files itself. Ideally, this should be used with SAS Tokens 
> that are scoped at a container or account level (Service or Account SAS), 
> which can be considered to be a constant for one account or container, over 
> multiple operations.
> The other method of using a SAS Token remains valid as well, where a user 
> provides a custom implementation of the SASTokenProvider interface, using 
> which a SAS Token is obtained.
> When an Account SAS Token is configured as the fixed SAS Token, and it is 
> used, it is ensured that operations are within the scope of the SAS Token.
> The code checks whether the fixed token and the token provider class 
> implementation are configured. If both are set, preference is 
> given to the custom SASTokenProvider implementation. It must be noted that if 
> such an implementation provides a SAS Token which has a lower scope than 
> Account SAS, some filesystem and service level operations might be out of 
> scope and may not succeed.
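
A minimal configuration sketch, assuming only the property name quoted above 
("fs.azure.sas.fixed.token") together with the existing SAS auth type; the 
class name and token value are illustrative placeholders:

{code}
// Hedged usage sketch: property names come from the description above; the
// token value is a placeholder, never a real credential in source control.
import org.apache.hadoop.conf.Configuration;

public class FixedSasTokenConfigExample {
  public static Configuration withFixedSas(Configuration conf) {
    conf.set("fs.azure.account.auth.type", "SAS");
    // A Service or Account SAS, scoped to the container or account:
    conf.set("fs.azure.sas.fixed.token", "<service-or-account-sas>");
    return conf;
  }
}
{code}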



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19073) WASB: Fix connection leak in FolderRenamePending

2024-05-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843608#comment-17843608
 ] 

ASF GitHub Bot commented on HADOOP-19073:
-

xuzifu666 commented on PR #6534:
URL: https://github.com/apache/hadoop/pull/6534#issuecomment-2095114022

   > let me try and test this myself.
   
   OK, thanks. Does it work correctly now? @steveloughran 




> WASB: Fix connection leak in FolderRenamePending
> 
>
> Key: HADOOP-19073
> URL: https://issues.apache.org/jira/browse/HADOOP-19073
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
>
> Fix a connection leak in FolderRenamePending when getting bytes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19163) Upgrade protobuf version to 3.24.4

2024-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19163:

Labels: pull-request-available  (was: )

> Upgrade protobuf version to 3.24.4
> --
>
> Key: HADOOP-19163
> URL: https://issues.apache.org/jira/browse/HADOOP-19163
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hadoop-thirdparty
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19164) Hadoop CLI MiniCluster is broken

2024-05-04 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843489#comment-17843489
 ] 

Steve Loughran commented on HADOOP-19164:
-

* be nice to cut out the need for mockito
* and we should be able to add a test for this in 
https://github.com/apache/hadoop-release-support ... something which just 
tries to issue the command in the unzipped distro dir

and yes, "NOTE: You will need protoc 2.5.0 installed." is out. Maybe we should 
grep the docs for "protobuf".

> Hadoop CLI MiniCluster is broken
> 
>
> Key: HADOOP-19164
> URL: https://issues.apache.org/jira/browse/HADOOP-19164
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Priority: Major
>
> Documentation is also broken & it doesn't work either
> (https://apache.github.io/hadoop/hadoop-project-dist/hadoop-common/CLIMiniCluster.html)
> *Fails with:*
> {noformat}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.isNameNodeUp(MiniDFSCluster.java:2666)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.isClusterUp(MiniDFSCluster.java:2680)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1510)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:989)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:588)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:530)
>   at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:160)
>   at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:132)
>   at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:320)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
>   ... 9 more{noformat}
> {*}Command executed:{*}
> {noformat}
> bin/mapred minicluster -format{noformat}
> *Documentation Issues:*
> {noformat}
> bin/mapred minicluster -rmport RM_PORT -jhsport JHS_PORT{noformat}
> Without the -format option it doesn't work the first time, reporting that the 
> NameNode isn't formatted, so this should be corrected.
> {noformat}
> 2024-05-04 00:35:52,933 WARN namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: NameNode is not formatted.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:253)
> {noformat}
> This isn't required either:
> {noformat}
> NOTE: You will need protoc 2.5.0 installed.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19164) Hadoop CLI MiniCluster is broken

2024-05-03 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843362#comment-17843362
 ] 

Ayush Saxena commented on HADOOP-19164:
---

Looks pretty much the same as HDFS-16050; we need to add the mockito dependency, 
I believe.

> Hadoop CLI MiniCluster is broken
> 
>
> Key: HADOOP-19164
> URL: https://issues.apache.org/jira/browse/HADOOP-19164
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Priority: Major
>
> Documentation is also broken & it doesn't work either
> (https://apache.github.io/hadoop/hadoop-project-dist/hadoop-common/CLIMiniCluster.html)
> *Fails with:*
> {noformat}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.isNameNodeUp(MiniDFSCluster.java:2666)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.isClusterUp(MiniDFSCluster.java:2680)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1510)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:989)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:588)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:530)
>   at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:160)
>   at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:132)
>   at 
> org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:320)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
>   ... 9 more{noformat}
> {*}Command executed:{*}
> {noformat}
> bin/mapred minicluster -format{noformat}
> *Documentation Issues:*
> {noformat}
> bin/mapred minicluster -rmport RM_PORT -jhsport JHS_PORT{noformat}
> Without the -format option it doesn't work the first time, reporting that the 
> NameNode isn't formatted, so this should be corrected.
> {noformat}
> 2024-05-04 00:35:52,933 WARN namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: NameNode is not formatted.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:253)
> {noformat}
> This isn't required either:
> {noformat}
> NOTE: You will need protoc 2.5.0 installed.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19164) Hadoop CLI MiniCluster is broken

2024-05-03 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-19164:
--
Description: 
Documentation is also broken & it doesn't work either

(https://apache.github.io/hadoop/hadoop-project-dist/hadoop-common/CLIMiniCluster.html)

*Fails with:*
{noformat}
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/mockito/stubbing/Answer
at 
org.apache.hadoop.hdfs.MiniDFSCluster.isNameNodeUp(MiniDFSCluster.java:2666)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.isClusterUp(MiniDFSCluster.java:2680)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1510)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:989)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:588)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:530)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:160)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:132)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:320)
Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
... 9 more{noformat}
{*}Command executed:{*}
{noformat}
bin/mapred minicluster -format{noformat}
*Documentation Issues:*
{noformat}
bin/mapred minicluster -rmport RM_PORT -jhsport JHS_PORT{noformat}

Without the -format option it doesn't work the first time, reporting that the 
NameNode isn't formatted, so this should be corrected.


{noformat}
2024-05-04 00:35:52,933 WARN namenode.FSNamesystem: Encountered exception 
loading fsimage
java.io.IOException: NameNode is not formatted.
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:253)
{noformat}

This isn't required either:

{noformat}
NOTE: You will need protoc 2.5.0 installed.
{noformat}


  was:
Documentation is also broken & it doesn't work either

(https://apache.github.io/hadoop/hadoop-project-dist/hadoop-common/CLIMiniCluster.html)

*Fails with:*
{noformat}
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/mockito/stubbing/Answer
at 
org.apache.hadoop.hdfs.MiniDFSCluster.isNameNodeUp(MiniDFSCluster.java:2666)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.isClusterUp(MiniDFSCluster.java:2680)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1510)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:989)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:588)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:530)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:160)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:132)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:320)
Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
... 9 more{noformat}
{*}Command executed:{*}
{noformat}
bin/mapred minicluster -format{noformat}
*Documentation Issues:*
{noformat}
bin/mapred minicluster -rmport RM_PORT -jhsport JHS_PORT{noformat}

Without the -format option it doesn't work the first time, reporting that the 
NameNode isn't formatted, so this should be corrected.


{noformat}
2024-05-04 00:35:52,933 WARN namenode.FSNamesystem: Encountered exception 
loading fsimage
java.io.IOException: NameNode is not formatted.
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:253)
{noformat}


> Hadoop CLI MiniCluster is broken
> 
>
> Key: HADOOP-19164
> URL: https://issues.apache.org/jira/browse/HADOOP-19164
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Priority: Major
>
> Documentation is also broken & it doesn't work either
> (https://apache.github.io/hadoop/hadoop-project-dist/hadoop-common/CLIMiniCluster.html)
> *Fails with:*
> {noformat}
> Exception in

[jira] [Created] (HADOOP-19164) Hadoop CLI MiniCluster is broken

2024-05-03 Thread Ayush Saxena (Jira)
Ayush Saxena created HADOOP-19164:
-

 Summary: Hadoop CLI MiniCluster is broken
 Key: HADOOP-19164
 URL: https://issues.apache.org/jira/browse/HADOOP-19164
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ayush Saxena


Documentation is also broken & it doesn't work either

(https://apache.github.io/hadoop/hadoop-project-dist/hadoop-common/CLIMiniCluster.html)

*Fails with:*
{noformat}
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/mockito/stubbing/Answer
at 
org.apache.hadoop.hdfs.MiniDFSCluster.isNameNodeUp(MiniDFSCluster.java:2666)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.isClusterUp(MiniDFSCluster.java:2680)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1510)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:989)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:588)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:530)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.start(MiniHadoopClusterManager.java:160)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.run(MiniHadoopClusterManager.java:132)
at 
org.apache.hadoop.mapreduce.MiniHadoopClusterManager.main(MiniHadoopClusterManager.java:320)
Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
... 9 more{noformat}
{*}Command executed:{*}
{noformat}
bin/mapred minicluster -format{noformat}
*Documentation Issues:*
{noformat}
bin/mapred minicluster -rmport RM_PORT -jhsport JHS_PORT{noformat}

Without the -format option it doesn't work the first time, reporting that the 
NameNode isn't formatted, so this should be corrected.


{noformat}
2024-05-04 00:35:52,933 WARN namenode.FSNamesystem: Encountered exception 
loading fsimage
java.io.IOException: NameNode is not formatted.
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:253)
{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19107) Drop support for HBase v1 & upgrade HBase v2

2024-05-03 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843349#comment-17843349
 ] 

Ayush Saxena commented on HADOOP-19107:
---

{quote}add in release notes?
{quote}
Done.
{quote}backport to 3.4.1?
{quote}
It isn't just a version upgrade, but removal of HBase v1 & changing the default 
HBase jar to HBase v2; not sure if compat allows that.
{quote}does this mean we can strip out parquet 2.5 from our redistributed 
artifacts?
{quote}
I think you mean Protobuf. HBase still defines 2.5.0 in their pom as a 
compile-time dependency, but doesn't use it internally. We can give it a shot 
by excluding it explicitly; they might have kept it for some of their 
downstream consumers (maybe), and I am pretty sure not just for the compat 
sake of a transitive dependency. It's a little risky if it creates runtime 
issues, but I can create a ticket and experiment a bit if you say so.
[https://github.com/apache/hbase/blob/rel/2.5.8/pom.xml#L603]

> Drop support for HBase v1 & upgrade HBase v2
> 
>
> Key: HADOOP-19107
> URL: https://issues.apache.org/jira/browse/HADOOP-19107
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Drop support for HBase v1 and make building HBase v2 the default.
> Dev List:
> [https://lists.apache.org/thread/vb2gh5ljwncbrmqnk0oflb8ftdz64hhs]
> https://lists.apache.org/thread/o88hnm7q8n3b4bng81q14vsj3fbhfx5w



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19107) Drop support for HBase v1 & upgrade HBase v2

2024-05-03 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-19107:
--
   Flags: Important
Release Note: Drops support for the HBase 1.x release line. The supported 
HBase version is 2.5.8.

> Drop support for HBase v1 & upgrade HBase v2
> 
>
> Key: HADOOP-19107
> URL: https://issues.apache.org/jira/browse/HADOOP-19107
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Drop support for HBase v1 and make building HBase v2 the default.
> Dev List:
> [https://lists.apache.org/thread/vb2gh5ljwncbrmqnk0oflb8ftdz64hhs]
> https://lists.apache.org/thread/o88hnm7q8n3b4bng81q14vsj3fbhfx5w



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19160) hadoop-auth should not depend on kerb-simplekdc

2024-05-03 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HADOOP-19160:
--
Fix Version/s: 3.4.1
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> hadoop-auth should not depend on kerb-simplekdc
> ---
>
> Key: HADOOP-19160
> URL: https://issues.apache.org/jira/browse/HADOOP-19160
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: auth
>Affects Versions: 3.4.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0, 3.4.1
>
>
> HADOOP-16179 attempted to remove dependency on {{kerb-simplekdc}} from 
> {{hadoop-common}}.  However, {{hadoop-auth}} still has a compile-scope 
> dependency on the same, and {{hadoop-common}} proper depends on 
> {{hadoop-auth}}.  So {{kerb-simplekdc}} is still a transitive dependency of 
> {{hadoop-common}}.
> {code}
> [INFO] --- maven-dependency-plugin:3.0.2:tree (default-cli) @ hadoop-common 
> ---
> [INFO] org.apache.hadoop:hadoop-common:jar:3.5.0-SNAPSHOT
> ...
> [INFO] +- org.apache.hadoop:hadoop-auth:jar:3.5.0-SNAPSHOT:compile
> ...
> [INFO] |  \- org.apache.kerby:kerb-simplekdc:jar:2.0.3:compile
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-05-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843313#comment-17843313
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

steveloughran commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1589326534


##
hadoop-tools/hadoop-azure/src/site/markdown/abfs.md:
##
@@ -609,21 +610,119 @@ In case delegation token is enabled, and the config 
`fs.azure.delegation.token
 
 ### Shared Access Signature (SAS) Token Provider
 
-A Shared Access Signature (SAS) token provider supplies the ABFS connector 
with SAS
-tokens by implementing the SASTokenProvider interface.
-
-```xml
-
-  fs.azure.account.auth.type
-  SAS
-
-
-  fs.azure.sas.token.provider.type
-  
{fully-qualified-class-name-for-implementation-of-SASTokenProvider-interface}
-
-```
-
-The declared class must implement 
`org.apache.hadoop.fs.azurebfs.extensions.SASTokenProvider`.
+A shared access signature (SAS) provides secure delegated access to resources 
in
+your storage account. With a SAS, you have granular control over how a client 
can access your data.
+To know more about how SAS Authentication works refer to
+[Grant limited access to Azure Storage resources using shared access 
signatures 
(SAS)](https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview)
+
+There are three types of SAS supported by Azure Storage:
+- [User Delegation 
SAS](https://learn.microsoft.com/en-us/rest/api/storageservices/create-user-delegation-sas):
 Recommended for use with the ABFS Driver with HNS-enabled ADLS Gen2 accounts. It 
is an identity-based SAS that works at the blob/directory level.

Review Comment:
   this is the last change before we merge...





> [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider 
> Implementation
> 
>
> Key: HADOOP-18516
> URL: https://issues.apache.org/jira/browse/HADOOP-18516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
>
> This PR introduces a new configuration for Fixed SAS Tokens: 
> *"fs.azure.sas.fixed.token"*
> Using this new configuration, users can configure a fixed SAS Token in the 
> account settings files itself. Ideally, this should be used with SAS Tokens 
> that are scoped at a container or account level (Service or Account SAS), 
> which can be considered to be a constant for one account or container, over 
> multiple operations.
> The other method of using a SAS Token remains valid as well, where a user 
> provides a custom implementation of the SASTokenProvider interface, using 
> which a SAS Token is obtained.
> When an Account SAS Token is configured as the fixed SAS Token, and it is 
> used, it is ensured that operations are within the scope of the SAS Token.
> The code checks whether the fixed token and the token provider class 
> implementation are configured. If both are set, preference is 
> given to the custom SASTokenProvider implementation. It must be noted that if 
> such an implementation provides a SAS Token which has a lower scope than 
> Account SAS, some filesystem and service level operations might be out of 
> scope and may not succeed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-05-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843312#comment-17843312
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

steveloughran commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1589603358


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -980,33 +981,59 @@ public AccessTokenProvider getTokenProvider() throws 
TokenAccessProviderExceptio
 }
   }
 
+  /**
+   * Returns the SASTokenProvider implementation to be used to generate SAS 
token.
+   * Users can choose between a custom implementation of {@link 
SASTokenProvider}
+   * or an in house implementation {@link FixedSASTokenProvider}.
+   * For a custom implementation, "fs.azure.sas.token.provider.type" needs to 
be provided.
+   * For a fixed SAS token, "fs.azure.sas.fixed.token" needs to be provided.
+   * In case both are provided, preference will be given to the custom 
implementation.
+   * Avoid using a custom tokenProvider implementation just to read the 
configured
+   * fixed token, as this could create confusion. Also, implementing the 
SASTokenProvider
+   * requires relying on the raw configurations. It is more stable to depend on
+   * the AbfsConfiguration with which a filesystem is initialized, and 
eliminate
+   * chances of dynamic modifications and spurious situations.
+   * @return sasTokenProvider object based on configurations provided
+   * @throws AzureBlobFileSystemException
+   */
   public SASTokenProvider getSASTokenProvider() throws 
AzureBlobFileSystemException {
 AuthType authType = getEnum(FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME, 
AuthType.SharedKey);
 if (authType != AuthType.SAS) {
   throw new SASTokenProviderException(String.format(
-"Invalid auth type: %s is being used, expecting SAS", authType));
+  "Invalid auth type: %s is being used, expecting SAS.", authType));
 }
 
 try {
-  String configKey = FS_AZURE_SAS_TOKEN_PROVIDER_TYPE;
-  Class sasTokenProviderClass =
-  getTokenProviderClass(authType, configKey, null,
-  SASTokenProvider.class);
-
-  Preconditions.checkArgument(sasTokenProviderClass != null,
-  String.format("The configuration value for \"%s\" is invalid.", 
configKey));
-
-  SASTokenProvider sasTokenProvider = ReflectionUtils
-  .newInstance(sasTokenProviderClass, rawConfig);
-  Preconditions.checkArgument(sasTokenProvider != null,
-  String.format("Failed to initialize %s", sasTokenProviderClass));
-
-  LOG.trace("Initializing {}", sasTokenProviderClass.getName());
-  sasTokenProvider.initialize(rawConfig, accountName);
-  LOG.trace("{} init complete", sasTokenProviderClass.getName());
-  return sasTokenProvider;
+  Class customSasTokenProviderImplementation =
+  getTokenProviderClass(authType, FS_AZURE_SAS_TOKEN_PROVIDER_TYPE,
+  null, SASTokenProvider.class);
+  String configuredFixedToken = this.getString(FS_AZURE_SAS_FIXED_TOKEN, 
null);

Review Comment:
   use getTrimmedPasswordString() so JCEKS can be used as a store for this
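
   A hedged sketch of that suggestion, assuming AbfsConfiguration's existing 
getTrimmedPasswordString(key, default) helper; this is a fragment of the 
method under review, not the merged code:

{code}
// Resolve the fixed SAS token through the password lookup so a JCEKS
// credential provider can supply it instead of plain-text configuration.
String configuredFixedToken =
    getTrimmedPasswordString(FS_AZURE_SAS_FIXED_TOKEN, null);
{code}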



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -980,33 +981,59 @@ public AccessTokenProvider getTokenProvider() throws 
TokenAccessProviderExceptio
 }
   }
 
+  /**
+   * Returns the SASTokenProvider implementation to be used to generate SAS 
token.
+   * Users can choose between a custom implementation of {@link 
SASTokenProvider}
+   * or an in house implementation {@link FixedSASTokenProvider}.
+   * For a custom implementation, "fs.azure.sas.token.provider.type" needs to 
be provided.
+   * For a fixed SAS token, "fs.azure.sas.fixed.token" needs to be provided.
+   * In case both are provided, preference will be given to the custom 
implementation.
+   * Avoid using a custom tokenProvider implementation just to read the 
configured
+   * fixed token, as this could create confusion. Also, implementing the 
SASTokenProvider
+   * requires relying on the raw configurations. It is more stable to depend on
+   * the AbfsConfiguration with which a filesystem is initialized, and 
eliminate
+   * chances of dynamic modifications and spurious situations.
+   * @return sasTokenProvider object based on configurations provided
+   * @throws AzureBlobFileSystemException
+   */
   public SASTokenProvider getSASTokenProvider() throws 
AzureBlobFileSystemException {
 AuthType authType = getEnum(FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME, 
AuthType.SharedKey);
 if (authType != AuthType.SAS) {
   throw new SASTokenProviderException(String.format(
-"Invalid auth type: %s is being used, expectin

[jira] [Commented] (HADOOP-18508) support multiple s3a integration test runs on same bucket in parallel

2024-05-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843303#comment-17843303
 ] 

ASF GitHub Bot commented on HADOOP-18508:
-

hadoop-yetus commented on PR #5081:
URL: https://github.com/apache/hadoop/pull/5081#issuecomment-2093533154

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  1s |  |  The patch appears to 
include 17 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 20s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  38m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  17m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  18m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  17m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 54s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5081/9/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 8 new + 159 unchanged - 2 fixed = 167 total (was 
161)  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m 13s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 34s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m  1s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  5s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 289m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5081/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5081 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint 
markdownlint |
   | uname | Linux 224e3c033534 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ad85edda7c00e61b40ad231a03c0fea8cda55ed7 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64

[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-05-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843237#comment-17843237
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

steveloughran commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1589326534


##
hadoop-tools/hadoop-azure/src/site/markdown/abfs.md:
##
@@ -609,21 +610,119 @@ In case delegation token is enabled, and the config 
`fs.azure.delegation.token
 
 ### Shared Access Signature (SAS) Token Provider
 
-A Shared Access Signature (SAS) token provider supplies the ABFS connector 
with SAS
-tokens by implementing the SASTokenProvider interface.
-
-```xml
-
-  fs.azure.account.auth.type
-  SAS
-
-
-  fs.azure.sas.token.provider.type
-  
{fully-qualified-class-name-for-implementation-of-SASTokenProvider-interface}
-
-```
-
-The declared class must implement 
`org.apache.hadoop.fs.azurebfs.extensions.SASTokenProvider`.
+A shared access signature (SAS) provides secure delegated access to resources 
in
+your storage account. With a SAS, you have granular control over how a client 
can access your data.
+To know more about how SAS Authentication works refer to
+[Grant limited access to Azure Storage resources using shared access 
signatures 
(SAS)](https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview)
+
+There are three types of SAS supported by Azure Storage:
+- [User Delegation 
SAS](https://learn.microsoft.com/en-us/rest/api/storageservices/create-user-delegation-sas):
 Recommended for use with the ABFS Driver with HNS-enabled ADLS Gen2 accounts. It 
is an identity-based SAS that works at the blob/directory level.

Review Comment:
   this is the last change before we merge...





> [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider 
> Implementation
> 
>
> Key: HADOOP-18516
> URL: https://issues.apache.org/jira/browse/HADOOP-18516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
>
> This PR introduces a new configuration for Fixed SAS Tokens: 
> *"fs.azure.sas.fixed.token"*
> Using this new configuration, users can configure a fixed SAS Token in the 
> account settings files itself. Ideally, this should be used with SAS Tokens 
> that are scoped at a container or account level (Service or Account SAS), 
> which can be considered to be a constant for one account or container, over 
> multiple operations.
> The other method of using a SAS Token remains valid as well, where a user 
> provides a custom implementation of the SASTokenProvider interface, using 
> which a SAS Token is obtained.
> When an Account SAS Token is configured as the fixed SAS Token, and it is 
> used, it is ensured that operations are within the scope of the SAS Token.
> The code checks whether the fixed token and the token provider class 
> implementation are configured. If both are set, preference is 
> given to the custom SASTokenProvider implementation. It must be noted that if 
> such an implementation provides a SAS Token which has a lower scope than 
> Account SAS, some filesystem and service level operations might be out of 
> scope and may not succeed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


