Re: [PR] HDFS-17352. Add configuration to control whether DN delete this replica from disk when client requests a missing block [hadoop]

2024-02-23 Thread via GitHub


hadoop-yetus commented on PR #6559:
URL: https://github.com/apache/hadoop/pull/6559#issuecomment-1962286525

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:-------:|:-------:|
   | +0 :ok: |  reexec  |   0m 22s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 204m  3s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6559/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 310m  8s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.protocol.TestBlockListAsLongs |
   |   | hadoop.hdfs.tools.TestDFSAdmin |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6559/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6559 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux e1573661312f 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cada170bb5603c7921ae6eae957dff26deb76c5d |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6559/5/testReport/ |
   | Max. process+thread count | 4349 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 

Re: [PR] YARN-11657. Remove protobuf-2.5 as dependency of hadoop-yarn modules [hadoop]

2024-02-23 Thread via GitHub


hadoop-yetus commented on PR #6575:
URL: https://github.com/apache/hadoop/pull/6575#issuecomment-1962277549

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:-------:|:-------:|
   | +0 :ok: |  reexec  |   0m 22s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ branch-3.4 Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m 15s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  29m 44s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  compile  |  10m 32s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   8m 37s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |  14m 38s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  javadoc  |   5m 11s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   5m 29s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  34m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  20m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   9m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   8m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |  10m 13s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   5m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   5m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  33m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 628m  0s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6575/3/artifact/out/patch-unit-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 817m 32s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   | hadoop.hdfs.protocol.TestBlockListAsLongs |
   |   | 
hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6575/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6575 |
   | Optional Tests | dupname asflicense codespell detsecrets compile javac 
javadoc mvninstall mvnsite unit shadedclient xmllint shellcheck shelldocs |
   | uname | Linux 05a9ef72435e 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.4 / 24633bcb891f85b9425a9f64e2dc27d12886f3dc |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6575/3/testReport/ |
   | Max. process+thread count | 4931 (vs. ulimit of 5500) |

Re: [PR] YARN-11657. Remove protobuf-2.5 from hadoop-yarn-api module (#6575) [hadoop]

2024-02-23 Thread via GitHub


hadoop-yetus commented on PR #6581:
URL: https://github.com/apache/hadoop/pull/6581#issuecomment-1962275833

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:-------:|:-------:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 39s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 11s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  11m 34s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |  18m 11s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   4m 42s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  26m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  24m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  11m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  11m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |  14m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   4m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  27m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 607m 54s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6581/1/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  7s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 784m 59s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.TestReadStripedFileWithDNFailure |
   |   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
   |   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
   |   | hadoop.hdfs.protocol.TestBlockListAsLongs |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6581/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6581 |
   | Optional Tests | dupname asflicense codespell detsecrets compile javac 
javadoc mvninstall mvnsite unit shadedclient xmllint shellcheck shelldocs |
   | uname | Linux 1044b159e79c 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / f92c528071ae93f216bee610eff68ec33401d8b9 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6581/1/testReport/ |
   | Max. process+thread count | 4223 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6581/1/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, 

[jira] [Commented] (HADOOP-19082) S3A: Update AWS SDK V2 to 2.24.6

2024-02-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820281#comment-17820281
 ] 

ASF GitHub Bot commented on HADOOP-19082:
-----------------------------------------

hadoop-yetus commented on PR #6568:
URL: https://github.com/apache/hadoop/pull/6568#issuecomment-1962273796

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:-------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  15m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |  20m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   8m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   7m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  48m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  7s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  29m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  16m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  15m 42s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |  15m 29s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   8m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   7m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  49m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 733m 15s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6568/3/artifact/out/patch-unit-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 1018m  4s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.protocol.TestBlockListAsLongs |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6568/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6568 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
   | uname | Linux 8ba8950c227c 5.15.0-91-generic #101-Ubuntu SMP Tue Nov 14 
13:30:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 752b1f33ab596284f2b8f85ac1ac5c4a0900c83e |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test 

Re: [PR] HADOOP-19082: Update AWS SDK V2 to 2.24.1 [hadoop]

2024-02-23 Thread via GitHub


hadoop-yetus commented on PR #6568:
URL: https://github.com/apache/hadoop/pull/6568#issuecomment-1962273796

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:-------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  15m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |  20m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   8m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   7m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  48m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  7s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  29m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  16m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  15m 42s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |  15m 29s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   8m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   7m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  49m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 733m 15s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6568/3/artifact/out/patch-unit-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 1018m  4s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.protocol.TestBlockListAsLongs |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6568/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6568 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
   | uname | Linux 8ba8950c227c 5.15.0-91-generic #101-Ubuntu SMP Tue Nov 14 
13:30:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 752b1f33ab596284f2b8f85ac1ac5c4a0900c83e |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6568/3/testReport/ |
   | Max. process+thread count | 4275 (vs. ulimit of 5500) |
   | modules | C: hadoop-project . U: . |
   | Console output | 

Re: [PR] YARN-11626. Optimize ResourceManager's operations on Zookeeper metadata [hadoop]

2024-02-23 Thread via GitHub


XbaoWu commented on code in PR #6577:
URL: https://github.com/apache/hadoop/pull/6577#discussion_r1501352665


##########
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java:
##########
@@ -938,28 +938,36 @@ private void handleApplicationAttemptStateOp(
         attemptStateDataPB.getProto().toByteArray();
     LOG.debug("{} info for attempt: {} at: {}", operation, appAttemptId, path);
 
-    switch (operation) {
-    case UPDATE:
-      if (exists(path)) {
-        zkManager.safeSetData(path, attemptStateData, -1, zkAcl,
-            fencingNodePath);
-      } else {
-        zkManager.safeCreate(path, attemptStateData, zkAcl,
-            CreateMode.PERSISTENT, zkAcl, fencingNodePath);
-        LOG.debug("Path {} for {} didn't exist. Created a new znode to update"
-            + " the application attempt state.", path, appAttemptId);
+    try {
+      switch (operation) {
+      case UPDATE:
+        if (exists(path)) {
+          zkManager.safeSetData(path, attemptStateData, -1, zkAcl,
+              fencingNodePath);
+        } else {
+          zkManager.safeCreate(path, attemptStateData, zkAcl,
+              CreateMode.PERSISTENT, zkAcl, fencingNodePath);
+          LOG.debug("Path {} for {} didn't exist. Created a new znode to update"
+              + " the application attempt state.", path, appAttemptId);
 
+        }
+        break;
+      case STORE:
+        zkManager.safeCreate(path, attemptStateData, zkAcl, CreateMode.PERSISTENT,
+            zkAcl, fencingNodePath);
+        break;
+      case REMOVE:
+        zkManager.safeDelete(path, zkAcl, fencingNodePath);
+        break;
+      default:
+        break;
+      }
+    } catch (KeeperException.NoNodeException nne){
+      if(!exists(path)){

Review Comment:
   Thank you. I've modified and tested this.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17396. BootstrapStandby should download rollback image during RollingUpgrade [hadoop]

2024-02-23 Thread via GitHub


hadoop-yetus commented on PR #6583:
URL: https://github.com/apache/hadoop/pull/6583#issuecomment-1962238321

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:-------:|:-------:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  34m  5s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   5m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   1m 22s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  36m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 39s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 50s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 24s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch failed.  |
   | -1 :x: |  compile  |   1m 25s | 
[/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/1/artifact/out/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-hdfs-project in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  cc  |   1m 25s | 
[/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/1/artifact/out/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-hdfs-project in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javac  |   1m 25s | 
[/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/1/artifact/out/patch-compile-hadoop-hdfs-project-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-hdfs-project in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |   1m 15s | 
[/patch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/1/artifact/out/patch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  hadoop-hdfs-project in the patch failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | -1 :x: |  cc  |   1m 15s | 
[/patch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6583/1/artifact/out/patch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  hadoop-hdfs-project in the patch failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | -1 :x: |  javac  |   1m 15s | 

Re: [PR] YARN-11626. Optimize ResourceManager's operations on Zookeeper metadata [hadoop]

2024-02-23 Thread via GitHub


hiwangzhihui commented on code in PR #6577:
URL: https://github.com/apache/hadoop/pull/6577#discussion_r1501328992


##########
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java:
##########
@@ -938,28 +938,36 @@ private void handleApplicationAttemptStateOp(
         attemptStateDataPB.getProto().toByteArray();
     LOG.debug("{} info for attempt: {} at: {}", operation, appAttemptId, path);
 
-    switch (operation) {
-    case UPDATE:
-      if (exists(path)) {
-        zkManager.safeSetData(path, attemptStateData, -1, zkAcl,
-            fencingNodePath);
-      } else {
-        zkManager.safeCreate(path, attemptStateData, zkAcl,
-            CreateMode.PERSISTENT, zkAcl, fencingNodePath);
-        LOG.debug("Path {} for {} didn't exist. Created a new znode to update"
-            + " the application attempt state.", path, appAttemptId);
+    try {
+      switch (operation) {
+      case UPDATE:
+        if (exists(path)) {
+          zkManager.safeSetData(path, attemptStateData, -1, zkAcl,
+              fencingNodePath);
+        } else {
+          zkManager.safeCreate(path, attemptStateData, zkAcl,
+              CreateMode.PERSISTENT, zkAcl, fencingNodePath);
+          LOG.debug("Path {} for {} didn't exist. Created a new znode to update"
+              + " the application attempt state.", path, appAttemptId);
 
+        }
+        break;
+      case STORE:
+        zkManager.safeCreate(path, attemptStateData, zkAcl, CreateMode.PERSISTENT,
+            zkAcl, fencingNodePath);
+        break;
+      case REMOVE:
+        zkManager.safeDelete(path, zkAcl, fencingNodePath);
+        break;
+      default:
+        break;
+      }
+    } catch (KeeperException.NoNodeException nne){
+      if(!exists(path)){

Review Comment:
   Here, you just need to focus on the NoNodeException of the REMOVE action.
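   
   A minimal sketch of that suggestion (hypothetical, not taken from the patch; names follow the quoted hunk): scope the catch to the REMOVE branch only, so UPDATE and STORE failures still propagate.
   
   ```java
   case REMOVE:
     try {
       zkManager.safeDelete(path, zkAcl, fencingNodePath);
     } catch (KeeperException.NoNodeException nne) {
       // Removal is idempotent: the znode is already gone, so log and move on.
       LOG.info("Path {} for {} was already deleted.", path, appAttemptId);
     }
     break;
   ```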



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17396. BootstrapStandby should download rollback image during RollingUpgrade [hadoop]

2024-02-23 Thread via GitHub


goiri commented on code in PR #6583:
URL: https://github.com/apache/hadoop/pull/6583#discussion_r1501301861


##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSImageTestUtil.java:
##########
@@ -519,6 +519,23 @@ public static void assertNNHasCheckpoints(MiniDFSCluster cluster,
     }
   }
 
+  public static void assertNNHasRollbackCheckpoints(MiniDFSCluster cluster,
+      int nnIdx, List<Long> txids) {
+
+    for (File nameDir : getNameNodeCurrentDirs(cluster, nnIdx)) {
+      LOG.info("examining name dir with files: " +
+          Joiner.on(",").join(nameDir.listFiles()));
+      // Should have fsimage_N for the three checkpoints
+      LOG.info("Examining storage dir " + nameDir + " with contents: "
+          + StringUtils.join(nameDir.listFiles(), ", "));
+      for (long checkpointTxId : txids) {
+        File image = new File(nameDir,
+            NNStorage.getRollbackImageFileName(checkpointTxId));
+        assertTrue("Expected non-empty " + image, image.length() > 0);

Review Comment:
   OK, I see this pattern already exists.



##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandby.java:
##########
@@ -208,12 +210,21 @@ public void testRollingUpgradeBootstrapStandby() throws Exception {
 
     // BootstrapStandby should fail if the node has a future version
     // and the cluster isn't in rolling upgrade
-    bs.setConf(cluster.getConfiguration(1));
+    bs.setConf(cluster.getConfiguration(2));
     assertEquals("BootstrapStandby should return ERR_CODE_INVALID_VERSION",
         ERR_CODE_INVALID_VERSION, bs.run(new String[]{"-force"}));
 
     // Start rolling upgrade
     fs.rollingUpgrade(RollingUpgradeAction.PREPARE);
+    RollingUpgradeInfo info = fs.rollingUpgrade(RollingUpgradeAction.QUERY);
+    while (!info.createdRollbackImages()) {

Review Comment:
   This is unbounded. Can you use LambdaTestUtils#wait or similar?
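   
   A bounded sketch, assuming GenericTestUtils.waitFor as the helper (timeout values illustrative, not from the patch):
   
   ```java
   GenericTestUtils.waitFor(() -> {
     try {
       // Re-query until the rollback images have been created.
       return fs.rollingUpgrade(RollingUpgradeAction.QUERY)
           .createdRollbackImages();
     } catch (IOException e) {
       return false;
     }
   }, 100, 60_000); // poll every 100 ms, give up after 60 s
   ```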



##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestBootstrapStandby.java:
##########
@@ -253,6 +269,13 @@ public void testRollingUpgradeBootstrapStandby() throws Exception {
     // We should now be able to start the standby successfully
     restartNameNodesFromIndex(1, "-rollingUpgrade", "started");
 
+    for (int i = 1; i < maxNNCount; i++) {
+      assertTrue("NameNodes should all have the rollback FSImage",
+          cluster.getNameNode(i).getFSImage().hasRollbackFSImage());

Review Comment:
   Easier to read if you extract the `nn = cluster.getNameNode(i);`
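   
   i.e. something like:
   
   ```java
   for (int i = 1; i < maxNNCount; i++) {
     NameNode nn = cluster.getNameNode(i);
     assertTrue("NameNodes should all have the rollback FSImage",
         nn.getFSImage().hasRollbackFSImage());
   }
   ```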



##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSImageTestUtil.java:
##########
@@ -519,6 +519,23 @@ public static void assertNNHasCheckpoints(MiniDFSCluster cluster,
     }
   }
 
+  public static void assertNNHasRollbackCheckpoints(MiniDFSCluster cluster,
+      int nnIdx, List<Long> txids) {
+
+    for (File nameDir : getNameNodeCurrentDirs(cluster, nnIdx)) {
+      LOG.info("examining name dir with files: " +

Review Comment:
   Can you use the logger format?
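   
   That is, the SLF4J parameterized style, which defers assembling the message string until the log level is known to be enabled, e.g.:
   
   ```java
   LOG.info("examining name dir {} with files: {}",
       nameDir, Joiner.on(",").join(nameDir.listFiles()));
   ```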



##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSImageTestUtil.java:
##########
@@ -519,6 +519,23 @@ public static void assertNNHasCheckpoints(MiniDFSCluster cluster,
     }
   }
 
+  public static void assertNNHasRollbackCheckpoints(MiniDFSCluster cluster,
+      int nnIdx, List<Long> txids) {
+
+    for (File nameDir : getNameNodeCurrentDirs(cluster, nnIdx)) {
+      LOG.info("examining name dir with files: " +
+          Joiner.on(",").join(nameDir.listFiles()));
+      // Should have fsimage_N for the three checkpoints
+      LOG.info("Examining storage dir " + nameDir + " with contents: "
+          + StringUtils.join(nameDir.listFiles(), ", "));
+      for (long checkpointTxId : txids) {
+        File image = new File(nameDir,
+            NNStorage.getRollbackImageFileName(checkpointTxId));
+        assertTrue("Expected non-empty " + image, image.length() > 0);

Review Comment:
   An assert in the regular code? Should this be an exception?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17381. Distcp of EC files should not be limited to DFS. [hadoop]

2024-02-23 Thread via GitHub


hadoop-yetus commented on PR #6551:
URL: https://github.com/apache/hadoop/pull/6551#issuecomment-1962171606

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:-------:|:-------:|
   | +0 :ok: |  reexec  |  19m  9s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 18s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  18m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   2m 36s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6551/2/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html)
 |  hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |   2m 49s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6551/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  38m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  18m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  17m 52s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 42s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6551/2/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 7 new + 29 unchanged - 0 fixed = 36 total (was 
29)  |
   | +1 :green_heart: |  mvnsite  |   3m 35s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 11s | 
[/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6551/2/artifact/out/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  
hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
 with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 generated 11 new + 0 
unchanged - 0 fixed = 11 total (was 0)  |
   | +1 :green_heart: |  javadoc  |   2m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 48s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 23s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 42s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  16m 18s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  4s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 314m 24s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 

Re: [PR] HDFS-17381. Distcp of EC files should not be limited to DFS. [hadoop]

2024-02-23 Thread via GitHub


hadoop-yetus commented on PR #6551:
URL: https://github.com/apache/hadoop/pull/6551#issuecomment-1962143806

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:-------:|:-------:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 41s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  35m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  16m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 47s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   2m 38s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6551/3/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html)
 |  hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |   2m 48s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6551/3/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  33m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  17m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  16m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 17s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6551/3/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 7 new + 28 unchanged - 0 fixed = 35 total (was 
28)  |
   | +1 :green_heart: |  mvnsite  |   3m 37s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 11s | 
[/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6551/3/artifact/out/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  
hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
 with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 generated 11 new + 0 
unchanged - 0 fixed = 11 total (was 0)  |
   | +1 :green_heart: |  javadoc  |   2m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m 56s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 28s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 44s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  15m 22s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  4s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 275m 59s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 

[jira] [Commented] (HADOOP-19084) prune dependency exports of hadoop-* modules

2024-02-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820192#comment-17820192
 ] 

ASF GitHub Bot commented on HADOOP-19084:
-----------------------------------------

hadoop-yetus commented on PR #6582:
URL: https://github.com/apache/hadoop/pull/6582#issuecomment-1961912949

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:-------:|:-------:|
   | +0 :ok: |  reexec  |   0m 22s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 36s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   8m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  75m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   8m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   8m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  24m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 24s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 25s |  |  
hadoop-yarn-applications-mawo-core in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 122m  2s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6582/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6582 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux ce865468d6a5 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4c459094a0c7825c525e9f8b0a05467e4fc37fae |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6582/1/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6582/1/console |
   | versions | git=2.25.1 

Re: [PR] HADOOP-19084. Prune hadoop-common transitive dependencies (#6574) [hadoop]

2024-02-23 Thread via GitHub


hadoop-yetus commented on PR #6582:
URL: https://github.com/apache/hadoop/pull/6582#issuecomment-1961912949

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:-------:|:-------:|
   | +0 :ok: |  reexec  |   0m 22s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 36s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   8m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  75m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   8m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   8m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  24m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 24s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 25s |  |  
hadoop-yarn-applications-mawo-core in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 122m  2s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6582/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6582 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux ce865468d6a5 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4c459094a0c7825c525e9f8b0a05467e4fc37fae |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6582/1/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6582/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[jira] [Commented] (HADOOP-19084) prune dependency exports of hadoop-* modules

2024-02-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820159#comment-17820159
 ] 

ASF GitHub Bot commented on HADOOP-19084:
-

steveloughran opened a new pull request, #6582:
URL: https://github.com/apache/hadoop/pull/6582

   #6574 for trunk
   
   Exclude more artifacts which are dependencies of hadoop-* modules, with the 
goal of keeping conflict out of downstream applications.
   
   In particular we have pruned the dependencies of:
   - zookeeper
   - other libraries referencing logging
   
   This keeps slf4j-log4j12 and log4j12 off the classpath of applications 
importing hadoop-common.
   
   Somehow logback references do still surface; applications pulling in 
hadoop-common directly or indirectly should review their imports carefully.
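   
   One quick way for downstream projects to check (a standard Maven command, 
shown here only as an example; the logging group id comes from the dependency 
tree quoted in the JIRA below):
   
   {code}
   mvn dependency:tree -Dincludes=ch.qos.logback
   {code}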
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> prune dependency exports of hadoop-* modules
> 
>
> Key: HADOOP-19084
> URL: https://issues.apache.org/jira/browse/HADOOP-19084
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> this is probably caused by HADOOP-18613:
> ZK is pulling in some extra transitive stuff which surfaces in applications 
> which import hadoop-common into their poms. It doesn't seem to show up in our 
> distro, but downstream you get warnings about duplicate logging stuff
> {code}
> |  +- org.apache.zookeeper:zookeeper:jar:3.8.3:compile
> |  |  +- org.apache.zookeeper:zookeeper-jute:jar:3.8.3:compile
> |  |  |  \- (org.apache.yetus:audience-annotations:jar:0.12.0:compile - 
> omitted for duplicate)
> |  |  +- org.apache.yetus:audience-annotations:jar:0.12.0:compile
> |  |  +- (io.netty:netty-handler:jar:4.1.94.Final:compile - omitted for 
> conflict with 4.1.100.Final)
> |  |  +- (io.netty:netty-transport-native-epoll:jar:4.1.94.Final:compile - 
> omitted for conflict with 4.1.100.Final)
> |  |  +- (org.slf4j:slf4j-api:jar:1.7.30:compile - omitted for duplicate)
> |  |  +- ch.qos.logback:logback-core:jar:1.2.10:compile
> |  |  +- ch.qos.logback:logback-classic:jar:1.2.10:compile
> |  |  |  +- (ch.qos.logback:logback-core:jar:1.2.10:compile - omitted for 
> duplicate)
> |  |  |  \- (org.slf4j:slf4j-api:jar:1.7.32:compile - omitted for conflict 
> with 1.7.30)
> |  |  \- (commons-io:commons-io:jar:2.11.0:compile - omitted for conflict 
> with 2.14.0)
> {code}
> proposed: exclude the zk dependencies we either override ourselves or don't 
> need. 
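
A minimal sketch of what such an exclusion could look like on the zookeeper 
dependency in hadoop-common's pom (the artifact list here is illustrative, 
taken from the tree above; it is not the exact set the patch excludes):

{code:xml}
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <exclusions>
    <exclusion>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-core</artifactId>
    </exclusion>
    <exclusion>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
    </exclusion>
    <exclusion>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}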



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HADOOP-19084. Prune hadoop-common transitive dependencies (#6574) [hadoop]

2024-02-23 Thread via GitHub


steveloughran opened a new pull request, #6582:
URL: https://github.com/apache/hadoop/pull/6582

   #6574 for trunk
   
   Exclude more artifacts which are dependencies of hadoop-* modules, with the 
goal of keeping conflict out of downstream applications.
   
   In particular we have pruned the dependencies of:
   - zookeeper
   - other libraries referencing logging
   
   This keeps slf4j-log4j12 and log4j12 off the classpath of applications 
importing hadoop-common.
   
   Somehow logback references do still surface; applications pulling in 
hadoop-common directly or indirectly should review their imports carefully.
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19084) prune dependency exports of hadoop-* modules

2024-02-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820158#comment-17820158
 ] 

ASF GitHub Bot commented on HADOOP-19084:
-

steveloughran merged PR #6574:
URL: https://github.com/apache/hadoop/pull/6574




> prune dependency exports of hadoop-* modules
> 
>
> Key: HADOOP-19084
> URL: https://issues.apache.org/jira/browse/HADOOP-19084
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> this is probably caused by HADOOP-18613:
> ZK is pulling in some extra transitive stuff which surfaces in applications 
> which import hadoop-common into their poms. It doesn't seem to show up in our 
> distro, but downstream you get warnings about duplicate logging stuff
> {code}
> |  +- org.apache.zookeeper:zookeeper:jar:3.8.3:compile
> |  |  +- org.apache.zookeeper:zookeeper-jute:jar:3.8.3:compile
> |  |  |  \- (org.apache.yetus:audience-annotations:jar:0.12.0:compile - 
> omitted for duplicate)
> |  |  +- org.apache.yetus:audience-annotations:jar:0.12.0:compile
> |  |  +- (io.netty:netty-handler:jar:4.1.94.Final:compile - omitted for 
> conflict with 4.1.100.Final)
> |  |  +- (io.netty:netty-transport-native-epoll:jar:4.1.94.Final:compile - 
> omitted for conflict with 4.1.100.Final)
> |  |  +- (org.slf4j:slf4j-api:jar:1.7.30:compile - omitted for duplicate)
> |  |  +- ch.qos.logback:logback-core:jar:1.2.10:compile
> |  |  +- ch.qos.logback:logback-classic:jar:1.2.10:compile
> |  |  |  +- (ch.qos.logback:logback-core:jar:1.2.10:compile - omitted for 
> duplicate)
> |  |  |  \- (org.slf4j:slf4j-api:jar:1.7.32:compile - omitted for conflict 
> with 1.7.30)
> |  |  \- (commons-io:commons-io:jar:2.11.0:compile - omitted for conflict 
> with 2.14.0)
> {code}
> proposed: exclude the zk dependencies we either override ourselves or don't 
> need. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19084. Prune some of the transitive dependencies of hadoop-common [hadoop]

2024-02-23 Thread via GitHub


steveloughran merged PR #6574:
URL: https://github.com/apache/hadoop/pull/6574


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19065) Update Protocol Buffers installation to 3.21.12

2024-02-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820156#comment-17820156
 ] 

ASF GitHub Bot commented on HADOOP-19065:
-

steveloughran commented on PR #6576:
URL: https://github.com/apache/hadoop/pull/6576#issuecomment-1961748672

   I've created https://issues.apache.org/jira/browse/HADOOP-19087 as the JIRA 
for follow-on stuff for 3.4.1, which will mainly be packaging/jar changes. 
Let's target that one with this.




> Update Protocol Buffers installation to 3.21.12 
> 
>
> Key: HADOOP-19065
> URL: https://issues.apache.org/jira/browse/HADOOP-19065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: huangzhaobo
>Assignee: huangzhaobo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Update docs and docker script to cover downloading the 3.21.12 protobuf 
> compiler



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19065. Update Protocol Buffers installation to 3.21.12 [hadoop]

2024-02-23 Thread via GitHub


steveloughran commented on PR #6576:
URL: https://github.com/apache/hadoop/pull/6576#issuecomment-1961748672

   I've created https://issues.apache.org/jira/browse/HADOOP-19087 as the JIRA 
for follow-on stuff for 3.4.1, which will mainly be packaging/jar changes. 
Let's target that one with this.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19083) provide hadoop binary tarball without aws v2 sdk

2024-02-23 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19083:

Summary: provide hadoop binary tarball without aws v2 sdk  (was: hadoop 
binary tarball to exclude aws v2 sdk)

> provide hadoop binary tarball without aws v2 sdk
> 
>
> Key: HADOOP-19083
> URL: https://issues.apache.org/jira/browse/HADOOP-19083
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> Have the hadoop binary .tar.gz exclude the aws v2 sdk by default. This SDK 
> brings the total size of the distribution to about 1 GB.
> Proposed:
> * add a profile to include the aws sdk in the dist module (sketched below)
> * disable it by default
> Instead we document which version is needed. 
> The hadoop-aws and hadoop-cloud-storage maven artifacts will declare their 
> dependencies, so apps building with those get to do the download.
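
A rough sketch of the proposed opt-in profile in the dist module (the profile 
id and placement are hypothetical; software.amazon.awssdk:bundle is the v2 SDK 
artifact hadoop-aws depends on):

{code:xml}
<!-- hypothetical opt-in profile for hadoop-dist/pom.xml;
     with no <activation> element, the profile is off by default,
     so the default tarball stays small -->
<profile>
  <id>aws-sdk-bundle</id>
  <dependencies>
    <dependency>
      <groupId>software.amazon.awssdk</groupId>
      <artifactId>bundle</artifactId>
    </dependency>
  </dependencies>
</profile>
{code}

Builders who want the SDK in the tarball would then opt in explicitly, e.g. 
{{mvn package -Pdist,aws-sdk-bundle ...}}.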



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19065) Update Protocol Buffers installation to 3.21.12

2024-02-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820148#comment-17820148
 ] 

ASF GitHub Bot commented on HADOOP-19065:
-

steveloughran commented on PR #6576:
URL: https://github.com/apache/hadoop/pull/6576#issuecomment-1961725347

   I concur




> Update Protocol Buffers installation to 3.21.12 
> 
>
> Key: HADOOP-19065
> URL: https://issues.apache.org/jira/browse/HADOOP-19065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: huangzhaobo
>Assignee: huangzhaobo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Update docs and docker script to cover downloading the 3.21.12 protobuf 
> compiler



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19065. Update Protocol Buffers installation to 3.21.12 [hadoop]

2024-02-23 Thread via GitHub


steveloughran commented on PR #6576:
URL: https://github.com/apache/hadoop/pull/6576#issuecomment-1961725347

   I concur


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] YARN-11657. Remove protobuf-2.5 from hadoop-yarn-api module (#6575) [hadoop]

2024-02-23 Thread via GitHub


steveloughran opened a new pull request, #6581:
URL: https://github.com/apache/hadoop/pull/6581

   
   PR #6575 for branch-3.3
   
   The import of protobuf-java-2.5 in the hadoop-yarn-api module is downgraded 
from "compile" to "provided"
   
   This removes share/hadoop/yarn/lib/protobuf-java-2.5.0.jar from the 
distribution.
   
   It is still found under
   share/hadoop/yarn/timelineservice/lib/protobuf-java-2.5.0.jar
   
   Note: this PR also updates the LICENSE-binary reference to protobuf from 
3.6 to 3.21.12 to match what is in hadoop-thirdparty; the merge created a 
conflict at the text level, and rather than resolve it by picking 3.6 or 3.7 I 
set it to the current value.
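   
   In pom terms the change amounts to this (a sketch; the actual patch wires 
the scope through a property, seen later in review as 
${transient.protobuf2.scope}, rather than hard-coding it):
   
   {code:xml}
   <dependency>
     <groupId>com.google.protobuf</groupId>
     <artifactId>protobuf-java</artifactId>
     <version>2.5.0</version>
     <scope>provided</scope> <!-- was the default scope, compile -->
   </dependency>
   {code}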
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [X] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18910) ABFS: Adding Support for MD5 Hash based integrity verification of the request content during transport

2024-02-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820145#comment-17820145
 ] 

ASF GitHub Bot commented on HADOOP-18910:
-

mukund-thakur commented on PR #6069:
URL: https://github.com/apache/hadoop/pull/6069#issuecomment-1961713969

   Can you please create a backport PR on branch-3.4 and run the tests? 




> ABFS: Adding Support for MD5 Hash based integrity verification of the request 
> content during transport 
> ---
>
> Key: HADOOP-18910
> URL: https://issues.apache.org/jira/browse/HADOOP-18910
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Azure Storage Supports Content-MD5 Request Headers in Both Read and Append 
> APIs.
> Read: [Path - Read - REST API (Azure Storage Services) | Microsoft 
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/read]
> Append: [Path - Update - REST API (Azure Storage Services) | Microsoft 
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/update]
> This change makes the client-side changes to support them. In the read 
> request, we send the appropriate header, in response to which the server 
> returns the MD5 hash of the data it sends back. On the client we tally this 
> against the MD5 hash computed from the data received.
> In the append request, we compute the MD5 hash of the data we are sending to 
> the server and specify it in the appropriate header. The server, on finding 
> that header, tallies it against the MD5 hash it computes on the data 
> received.
> This whole checksum validation support is guarded behind a config, disabled 
> by default because with "https" the integrity of the data is preserved 
> anyway. It is an additional data integrity check, and it has a performance 
> impact as well.
> Users can decide whether to enable it by setting the following config to 
> *"true"* or *"false"* respectively (see the sketch below). *Config: 
> "fs.azure.enable.checksum.validation"*
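
For reference, enabling it is a standard site-configuration entry using the 
config name given above (the value shown here is just an example):

{code:xml}
<property>
  <name>fs.azure.enable.checksum.validation</name>
  <!-- default is false; true adds an MD5 check per read/append -->
  <value>true</value>
</property>
{code}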



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18910: [ABFS] Adding Support for MD5 Hash based integrity verification of the request content during transport [hadoop]

2024-02-23 Thread via GitHub


mukund-thakur commented on PR #6069:
URL: https://github.com/apache/hadoop/pull/6069#issuecomment-1961713969

   Can you please create a backport PR on branch-3.4 and run the tests? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Hadoop-18759: [ABFS][Backoff-Optimization] Have a Static retry policy for connection timeout. [hadoop]

2024-02-23 Thread via GitHub


mukund-thakur commented on PR #5881:
URL: https://github.com/apache/hadoop/pull/5881#issuecomment-1961713553

   Can you please create a backport PR on branch-3.4 and run the tests? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11657. Remove protobuf-2.5 from hadoop-yarn-api module (#6575) [hadoop]

2024-02-23 Thread via GitHub


steveloughran commented on PR #6580:
URL: https://github.com/apache/hadoop/pull/6580#issuecomment-1961701109

   This is #6575 for trunk


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] YARN-11657. Remove protobuf-2.5 from hadoop-yarn-api module (#6575) [hadoop]

2024-02-23 Thread via GitHub


steveloughran opened a new pull request, #6580:
URL: https://github.com/apache/hadoop/pull/6580

   
   The import of protobuf-java-2.5 in the hadoop-yarn-api module is downgraded 
from "compile" to "provided"
   
   This removes share/hadoop/yarn/lib/protobuf-java-2.5.0.jar from the 
distribution.
   
   It is still found under
   share/hadoop/yarn/timelineservice/lib/protobuf-java-2.5.0.jar
   
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [X] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11657. Remove protobuf-2.5 as dependency of hadoop-yarn modules [hadoop]

2024-02-23 Thread via GitHub


steveloughran merged PR #6575:
URL: https://github.com/apache/hadoop/pull/6575


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11657. Remove protobuf-2.5 as dependency of hadoop-yarn modules [hadoop]

2024-02-23 Thread via GitHub


steveloughran commented on PR #6575:
URL: https://github.com/apache/hadoop/pull/6575#issuecomment-1961692880

   pushed up a change which updates the building doc


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19018) Release Hadoop 3.4.0

2024-02-23 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820120#comment-17820120
 ] 

Steve Loughran commented on HADOOP-19018:
-

I've just created HADOOP-19087; I'm going to move the packaging PRs that I 
think are too late/risky for 3.4.0 there.

> Release Hadoop 3.4.0
> 
>
> Key: HADOOP-19018
> URL: https://issues.apache.org/jira/browse/HADOOP-19018
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.4.0
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>
> Confirmed features to be included in the release:
> - Enhanced functionality for YARN Federation.
> - Redesigned resource allocation in YARN Capacity Scheduler
> - Optimization of HDFS RBF.
> - Introduction of fine-grained global locks for DataNodes.
> - Improvements in the stability of HDFS EC, and more.
> - Fixes for important CVEs.
> *Issues that need to be addressed in hadoop-3.4.0-RC0 version.*
> 1.  confirm the JIRA target version/fix version is 3.4.0 to ensure that the 
> version setting is correct.
> 2. confirm the highlight of hadoop-3.4.0.
> 3. backport branch-3.4.0/branch-3.4.
> {code:java}
>   HADOOP-19040. mvn site commands fails due to MetricsSystem And 
> MetricsSystemImpl changes. 
>   YARN-11634. [Addendum] Speed-up TestTimelineClient. 
>   MAPREDUCE-7468. [Addendum] Fix TestMapReduceChildJVM unit tests.
>   
>   Revert HDFS-16016. BPServiceActor to provide new thread to handle IBR.
> {code}
> 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19087) Release Hadoop 3.4.1

2024-02-23 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-19087:
---

 Summary: Release Hadoop 3.4.1
 Key: HADOOP-19087
 URL: https://issues.apache.org/jira/browse/HADOOP-19087
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.4.0
Reporter: Steve Loughran


Release a minor update to hadoop 3.4.0 with

* packaging enhancements
* updated dependencies (where viable)
* fixes for critical issues found after 3.4.0 released
* low-risk feature enhancements (those which don't impact schedule...)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19018) Release Hadoop 3.4.0

2024-02-23 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19018:

Summary: Release Hadoop 3.4.0  (was: Release 3.4.0)

> Release Hadoop 3.4.0
> 
>
> Key: HADOOP-19018
> URL: https://issues.apache.org/jira/browse/HADOOP-19018
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.4.0
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>
> Confirmed features to be included in the release:
> - Enhanced functionality for YARN Federation.
> - Redesigned resource allocation in YARN Capacity Scheduler
> - Optimization of HDFS RBF.
> - Introduction of fine-grained global locks for DataNodes.
> - Improvements in the stability of HDFS EC, and more.
> - Fixes for important CVEs.
> *Issues that need to be addressed in hadoop-3.4.0-RC0 version.*
> 1.  confirm the JIRA target version/fix version is 3.4.0 to ensure that the 
> version setting is correct.
> 2. confirm the highlight of hadoop-3.4.0.
> 3. backport branch-3.4.0/branch-3.4.
> {code:java}
>   HADOOP-19040. mvn site commands fails due to MetricsSystem And 
> MetricsSystemImpl changes. 
>   YARN-11634. [Addendum] Speed-up TestTimelineClient. 
>   MAPREDUCE-7468. [Addendum] Fix TestMapReduceChildJVM unit tests.
>   
>   Revert HDFS-16016. BPServiceActor to provide new thread to handle IBR.
> {code}
> 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19065) Update Protocol Buffers installation to 3.21.12

2024-02-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820116#comment-17820116
 ] 

ASF GitHub Bot commented on HADOOP-19065:
-

steveloughran commented on PR #6526:
URL: https://github.com/apache/hadoop/pull/6526#issuecomment-1961583185

   @slfan1989 Docker worked on the 3.3.6 release for me: a MacBook M1 for the 
arm release and AMD x86 on EC2 for the x86 release. Doing that in us-west-2 
built pretty fast, as that is where the maven central repo is.




> Update Protocol Buffers installation to 3.21.12 
> 
>
> Key: HADOOP-19065
> URL: https://issues.apache.org/jira/browse/HADOOP-19065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: huangzhaobo
>Assignee: huangzhaobo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Update docs and docker script to cover downloading the 3.21.12 protobuf 
> compiler



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19065. Improve Protocol Buffers installation [hadoop]

2024-02-23 Thread via GitHub


steveloughran commented on PR #6526:
URL: https://github.com/apache/hadoop/pull/6526#issuecomment-1961583185

   @slfan1989 Docker worked on the 3.3.6 release for me: a MacBook M1 for the 
arm release and AMD x86 on EC2 for the x86 release. Doing that in us-west-2 
built pretty fast, as that is where the maven central repo is.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19082: Update AWS SDK V2 to 2.24.1 [hadoop]

2024-02-23 Thread via GitHub


steveloughran commented on PR #6568:
URL: https://github.com/apache/hadoop/pull/6568#issuecomment-1961579837

   @HarshitGupta11 I need to know where you qualified the SDK...


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11657. Remove protobuf-2.5 as dependency of hadoop-yarn modules [hadoop]

2024-02-23 Thread via GitHub


steveloughran commented on code in PR #6575:
URL: https://github.com/apache/hadoop/pull/6575#discussion_r1500859099


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml:
##
@@ -72,6 +72,7 @@
 
       <groupId>com.google.protobuf</groupId>
       <artifactId>protobuf-java</artifactId>
+      <scope>${transient.protobuf2.scope}</scope>

Review Comment:
   1. The dependent poms (hadoop-common...) aren't publishing it any more, just 
using it for compilation.
   2. I don't trust the base declaration to do the right thing.
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11657. Remove protobuf-2.5 as dependency of hadoop-yarn modules [hadoop]

2024-02-23 Thread via GitHub


steveloughran commented on PR #6575:
URL: https://github.com/apache/hadoop/pull/6575#issuecomment-1961575509

   I don't want to backport this to 3.4.0; I'm going to merge into branch-3.4 
and trunk but not 3.4.0, as the lib is still needed in one place and there's a 
risk of breaking things.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19071) Update maven-surefire-plugin from 3.0.0 to 3.2.5

2024-02-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820109#comment-17820109
 ] 

ASF GitHub Bot commented on HADOOP-19071:
-

steveloughran commented on PR #6537:
URL: https://github.com/apache/hadoop/pull/6537#issuecomment-1961569379

   Pity. I do think those failures need to be addressed, as they are less 
test-runner problems than brittle tests *which will need to be fixed 
eventually*




> Update maven-surefire-plugin from 3.0.0 to 3.2.5  
> -
>
> Key: HADOOP-19071
> URL: https://issues.apache.org/jira/browse/HADOOP-19071
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, common
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19071. Update maven-surefire-plugin from 3.0.0 to 3.2.5. [hadoop]

2024-02-23 Thread via GitHub


steveloughran commented on PR #6537:
URL: https://github.com/apache/hadoop/pull/6537#issuecomment-1961569379

   Pity. I do think those failures need to be addressed, as they are less 
test-runner problems than brittle tests *which will need to be fixed 
eventually*


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17394. [FGL] Remove unused WriteHoldCount of FSNamesystemLock [hadoop]

2024-02-23 Thread via GitHub


xinglin commented on PR #6571:
URL: https://github.com/apache/hadoop/pull/6571#issuecomment-1961467118

   Can we trigger a new build?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17365. EC: Add extra redunency configuration in checkStreamerFailures to prevent data loss. [hadoop]

2024-02-23 Thread via GitHub


hadoop-yetus commented on PR #6517:
URL: https://github.com/apache/hadoop/pull/6517#issuecomment-1961295707

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   6m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 58s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   2m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   1m 28s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  20m 29s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   2m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   2m 43s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/4/artifact/out/blanks-eol.txt)
 |  The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 39s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/4/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 28 new + 33 unchanged - 0 fixed = 
61 total (was 33)  |
   | +1 :green_heart: |  mvnsite  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 49s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  |  84m 49s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +0 :ok: |  asflicense  |   0m 24s |  |  ASF License check generated no 
output?  |
   |  |   | 201m  0s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
   |   | 
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyExcludeSlowNodes |
   |   | hadoop.hdfs.server.mover.TestStorageMover |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.TestDatanodeLayoutUpgrade |
   |   | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead |
   |   | hadoop.hdfs.tools.TestDebugAdmin |

Re: [PR] YARN-11657. Remove protobuf-2.5 as dependency of hadoop-yarn modules [hadoop]

2024-02-23 Thread via GitHub


hadoop-yetus commented on PR #6575:
URL: https://github.com/apache/hadoop/pull/6575#issuecomment-1961147527

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.4 Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m 22s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  43m  1s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  compile  |  17m 43s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  16m 59s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |  22m 44s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  javadoc  |   8m 38s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   7m 39s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  50m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  31m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  16m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  16m  4s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |  15m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   8m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   7m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  49m 49s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 799m 43s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6575/2/artifact/out/patch-unit-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 1094m  1s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.protocol.TestBlockListAsLongs |
   |   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6575/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6575 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
   | uname | Linux 35c13fbe0370 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.4 / a726eb4e4079467e91b46316dcfd56bb220589d8 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6575/2/testReport/ |
   | Max. process+thread count | 2773 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Commented] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time

2024-02-23 Thread Emil Kleszcz (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820012#comment-17820012
 ] 

Emil Kleszcz commented on HADOOP-19052:
---

Hi [~yl],

Thank you for your prompt reply. The disks aren't full, and I do not observe 
any network instabilities in our monitoring + mco between the cluster nodes. 
We also have long-running heavy-write (also append) Spark jobs. I guess this 
might be a similar case to yours.

Cheers,
Emil

> Hadoop use Shell command to get the count of the hard link which takes a lot 
> of time
> 
>
> Key: HADOOP-19052
> URL: https://issues.apache.org/jira/browse/HADOOP-19052
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: Hadopp 3.3.4
>Reporter: liang yu
>Priority: Major
>  Labels: pull-request-available
> Attachments: debuglog.png
>
>
> Using Hadoop 3.3.4
>  
> When the QPS of `append` executions is very high, at a rate of above 1/s. 
>  
> We found that the write speed in hadoop is very slow. We traced some 
> datanodes' log and find that there is a warning :
> {code:java}
> 2024-01-26 11:09:44,292 WARN impl.FsDatasetImpl 
> (InstrumentedLock.java:logWaitWarning(165)) Waited above threshold(300 ms) to 
> acquire lock: lock identifier: FsDatasetRwlock waitTimeMs=336 ms. Suppressed 0 
> lock wait warnings. Longest suppressed waitTimeMs=0. The stack trace is
> java.lang.Thread.getStackTrace(Thread.java:1559)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1060)
> org.apache.hadoop.util.InstrumentedLock.logWaitWarning(InstrumentedLock.java:171)
> org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:222)
> org.apache.hadoop.util.InstrumentedLock.lock(InstrumentedLock.java:105)
> org.apache.hadoop.util.AutoCloseableLock.acquire(AutoCloseableLock.java:67)
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:230)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1313)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:764)
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:176)
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:110)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:293)
> java.lang.Thread.run(Thread.java:748)
> {code}
>  
> Then we traced the method 
> _org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)_, 
> printed how long each command took to finish, and found that it takes us 
> 700ms to get the linkCount of the file, which is really slow.
> !debuglog.png!
>  
> We traced the code and found that Java 1.8 uses a shell command to get the 
> linkCount; each execution starts a new process and waits for it to fork, and 
> when the QPS is very high, forking the process can sometimes take a long 
> time.
> Here is the shell command:
> {code:java}
> stat -c%h /path/to/file
> {code}
>  
> Solution:
> For a FileStore that supports the "unix" file attributes, we can use 
> _Files.getAttribute(f.toPath(), "unix:nlink")_ to get the linkCount; this 
> method doesn't need to start a new process and returns the result in a very 
> short time (see the sketch below).
>  
> When we use this method to get the file linkCount, we rarely see the WARN 
> log above, even when the QPS of append executions is high.
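
A minimal sketch of the approach described above, with the shell command kept 
as a fallback for file stores without the "unix" attribute view (the class 
and method names here are illustrative, not the actual Hadoop patch):

{code:java}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.file.Files;
import java.nio.file.Path;

public final class HardLinkCount {
  /** Returns the hard-link count, avoiding a process fork where possible. */
  static int getLinkCount(Path f) throws IOException, InterruptedException {
    if (Files.getFileStore(f).supportsFileAttributeView("unix")) {
      // Fast path: a single native stat() call, no fork/exec.
      return (Integer) Files.getAttribute(f, "unix:nlink");
    }
    // Fallback: the original shell-based approach ("stat -c%h <file>").
    Process p = new ProcessBuilder("stat", "-c%h", f.toString()).start();
    try (BufferedReader r =
        new BufferedReader(new InputStreamReader(p.getInputStream()))) {
      String line = r.readLine();
      p.waitFor();
      return Integer.parseInt(line.trim());
    }
  }
}
{code}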



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time

2024-02-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819996#comment-17819996
 ] 

ASF GitHub Bot commented on HADOOP-19052:
-

hadoop-yetus commented on PR #6527:
URL: https://github.com/apache/hadoop/pull/6527#issuecomment-1961012858

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 22s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/branch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/branch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |   0m 22s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  root in trunk failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. 
 |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/buildtool-branch-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/buildtool-branch-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  The patch fails to run checkstyle in hadoop-common  |
   | -1 :x: |  mvnsite  |   0m 22s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  hadoop-common in trunk failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | -1 :x: |  spotbugs  |   0m 22s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   2m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 22s | 
[/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javac  |   0m 22s | 
[/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |   0m 22s | 

Re: [PR] HADOOP-19052.Hadoop use Shell command to get the count of the hard link which takes a lot of time [hadoop]

2024-02-23 Thread via GitHub


hadoop-yetus commented on PR #6527:
URL: https://github.com/apache/hadoop/pull/6527#issuecomment-1961012858

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 22s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/branch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/branch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |   0m 22s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  root in trunk failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. 
 |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/buildtool-branch-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/buildtool-branch-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  The patch fails to run checkstyle in hadoop-common  |
   | -1 :x: |  mvnsite  |   0m 22s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  hadoop-common in trunk failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | -1 :x: |  spotbugs  |   0m 22s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   2m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 22s | 
[/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javac  |   0m 22s | 
[/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |   0m 22s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/3/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | -1 :x: |  javac  |   0m 22s | 

Re: [PR] HDFS-17299. Adding rack failure tolerance when creating a new file [hadoop]

2024-02-23 Thread via GitHub


hadoop-yetus commented on PR #6566:
URL: https://github.com/apache/hadoop/pull/6566#issuecomment-1960990799

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  20m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  18m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   5m 14s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   3m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 41s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   2m 52s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/7/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  shadedclient  |   5m 15s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  18m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  17m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 41s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/7/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 17 new + 244 unchanged - 2 fixed = 261 total (was 
246)  |
   | +1 :green_heart: |  mvnsite  |   3m 31s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   2m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   3m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 37s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  39m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 37s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 47s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 275m 42s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 14s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 525m 10s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.protocol.TestBlockListAsLongs |
   |   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.tools.TestDFSAdmin |
   |   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
   |   | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 

[jira] [Commented] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time

2024-02-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819986#comment-17819986
 ] 

ASF GitHub Bot commented on HADOOP-19052:
-

hadoop-yetus commented on PR #6527:
URL: https://github.com/apache/hadoop/pull/6527#issuecomment-1960975979

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 21s | 
[/branch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/branch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |   0m 23s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  root in trunk failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. 
 |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/buildtool-branch-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/buildtool-branch-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  The patch fails to run checkstyle in hadoop-common  |
   | -1 :x: |  mvnsite  |   0m 21s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javadoc  |   0m 21s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  hadoop-common in trunk failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | -1 :x: |  spotbugs  |   0m 22s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   2m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | 
[/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javac  |   0m 22s | 
[/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |   0m 22s | 

Re: [PR] HADOOP-19052.Hadoop use Shell command to get the count of the hard link which takes a lot of time [hadoop]

2024-02-23 Thread via GitHub


hadoop-yetus commented on PR #6527:
URL: https://github.com/apache/hadoop/pull/6527#issuecomment-1960975979

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 21s | 
[/branch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/branch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |   0m 23s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  root in trunk failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08. 
 |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/buildtool-branch-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/buildtool-branch-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  The patch fails to run checkstyle in hadoop-common  |
   | -1 :x: |  mvnsite  |   0m 21s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javadoc  |   0m 21s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  hadoop-common in trunk failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | -1 :x: |  spotbugs  |   0m 22s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   2m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | 
[/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javac  |   0m 22s | 
[/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |   0m 22s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/2/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | -1 :x: |  javac  |   0m 22s | 

[jira] [Commented] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time

2024-02-23 Thread liang yu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819980#comment-17819980
 ] 

liang yu commented on HADOOP-19052:
---

Hi [~tr0k],

It seems that your error occurs because of network instability, which is 
different from mine.

In my situation, I have a Spark Streaming program that is constantly appending 
to many files, which causes a high QPS of append executions.

> Hadoop use Shell command to get the count of the hard link which takes a lot 
> of time
> 
>
> Key: HADOOP-19052
> URL: https://issues.apache.org/jira/browse/HADOOP-19052
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: Hadoop 3.3.4
>Reporter: liang yu
>Priority: Major
>  Labels: pull-request-available
> Attachments: debuglog.png
>
>
> Using Hadoop 3.3.4
>  
> When the QPS of `append` executions is very high, at a rate above 1/s, we 
> found that the write speed in Hadoop is very slow. We traced some datanodes' 
> logs and found the following warning:
> {code:java}
> 2024-01-26 11:09:44,292 WARN impl.FsDatasetImpl 
> (InstrumentedLock.java:logWaitWarning(165)) Waited above threshold (300 ms) to 
> acquire lock: lock identifier: FsDatasetRwlock waitTimeMs=336 ms. Suppressed 0 
> lock wait warnings. Longest suppressed waitTimeMs=0. The stack trace is
> java.lang.Thread.getStackTrace(Thread.java:1559)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1060)
> org.apache.hadoop.util.InstrumentedLock.logWaitWarning(InstrumentedLock.java:171)
> org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:222)
> org.apache.hadoop.util.InstrumentedLock.lock(InstrumentedLock.java:105)
> org.apache.hadoop.util.AutoCloseableLock.acquire(AutoCloseableLock.java:67)
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:230)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1313)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:764)
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:176)
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:110)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:293)
> java.lang.Thread.run(Thread.java:748)
> {code}
>  
> Then we traced the method 
> _org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)_ 
> and printed how long each command takes to finish; we found that getting the 
> linkCount of the file takes about 700 ms, which is really slow.
>  
> We traced the code and found that Java 1.8 uses a shell command to get the 
> linkCount; each call starts a new process and waits for it to fork. When the 
> QPS is very high, forking the process can sometimes take a long time.
> Here is the shell command.
> {code:java}
> stat -c%h /path/to/file
> {code}
>  
> Solution:
> For a FileStore that supports the "unix" file attribute view, we can use 
> _Files.getAttribute(f.toPath(), "unix:nlink")_ to get the linkCount; this 
> method doesn't need to start a new process and returns the result in a very 
> short time.
>  
> When we use this method to get the file linkCount, we rarely see the WARN log 
> above, even when the append QPS is high.
>  
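To make the proposal above concrete, here is a minimal, hypothetical Java 
sketch of the attribute-based lookup with a fallback to the existing shell 
path. The class and method names are invented for illustration; this is not 
the actual PR #6527 patch:

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public final class LinkCountSketch {

  // Read the hard-link count without forking a child process, when possible.
  static int getLinkCount(File file) throws IOException {
    Path path = file.toPath();
    // The "unix" attribute view is available on typical Linux/macOS file
    // stores; "unix:nlink" is the file's hard-link count (st_nlink).
    if (Files.getFileStore(path).supportsFileAttributeView("unix")) {
      return ((Number) Files.getAttribute(path, "unix:nlink")).intValue();
    }
    // Fallback: the slow, fork-based path the issue describes
    // ("stat -c%h <file>"), elided here.
    return getLinkCountViaShell(file);
  }

  static int getLinkCountViaShell(File file) throws IOException {
    throw new UnsupportedOperationException("shell fallback elided in sketch");
  }
}
{code}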



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time

2024-02-23 Thread liang yu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819980#comment-17819980
 ] 

liang yu edited comment on HADOOP-19052 at 2/23/24 9:11 AM:


Hi [~tr0k],

It seems that your error occurs because of network instability or a full disk, 
which is different from mine.

In my situation, I have a Spark Streaming program that is constantly appending 
to many files, which causes a high QPS of append executions; when many appends 
run concurrently, the _stat_ shell command invoked during each append takes a 
lot of time.

So my patch mainly focuses on avoiding this problem, because the _stat_ shell 
command costs a lot of time when the QPS of append executions is high.


was (Author: JIRAUSER299608):
Hi [~tr0k],

It seems that your error occurs because of network instability or a full disk, 
which is different from mine.

In my situation, I have a Spark Streaming program that is constantly appending 
to many files, which causes a high QPS of append executions.

So my patch mainly focuses on how to avoid such problems when the QPS of 
append executions is high.

> Hadoop use Shell command to get the count of the hard link which takes a lot 
> of time
> 
>
> Key: HADOOP-19052
> URL: https://issues.apache.org/jira/browse/HADOOP-19052
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: Hadoop 3.3.4
>Reporter: liang yu
>Priority: Major
>  Labels: pull-request-available
> Attachments: debuglog.png
>
>
> Using Hadoop 3.3.4
>  
> When the QPS of `append` executions is very high, at a rate above 1/s, we 
> found that the write speed in Hadoop is very slow. We traced some datanodes' 
> logs and found the following warning:
> {code:java}
> 2024-01-26 11:09:44,292 WARN impl.FsDatasetImpl 
> (InstrumentedLock.java:logWaitWarning(165)) Waited above threshold (300 ms) to 
> acquire lock: lock identifier: FsDatasetRwlock waitTimeMs=336 ms. Suppressed 0 
> lock wait warnings. Longest suppressed waitTimeMs=0. The stack trace is
> java.lang.Thread.getStackTrace(Thread.java:1559)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1060)
> org.apache.hadoop.util.InstrumentedLock.logWaitWarning(InstrumentedLock.java:171)
> org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:222)
> org.apache.hadoop.util.InstrumentedLock.lock(InstrumentedLock.java:105)
> org.apache.hadoop.util.AutoCloseableLock.acquire(AutoCloseableLock.java:67)
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:230)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1313)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:764)
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:176)
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:110)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:293)
> java.lang.Thread.run(Thread.java:748)
> {code}
>  
> Then we traced the method 
> _org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)_ 
> and printed how long each command takes to finish; we found that getting the 
> linkCount of the file takes about 700 ms, which is really slow.
> !debuglog.png!
>  
> We traced the code and found that Java 1.8 uses a shell command to get the 
> linkCount; each call starts a new process and waits for it to fork. When the 
> QPS is very high, forking the process can sometimes take a long time.
> Here is the shell command.
> {code:java}
> stat -c%h /path/to/file
> {code}
>  
> Solution:
> For a FileStore that supports the "unix" file attribute view, we can use 
> _Files.getAttribute(f.toPath(), "unix:nlink")_ to get the linkCount; this 
> method doesn't need to start a new process and returns the result in a very 
> short time.
>  
> When we use this method to get the file linkCount, we rarely see the WARN log 
> above, even when the append QPS is high.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time

2024-02-23 Thread liang yu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819980#comment-17819980
 ] 

liang yu edited comment on HADOOP-19052 at 2/23/24 9:06 AM:


Hi [~tr0k],

It seems that your error occurs because of network instability or a full disk, 
which is different from mine.

In my situation, I have a Spark Streaming program that is constantly appending 
to many files, which causes a high QPS of append executions.

So my patch mainly focuses on how to avoid such problems when the QPS of 
append executions is high.


was (Author: JIRAUSER299608):
Hi [~tr0k],

It seems that your error occurs because of network instability, which is 
different from mine.

In my situation, I have a Spark Streaming program that is constantly appending 
to many files, which causes a high QPS of append executions.

> Hadoop use Shell command to get the count of the hard link which takes a lot 
> of time
> 
>
> Key: HADOOP-19052
> URL: https://issues.apache.org/jira/browse/HADOOP-19052
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: Hadoop 3.3.4
>Reporter: liang yu
>Priority: Major
>  Labels: pull-request-available
> Attachments: debuglog.png
>
>
> Using Hadoop 3.3.4
>  
> When the QPS of `append` executions is very high, at a rate above 1/s, we 
> found that the write speed in Hadoop is very slow. We traced some datanodes' 
> logs and found the following warning:
> {code:java}
> 2024-01-26 11:09:44,292 WARN impl.FsDatasetImpl 
> (InstrumentedLock.java:logWaitWarning(165)) Waited above threshold (300 ms) to 
> acquire lock: lock identifier: FsDatasetRwlock waitTimeMs=336 ms. Suppressed 0 
> lock wait warnings. Longest suppressed waitTimeMs=0. The stack trace is
> java.lang.Thread.getStackTrace(Thread.java:1559)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1060)
> org.apache.hadoop.util.InstrumentedLock.logWaitWarning(InstrumentedLock.java:171)
> org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:222)
> org.apache.hadoop.util.InstrumentedLock.lock(InstrumentedLock.java:105)
> org.apache.hadoop.util.AutoCloseableLock.acquire(AutoCloseableLock.java:67)
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:230)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1313)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:764)
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:176)
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:110)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:293)
> java.lang.Thread.run(Thread.java:748)
> {code}
>  
> Then we traced the method 
> _org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)_ 
> and printed how long each command takes to finish; we found that getting the 
> linkCount of the file takes about 700 ms, which is really slow.
> !debuglog.png!
>  
> We traced the code and found that Java 1.8 uses a shell command to get the 
> linkCount; each call starts a new process and waits for it to fork. When the 
> QPS is very high, forking the process can sometimes take a long time.
> Here is the shell command.
> {code:java}
> stat -c%h /path/to/file
> {code}
>  
> Solution:
> For a FileStore that supports the "unix" file attribute view, we can use 
> _Files.getAttribute(f.toPath(), "unix:nlink")_ to get the linkCount; this 
> method doesn't need to start a new process and returns the result in a very 
> short time.
>  
> When we use this method to get the file linkCount, we rarely see the WARN log 
> above, even when the append QPS is high.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time

2024-02-23 Thread liang yu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang yu updated HADOOP-19052:
--
Description: 
Using Hadoop 3.3.4

 

When the QPS of `append` executions is very high, at a rate above 1/s, we 
found that the write speed in Hadoop is very slow. We traced some datanodes' 
logs and found the following warning:
{code:java}
2024-01-26 11:09:44,292 WARN impl.FsDatasetImpl 
(InstrumentedLock.java:logWaitWarning(165)) Waited above threshold (300 ms) to 
acquire lock: lock identifier: FsDatasetRwlock waitTimeMs=336 ms. Suppressed 0 
lock wait warnings. Longest suppressed waitTimeMs=0. The stack trace is
java.lang.Thread.getStackTrace(Thread.java:1559)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1060)
org.apache.hadoop.util.InstrumentedLock.logWaitWarning(InstrumentedLock.java:171)
org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:222)
org.apache.hadoop.util.InstrumentedLock.lock(InstrumentedLock.java:105)
org.apache.hadoop.util.AutoCloseableLock.acquire(AutoCloseableLock.java:67)
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:230)
org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1313)
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:764)
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:176)
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:110)
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:293)
java.lang.Thread.run(Thread.java:748)
{code}
 

Then we traced the method 
_org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)_ 
and printed how long each command takes to finish; we found that getting the 
linkCount of the file takes about 700 ms, which is really slow.

!debuglog.png!

We traced the code and found that Java 1.8 uses a shell command to get the 
linkCount; each call starts a new process and waits for it to fork. When the 
QPS is very high, forking the process can sometimes take a long time.

Here is the shell command.
{code:java}
stat -c%h /path/to/file
{code}
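
For a rough sense of the cost difference, here is a hypothetical 
micro-comparison of this fork-based shell path against the attribute lookup 
proposed in the Solution below; the class name and timing harness are 
illustrative only and not part of the patch:
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class NlinkCompare {
  public static void main(String[] args) throws IOException, InterruptedException {
    Path f = Paths.get(args[0]);

    long t0 = System.nanoTime();
    Object nlink = Files.getAttribute(f, "unix:nlink"); // in-process, no fork
    long t1 = System.nanoTime();

    // The shell path: fork + exec "stat -c%h <file>", then wait for exit.
    Process p = new ProcessBuilder("stat", "-c%h", f.toString()).start();
    p.waitFor();
    long t2 = System.nanoTime();

    System.out.printf("nlink=%s attribute=%dus shell=%dus%n",
        nlink, (t1 - t0) / 1_000, (t2 - t1) / 1_000);
  }
}
{code}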
 

Solution:

For a FileStore that supports the "unix" file attribute view, we can use 
_Files.getAttribute(f.toPath(), "unix:nlink")_ to get the linkCount; this 
method doesn't need to start a new process and returns the result in a very 
short time.

When we use this method to get the file linkCount, we rarely see the WARN log 
above, even when the append QPS is high.

 

  was:
Using Hadoop 3.3.4

 

When the QPS of `append` executions is very high, at a rate above 1/s, we 
found that the write speed in Hadoop is very slow. We traced some datanodes' 
logs and found the following warning:
{code:java}
2024-01-26 11:09:44,292 WARN impl.FsDatasetImpl 
(InstrumentedLock.java:logWaitWarning(165)) Waited above threshold (300 ms) to 
acquire lock: lock identifier: FsDatasetRwlock waitTimeMs=336 ms. Suppressed 0 
lock wait warnings. Longest suppressed waitTimeMs=0. The stack trace is
java.lang.Thread.getStackTrace(Thread.java:1559)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1060)
org.apache.hadoop.util.InstrumentedLock.logWaitWarning(InstrumentedLock.java:171)
org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:222)
org.apache.hadoop.util.InstrumentedLock.lock(InstrumentedLock.java:105)
org.apache.hadoop.util.AutoCloseableLock.acquire(AutoCloseableLock.java:67)
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:230)
org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1313)
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:764)
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:176)
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:110)
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:293)
java.lang.Thread.run(Thread.java:748)
{code}
 

Then we traced the method 
_org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)_ 
and printed how long each command takes to finish; we found that getting the 
linkCount of the file takes about 700 ms, which is really slow.

We traced the code and found that Java 1.8 uses a shell command to get the 
linkCount; each call starts a new process and waits for it to fork. When the 
QPS is very high, forking the process can sometimes take a long time.

Here is the shell command.

[jira] [Updated] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time

2024-02-23 Thread liang yu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang yu updated HADOOP-19052:
--
Attachment: debuglog.png

> Hadoop use Shell command to get the count of the hard link which takes a lot 
> of time
> 
>
> Key: HADOOP-19052
> URL: https://issues.apache.org/jira/browse/HADOOP-19052
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: Hadoop 3.3.4
>Reporter: liang yu
>Priority: Major
>  Labels: pull-request-available
> Attachments: debuglog.png
>
>
> Using Hadoop 3.3.4
>  
> When the QPS of `append` executions is very high, at a rate above 1/s, we 
> found that the write speed in Hadoop is very slow. We traced some datanodes' 
> logs and found the following warning:
> {code:java}
> 2024-01-26 11:09:44,292 WARN impl.FsDatasetImpl 
> (InstrumentedLock.java:logWaitWarning(165)) Waited above threshold (300 ms) to 
> acquire lock: lock identifier: FsDatasetRwlock waitTimeMs=336 ms. Suppressed 0 
> lock wait warnings. Longest suppressed waitTimeMs=0. The stack trace is
> java.lang.Thread.getStackTrace(Thread.java:1559)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1060)
> org.apache.hadoop.util.InstrumentedLock.logWaitWarning(InstrumentedLock.java:171)
> org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:222)
> org.apache.hadoop.util.InstrumentedLock.lock(InstrumentedLock.java:105)
> org.apache.hadoop.util.AutoCloseableLock.acquire(AutoCloseableLock.java:67)
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:230)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1313)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:764)
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:176)
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:110)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:293)
> java.lang.Thread.run(Thread.java:748)
> {code}
>  
> Then we traced the method 
> _org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)_ 
> and printed how long each command takes to finish; we found that getting the 
> linkCount of the file takes about 700 ms, which is really slow.
>  
> We traced the code and found that Java 1.8 uses a shell command to get the 
> linkCount; each call starts a new process and waits for it to fork. When the 
> QPS is very high, forking the process can sometimes take a long time.
> Here is the shell command.
> {code:java}
> stat -c%h /path/to/file
> {code}
>  
> Solution:
> For a FileStore that supports the "unix" file attribute view, we can use 
> _Files.getAttribute(f.toPath(), "unix:nlink")_ to get the linkCount; this 
> method doesn't need to start a new process and returns the result in a very 
> short time.
>  
> When we use this method to get the file linkCount, we rarely see the WARN log 
> above, even when the append QPS is high.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time

2024-02-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819971#comment-17819971
 ] 

ASF GitHub Bot commented on HADOOP-19052:
-

yangjiandan commented on code in PR #6527:
URL: https://github.com/apache/hadoop/pull/6527#discussion_r1477765548


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HardLink.java:
##
@@ -220,6 +226,10 @@ public static int getLinkCount(File fileName) throws 
IOException {
   throw new FileNotFoundException(fileName + " not found.");
 }
 
+if 
(Files.getFileStore(fileName.toPath()).supportsFileAttributeView(FileAttributeView))
 {

Review Comment:
   
Files.getFileStore(fileName.toPath()).supportsFileAttributeView(FileAttributeView)
  should be abstracted into a method and covered by a unit test.
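
   A sketch of the refactor suggested here might look like the following, 
where the helper name supportsNlinkAttribute is hypothetical and not code 
from the PR:
{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

final class HardLinkSupport {
  // Hypothetical extraction of the support check into one named method,
  // so getLinkCount() stays readable and a unit test can target it directly.
  static boolean supportsNlinkAttribute(File file) throws IOException {
    return Files.getFileStore(file.toPath()).supportsFileAttributeView("unix");
  }
}
{code}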



##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHardLink.java:
##
@@ -219,6 +230,16 @@ public void testGetLinkCount() throws IOException {
 assertEquals(1, getLinkCount(x3));
   }
 
+  @Test
+public void testGetLinkCountFromFileAttribute() throws IOException {
+assertTrue(supportsHardLink(x1));

Review Comment:
   supportsHardLink(x1) may be false if the OS is not Unix-style.
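
   One portable way to handle that, assuming JUnit 4's Assume API is 
available to this test, is to skip rather than fail on platforms without the 
"unix" attribute view. This is an illustrative sketch, not the PR's test:
{code:java}
import static org.junit.Assert.assertEquals;
import static org.junit.Assume.assumeTrue;

import java.io.File;
import java.nio.file.Files;
import org.junit.Test;

public class TestNlinkPortability {
  @Test
  public void testLinkCountWhereSupported() throws Exception {
    File f = File.createTempFile("nlink", ".dat");
    f.deleteOnExit();
    // Skip (rather than fail) when the platform lacks the "unix" view,
    // which is exactly the non-Unix concern raised above.
    assumeTrue(Files.getFileStore(f.toPath()).supportsFileAttributeView("unix"));
    int nlink = ((Number) Files.getAttribute(f.toPath(), "unix:nlink")).intValue();
    assertEquals(1, nlink); // a fresh temp file has exactly one hard link
  }
}
{code}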





> Hadoop use Shell command to get the count of the hard link which takes a lot 
> of time
> 
>
> Key: HADOOP-19052
> URL: https://issues.apache.org/jira/browse/HADOOP-19052
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: Hadoop 3.3.4
>Reporter: liang yu
>Priority: Major
>  Labels: pull-request-available
>
> Using Hadoop 3.3.4
>  
> When the QPS of `append` executions is very high, at a rate above 1/s, we 
> found that the write speed in Hadoop is very slow. We traced some datanodes' 
> logs and found the following warning:
> {code:java}
> 2024-01-26 11:09:44,292 WARN impl.FsDatasetImpl 
> (InstrumentedLock.java:logWaitWarning(165)) Waited above threshold (300 ms) to 
> acquire lock: lock identifier: FsDatasetRwlock waitTimeMs=336 ms. Suppressed 0 
> lock wait warnings. Longest suppressed waitTimeMs=0. The stack trace is
> java.lang.Thread.getStackTrace(Thread.java:1559)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1060)
> org.apache.hadoop.util.InstrumentedLock.logWaitWarning(InstrumentedLock.java:171)
> org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:222)
> org.apache.hadoop.util.InstrumentedLock.lock(InstrumentedLock.java:105)
> org.apache.hadoop.util.AutoCloseableLock.acquire(AutoCloseableLock.java:67)
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:230)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1313)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:764)
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:176)
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:110)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:293)
> java.lang.Thread.run(Thread.java:748)
> {code}
>  
> Then we traced the method 
> _org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)_ 
> and printed how long each command takes to finish; we found that getting the 
> linkCount of the file takes about 700 ms, which is really slow.
>  
> We traced the code and found that Java 1.8 uses a shell command to get the 
> linkCount; each call starts a new process and waits for it to fork. When the 
> QPS is very high, forking the process can sometimes take a long time.
> Here is the shell command.
> {code:java}
> stat -c%h /path/to/file
> {code}
>  
> Solution:
> For a FileStore that supports the "unix" file attribute view, we can use 
> _Files.getAttribute(f.toPath(), "unix:nlink")_ to get the linkCount; this 
> method doesn't need to start a new process and returns the result in a very 
> short time.
>  
> When we use this method to get the file linkCount, we rarely see the WARN log 
> above, even when the append QPS is high.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19052.Hadoop use Shell command to get the count of the hard link which takes a lot of time [hadoop]

2024-02-23 Thread via GitHub


yangjiandan commented on code in PR #6527:
URL: https://github.com/apache/hadoop/pull/6527#discussion_r1477765548


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HardLink.java:
##
@@ -220,6 +226,10 @@ public static int getLinkCount(File fileName) throws 
IOException {
   throw new FileNotFoundException(fileName + " not found.");
 }
 
+if 
(Files.getFileStore(fileName.toPath()).supportsFileAttributeView(FileAttributeView))
 {

Review Comment:
   
Files.getFileStore(fileName.toPath()).supportsFileAttributeView(FileAttributeView)
  should be abstracted into a method and covered by a unit test.



##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHardLink.java:
##
@@ -219,6 +230,16 @@ public void testGetLinkCount() throws IOException {
 assertEquals(1, getLinkCount(x3));
   }
 
+  @Test
+public void testGetLinkCountFromFileAttribute() throws IOException {
+assertTrue(supportsHardLink(x1));

Review Comment:
   supportsHardLink(x1) may be false if the OS is not Unix-style.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org