[jira] [Commented] (HADOOP-18981) Move oncrpc/portmap from hadoop-nfs to hadoop-common

2024-01-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802436#comment-17802436
 ] 

ASF GitHub Bot commented on HADOOP-18981:
-----------------------------------------

xinglin commented on PR #6280:
URL: https://github.com/apache/hadoop/pull/6280#issuecomment-1876538794

   > Note that as hadoop-common is a provided dependency of hadoop-nfs, with 
explicit use of org.apache.hadoop.util classes in MountdBase, is there any 
deployment/use of the hadoop-nfs module without hadoop-common in the cp?
   
   If this is the case, I guess we can be assured that hadoop-common is always 
available whenever hadoop-nfs is needed or depended on. And moving classes 
around as done by this PR won't break the other modules which used to depend 
on the hadoop-nfs module. Am I understanding it correctly?  
   




> Move oncrpc/portmap from hadoop-nfs to hadoop-common
> -----------------------------------------------------
>
> Key: HADOOP-18981
> URL: https://issues.apache.org/jira/browse/HADOOP-18981
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Xing Lin
>Assignee: Xing Lin
>Priority: Major
>  Labels: pull-request-available
>
> We want to use udpserver/client for other use cases, rather than only for 
> NFS. One such use case is to export NameNodeHAState for NameNodes via a UDP 
> server. 






Re: [PR] HADOOP-18981. moved oncrpc/portmap from hadoop-common-project/hadoop-nfs to hadoop-common-project/hadoop-common [hadoop]

2024-01-03 Thread via GitHub


xinglin commented on PR #6280:
URL: https://github.com/apache/hadoop/pull/6280#issuecomment-1876538794

   > Note that as hadoop-common is a provided dependency of hadoop-nfs, with 
explicit use of org.apache.hadoop.util classes in MountdBase, is there any 
deployment/use of the hadoop-nfs module without hadoop-common in the cp?
   
   If this is the case, I guess we can be assured that hadoop-common is always 
available whenever hadoop-nfs is needed or depended on. And moving classes 
around as done by this PR won't break the other modules which used to depend 
on the hadoop-nfs module. Am I understanding it correctly?  
   





[jira] [Commented] (HADOOP-18981) Move oncrpc/portmap from hadoop-nfs to hadoop-common

2024-01-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802413#comment-17802413
 ] 

ASF GitHub Bot commented on HADOOP-18981:
-----------------------------------------

xinglin commented on PR #6280:
URL: https://github.com/apache/hadoop/pull/6280#issuecomment-1876492612

   @steveloughran, thanks for taking a look at this change. I added a couple of 
package-info.java files. This is my first time doing this, and I am not sure 
whether I am adding these files correctly. Could you take a look?
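   For reference, a minimal sketch of what such a file usually looks like in 
Hadoop, assuming the classification annotations commonly used across the 
project (the actual files in this PR may differ):
   
   ```java
   // package-info.java: a hedged sketch for the moved package, assuming the
   // usual Hadoop audience/stability annotations rather than this PR's
   // exact content.
   @InterfaceAudience.Private
   @InterfaceStability.Unstable
   package org.apache.hadoop.oncrpc;
   
   import org.apache.hadoop.classification.InterfaceAudience;
   import org.apache.hadoop.classification.InterfaceStability;
   ```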
   




> Move oncrpc/portmap from hadoop-nfs to hadoop-common
> -----------------------------------------------------
>
> Key: HADOOP-18981
> URL: https://issues.apache.org/jira/browse/HADOOP-18981
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Xing Lin
>Assignee: Xing Lin
>Priority: Major
>  Labels: pull-request-available
>
> We want to use udpserver/client for other use cases, rather than only for 
> NFS. One such use case is to export NameNodeHAState for NameNodes via a UDP 
> server. 






Re: [PR] HADOOP-18981. moved oncrpc/portmap from hadoop-common-project/hadoop-nfs to hadoop-common-project/hadoop-common [hadoop]

2024-01-03 Thread via GitHub


xinglin commented on PR #6280:
URL: https://github.com/apache/hadoop/pull/6280#issuecomment-1876492612

   @steveloughran, thanks for taking a look at this change. I added a couple of 
package-info.java files. This is my first time doing this, and I am not sure 
whether I am adding these files correctly. Could you take a look?
   





Re: [PR] HDFS-17322. RetryCache#MAX_CAPACITY seems to be MIN_CAPACITY. [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6405:
URL: https://github.com/apache/hadoop/pull/6405#issuecomment-1876480172

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  16m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  17m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  16m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 51s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 13s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 232m  4s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6405/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6405 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux d1829842040e 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8b53d10478e5467d854f6ea34f12467da91b7f24 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6405/1/testReport/ |
   | Max. process+thread count | 1931 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6405/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Created] (HADOOP-19023) ITestS3AConcurrentOps#testParallelRename intermittent timeout failure

2024-01-03 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-19023:
-------------------------------------

 Summary: ITestS3AConcurrentOps#testParallelRename intermittent 
timeout failure
 Key: HADOOP-19023
 URL: https://issues.apache.org/jira/browse/HADOOP-19023
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani


We need to configure a higher timeout for the test.
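
One hedged way to do this, as a sketch (a JUnit 4 timeout rule; the S3A scale 
tests may instead tune their own timeout property):

{code:java}
import java.util.concurrent.TimeUnit;
import org.junit.Rule;
import org.junit.rules.Timeout;

// Hypothetical sketch: give the long-running scale test more headroom via a
// JUnit 4 timeout rule; the actual fix may adjust an S3A test setting instead.
public class ParallelRenameTimeoutSketch {
  @Rule
  public Timeout testTimeout = new Timeout(30, TimeUnit.MINUTES);
}
{code}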

 
{code:java}
[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 256.281 
s <<< FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps
[ERROR] 
testParallelRename(org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps)  Time 
elapsed: 72.565 s  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSApiCallTimeoutException: Writing Object on 
fork-0005/test/testParallelRename-source0: 
software.amazon.awssdk.core.exception.ApiCallTimeoutException: Client execution 
did not complete before the specified timeout configuration: 15000 millis
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:372)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:347)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:214)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:532)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:620)
at 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
at 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at 
org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
at 
org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: software.amazon.awssdk.core.exception.ApiCallTimeoutException: 
Client execution did not complete before the specified timeout configuration: 
15000 millis
at 
software.amazon.awssdk.core.exception.ApiCallTimeoutException$BuilderImpl.build(ApiCallTimeoutException.java:97)
at 
software.amazon.awssdk.core.exception.ApiCallTimeoutException.create(ApiCallTimeoutException.java:38)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.generateApiCallTimeoutException(ApiCallTimeoutTrackingStage.java:151)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.handleInterruptedException(ApiCallTimeoutTrackingStage.java:139)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.translatePipelineException(ApiCallTimeoutTrackingStage.java:107)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:62)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:50)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:32)
at 
software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at 
software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
at 
software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:224)
at 

[jira] [Commented] (HADOOP-18980) S3A credential provider remapping: make extensible

2024-01-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802396#comment-17802396
 ] 

ASF GitHub Bot commented on HADOOP-18980:
-----------------------------------------

virajjasani commented on PR #6406:
URL: https://github.com/apache/hadoop/pull/6406#issuecomment-1876317165

   Tested against `us-west-2`:
   
   
   1.
   ```
   [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
256.281 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps
   [ERROR] 
testParallelRename(org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps)  Time 
elapsed: 72.565 s  <<< ERROR!
   org.apache.hadoop.fs.s3a.AWSApiCallTimeoutException: Writing Object on 
fork-0005/test/testParallelRename-source0: 
software.amazon.awssdk.core.exception.ApiCallTimeoutException: Client execution 
did not complete before the specified timeout configuration: 15000 millis
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:372)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:347)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:214)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:532)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:620)
at 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
at 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at 
org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
at 
org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
   Caused by: software.amazon.awssdk.core.exception.ApiCallTimeoutException: 
Client execution did not complete before the specified timeout configuration: 
15000 millis
at 
software.amazon.awssdk.core.exception.ApiCallTimeoutException$BuilderImpl.build(ApiCallTimeoutException.java:97)
at 
software.amazon.awssdk.core.exception.ApiCallTimeoutException.create(ApiCallTimeoutException.java:38)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.generateApiCallTimeoutException(ApiCallTimeoutTrackingStage.java:151)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.handleInterruptedException(ApiCallTimeoutTrackingStage.java:139)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.translatePipelineException(ApiCallTimeoutTrackingStage.java:107)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:62)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:50)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:32)
at 
software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at 
software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
at 
software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:224)
at 

Re: [PR] HADOOP-18980. S3A credential provider remapping: make extensible [hadoop]

2024-01-03 Thread via GitHub


virajjasani commented on PR #6406:
URL: https://github.com/apache/hadoop/pull/6406#issuecomment-1876317165

   Tested against `us-west-2`:
   
   
   1.
   ```
   [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
256.281 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps
   [ERROR] 
testParallelRename(org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps)  Time 
elapsed: 72.565 s  <<< ERROR!
   org.apache.hadoop.fs.s3a.AWSApiCallTimeoutException: Writing Object on 
fork-0005/test/testParallelRename-source0: 
software.amazon.awssdk.core.exception.ApiCallTimeoutException: Client execution 
did not complete before the specified timeout configuration: 15000 millis
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:372)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:347)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:214)
at 
org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:532)
at 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:620)
at 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
at 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at 
org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
at 
org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
   Caused by: software.amazon.awssdk.core.exception.ApiCallTimeoutException: 
Client execution did not complete before the specified timeout configuration: 
15000 millis
at 
software.amazon.awssdk.core.exception.ApiCallTimeoutException$BuilderImpl.build(ApiCallTimeoutException.java:97)
at 
software.amazon.awssdk.core.exception.ApiCallTimeoutException.create(ApiCallTimeoutException.java:38)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.generateApiCallTimeoutException(ApiCallTimeoutTrackingStage.java:151)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.handleInterruptedException(ApiCallTimeoutTrackingStage.java:139)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.translatePipelineException(ApiCallTimeoutTrackingStage.java:107)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:62)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:50)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:32)
at 
software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at 
software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
at 
software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
at 
software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:224)
at 
software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:103)
at 
software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:173)
at 

[jira] [Commented] (HADOOP-19022) ITestS3AConfiguration#testRequestTimeout failure

2024-01-03 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802395#comment-17802395
 ] 

Viraj Jasani commented on HADOOP-19022:
---------------------------------------

It's a small test, but perhaps it would be good to cover both cases: timeouts 
of more than 15s and of less than 15s.
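
A hedged sketch of exercising both sides of the floor (the 15s minimum and the 
property name come from this issue; the helper class below is hypothetical, 
not the actual test code):

{code:java}
import java.time.Duration;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class RequestTimeoutFloorSketch {
  // the 15s minimum network operation duration described in this issue
  static final Duration MINIMUM = Duration.ofSeconds(15);

  // mirrors the enforcement: requested values below the floor are raised to it
  static long effectiveTimeoutMs(Configuration conf) {
    long requested = conf.getTimeDuration(
        "fs.s3a.connection.request.timeout", 0L, TimeUnit.MILLISECONDS);
    return Math.max(requested, MINIMUM.toMillis());
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.setLong("fs.s3a.connection.request.timeout", 120L);
    System.out.println(effectiveTimeoutMs(conf));   // 15000: below the floor
    conf.setLong("fs.s3a.connection.request.timeout", 30000L);
    System.out.println(effectiveTimeoutMs(conf));   // 30000: above the floor
  }
}
{code}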

> ITestS3AConfiguration#testRequestTimeout failure
> ------------------------------------------------
>
> Key: HADOOP-19022
> URL: https://issues.apache.org/jira/browse/HADOOP-19022
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Priority: Minor
>
> "fs.s3a.connection.request.timeout" should be specified in milliseconds as per
> {code:java}
> Duration apiCallTimeout = getDuration(conf, REQUEST_TIMEOUT,
> DEFAULT_REQUEST_TIMEOUT_DURATION, TimeUnit.MILLISECONDS, Duration.ZERO); 
> {code}
> The test fails consistently because it sets a 120 ms timeout, which is less 
> than 15s (the minimum network operation duration) and hence gets reset to 
> 15000 ms by the enforcement.
>  
> {code:java}
> [ERROR] testRequestTimeout(org.apache.hadoop.fs.s3a.ITestS3AConfiguration)  
> Time elapsed: 0.016 s  <<< FAILURE!
> java.lang.AssertionError: Configured fs.s3a.connection.request.timeout is 
> different than what AWS sdk configuration uses internally expected:<120> 
> but was:<15000>
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.failNotEquals(Assert.java:835)
>   at org.junit.Assert.assertEquals(Assert.java:647)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3AConfiguration.testRequestTimeout(ITestS3AConfiguration.java:444)
>  {code}






[jira] [Created] (HADOOP-19022) ITestS3AConfiguration#testRequestTimeout failure

2024-01-03 Thread Viraj Jasani (Jira)
Viraj Jasani created HADOOP-19022:
-------------------------------------

 Summary: ITestS3AConfiguration#testRequestTimeout failure
 Key: HADOOP-19022
 URL: https://issues.apache.org/jira/browse/HADOOP-19022
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Viraj Jasani


"fs.s3a.connection.request.timeout" should be specified in milliseconds as per
{code:java}
Duration apiCallTimeout = getDuration(conf, REQUEST_TIMEOUT,
DEFAULT_REQUEST_TIMEOUT_DURATION, TimeUnit.MILLISECONDS, Duration.ZERO); 
{code}
The test fails consistently because it sets a 120 ms timeout, which is less 
than 15s (the minimum network operation duration) and hence gets reset to 
15000 ms by the enforcement.

 
{code:java}
[ERROR] testRequestTimeout(org.apache.hadoop.fs.s3a.ITestS3AConfiguration)  
Time elapsed: 0.016 s  <<< FAILURE!
java.lang.AssertionError: Configured fs.s3a.connection.request.timeout is 
different than what AWS sdk configuration uses internally expected:<120> but 
was:<15000>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at 
org.apache.hadoop.fs.s3a.ITestS3AConfiguration.testRequestTimeout(ITestS3AConfiguration.java:444)
 {code}






[PR] HADOOP-18980. S3A credential provider remapping: make extensible [hadoop]

2024-01-03 Thread via GitHub


virajjasani opened a new pull request, #6406:
URL: https://github.com/apache/hadoop/pull/6406

   Jira: HADOOP-18980





Re: [PR] HDFS-17320. seekToNewSource uses ignoredNodes to get a new node other than the current node. [hadoop]

2024-01-03 Thread via GitHub


KeeProMise commented on code in PR #6403:
URL: https://github.com/apache/hadoop/pull/6403#discussion_r1441264235


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java:
##
@@ -1647,16 +1661,8 @@ public synchronized boolean seekToNewSource(long 
targetPos)
 if (currentNode == null) {
   return seekToBlockSource(targetPos);
 }
-boolean markedDead = dfsClient.isDeadNode(this, currentNode);
-addToLocalDeadNodes(currentNode);

Review Comment:
   hi @hfutatzhanghb, of all the places where getLocalDeadNodes() is used, 
currently only LocatedBlocksRefresher#refreshBlockLocations() is not 
synchronized. However, that method only checks whether the local dead nodes 
are empty, and if they are not, the dead nodes will eventually be cleared. 
   Therefore, not adding the current node to the local dead nodes will not 
affect existing behavior and is a safe operation. In addition, judging from 
the role of the seekToNewSource method, its purpose is to obtain a new 
datanode to replace the current one. The original implementation relies on the 
local dead nodes and is therefore tightly coupled with the dead node detector, 
which is also the reason I proposed this PR.






Re: [PR] HDFS-17320. seekToNewSource uses ignoredNodes to get a new node other than the current node. [hadoop]

2024-01-03 Thread via GitHub


hfutatzhanghb commented on code in PR #6403:
URL: https://github.com/apache/hadoop/pull/6403#discussion_r1441251384


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java:
##
@@ -1647,16 +1661,8 @@ public synchronized boolean seekToNewSource(long 
targetPos)
 if (currentNode == null) {
   return seekToBlockSource(targetPos);
 }
-boolean markedDead = dfsClient.isDeadNode(this, currentNode);
-addToLocalDeadNodes(currentNode);

Review Comment:
   Hi @KeeProMise, let's think about the following situation in the original 
code: we add currentNode to DFSInputStream#deadNodes, and this field has a 
getter method getLocalDeadNodes() which may be called by DFSClient and by 
non-synchronized methods of DFSInputStream.
   But your modification removes the logic that adds currentNode to 
DFSInputStream#deadNodes, so callers of getLocalDeadNodes() will not be aware 
of the change.
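
   To make the concern concrete, a minimal standalone sketch of the pattern 
(hypothetical class, not the actual DFSInputStream code):

   ```java
   import java.util.Collections;
   import java.util.HashSet;
   import java.util.Set;

   // Hypothetical sketch of the pattern under discussion: a synchronized
   // mutator plus a getter that other code may call without holding the
   // same lock. It mirrors the hazard being debated, not best practice.
   class DeadNodeSketch {
     private final Set<String> deadNodes = new HashSet<>();

     synchronized void addToLocalDeadNodes(String node) {
       deadNodes.add(node);
     }

     // Analogous to getLocalDeadNodes(): callers outside the lock observe
     // whatever the set currently contains, so dropping an add changes what
     // they can see.
     Set<String> getLocalDeadNodes() {
       return Collections.unmodifiableSet(deadNodes);
     }
   }
   ```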






Re: [PR] HDFS-17320. seekToNewSource uses ignoredNodes to get a new node other than the current node. [hadoop]

2024-01-03 Thread via GitHub


KeeProMise commented on code in PR #6403:
URL: https://github.com/apache/hadoop/pull/6403#discussion_r1441243319


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java:
##
@@ -1647,16 +1661,8 @@ public synchronized boolean seekToNewSource(long 
targetPos)
 if (currentNode == null) {
   return seekToBlockSource(targetPos);
 }
-boolean markedDead = dfsClient.isDeadNode(this, currentNode);
-addToLocalDeadNodes(currentNode);

Review Comment:
   @hfutatzhanghb Thanks for your review. seekToNewSource is a synchronized 
method, so it is atomic; and the removed logic turns out to just add the 
current node to the local dead nodes, so I think this will not affect the dead 
node detector.









[PR] HDFS-17322. RetryCache#MAX_CAPACITY seems to be MIN_CAPACITY. [hadoop]

2024-01-03 Thread via GitHub


hfutatzhanghb opened a new pull request, #6405:
URL: https://github.com/apache/hadoop/pull/6405

   ### Description of PR
   From the code logic, we can infer that RetryCache#MAX_CAPACITY would be 
better named MIN_CAPACITY.
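
   A hedged sketch of the pattern at issue (the value and surrounding code are 
illustrative, not copied from RetryCache):

   ```java
   // A constant used as a lower bound acts as a floor, so calling it
   // MAX_CAPACITY is misleading; MIN_CAPACITY describes what it enforces.
   class CapacitySketch {
     private static final int MAX_CAPACITY = 16; // current, misleading name

     static int chooseCapacity(int requested) {
       // requested capacities below 16 are raised to 16:
       // a minimum, not a maximum
       return Math.max(requested, MAX_CAPACITY);
     }
   }
   ```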





Re: [PR] HDFS-17321. RBF: Add RouterAutoMsyncService for auto msync in Router [hadoop]

2024-01-03 Thread via GitHub


LiuGuH commented on PR #6404:
URL: https://github.com/apache/hadoop/pull/6404#issuecomment-1876223751

   > I'm wondering if we can instead fix the existing mechanism such that only 
a single read is sent to the active, vs. adding a new mechanism.
   Yes, it can. But to ensure only a single read is sent to the active, we 
would need to add synchronization, and that may have a performance impact. 
Adding a separate RouterAutoMsyncService may be a way to solve it.
   
   > Additionally, the periodic redirection of calls to the active only happens 
in the case when there are no calls going to the active already so having some 
reads be sent to the active should not overload it.
   In most cases, that's true. But I think adding a RouterAutoMsyncService 
will be more robust. Thanks
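
   As a rough illustration of what such a service could look like (all names 
and the msync call are placeholders, not this PR's code):

   ```java
   import java.util.concurrent.Executors;
   import java.util.concurrent.ScheduledExecutorService;
   import java.util.concurrent.TimeUnit;

   // Hypothetical sketch: a single background task periodically msyncs
   // against the active NameNode so client reads rarely need redirection.
   class RouterAutoMsyncSketch {
     private final ScheduledExecutorService scheduler =
         Executors.newSingleThreadScheduledExecutor();

     void start(Runnable msyncCall, long intervalMs) {
       scheduler.scheduleWithFixedDelay(msyncCall, intervalMs, intervalMs,
           TimeUnit.MILLISECONDS);
     }

     void stop() {
       scheduler.shutdownNow();
     }
   }
   ```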
   





Re: [PR] HDFS-17306. RBF: Router should not return nameservices that does not enable observer nodes in RpcResponseHeaderProto [hadoop]

2024-01-03 Thread via GitHub


LiuGuH commented on PR #6385:
URL: https://github.com/apache/hadoop/pull/6385#issuecomment-1876216428

   > I'm okay with saving this optimization for a separate pull request though.
   
   Thanks for the review. I will add a separate pull request for the 
DFS_ROUTER_OBSERVER_READ_DEFAULT_KEY and DFS_ROUTER_OBSERVER_READ_DEFAULT_KEY 
check. 





Re: [PR] HDFS-17310. DiskBalancer: Enhance the log message for submitPlan [hadoop]

2024-01-03 Thread via GitHub


haiyang1987 commented on PR #6391:
URL: https://github.com/apache/hadoop/pull/6391#issuecomment-1876205914

   Thanks @slfan1989 @tasanuma @ashutoshcipher for reviewing and merging it!





Re: [PR] HDFS-17304. Update fsck -blockId to display slownode status of blocks. [hadoop]

2024-01-03 Thread via GitHub


huangzhaobo99 commented on PR #6384:
URL: https://github.com/apache/hadoop/pull/6384#issuecomment-1876204046

   @slfan1989 Can this change be pushed forward when you have free time? Thank 
you very much.





Re: [PR] HDFS-17317. DebugAdmin metaOut not need multiple close [hadoop]

2024-01-03 Thread via GitHub


xuzifu666 commented on PR #6402:
URL: https://github.com/apache/hadoop/pull/6402#issuecomment-1876194790

   @ayushtkn https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/7/ 
   the UTs all seem to succeed, but the overall result is still not passing





[jira] [Commented] (HADOOP-19019) Parallel Maven Build Support for Apache Hadoop

2024-01-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802353#comment-17802353
 ] 

ASF GitHub Bot commented on HADOOP-19019:
-----------------------------------------

JiaLiangC commented on PR #6373:
URL: https://github.com/apache/hadoop/pull/6373#issuecomment-1876166196

   @Hexiaoqiao 
   Test environment: CentOS 8 x86_64, 16GB RAM, SSD.
   Tested on Hadoop 3.3.6.
   The initial serial compilation took almost 3 hours due to slow dependency 
downloads. With parallel compilation (-T 2C), the initial compilation took 
about 
1 hour, approximately 2 times faster.
   For subsequent compilations, with dependencies already downloaded locally, 
the overall parallel compilation time for Hadoop was 13 minutes, while serial 
compilation took 37 minutes.




> Parallel Maven Build Support for Apache Hadoop
> ----------------------------------------------
>
> Key: HADOOP-19019
> URL: https://issues.apache.org/jira/browse/HADOOP-19019
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: caijialiang
>Priority: Major
>  Labels: pull-request-available
> Attachments: patch11-HDFS-17287.diff
>
>
> The reason for the slow compilation: The Hadoop project has many modules, and 
> the inability to compile them in parallel results in a slow process. For 
> instance, the first compilation of Hadoop might take several hours, and even 
> with local Maven dependencies, a subsequent compilation can still take close 
> to 40 minutes, which is very slow.
> How to solve it: Use {{mvn dependency:tree}} and {{maven-to-plantuml}} to 
> investigate the dependency issues that prevent parallel compilation.
>  * Investigate the dependencies between project modules.
>  * Analyze the dependencies in multi-module Maven projects.
>  * Download {{{}maven-to-plantuml{}}}:
>  
> {{wget 
> [https://github.com/phxql/maven-to-plantuml/releases/download/v1.0/maven-to-plantuml-1.0.jar]}}
>  * Generate a dependency tree:
>  
> {{mvn dependency:tree > dep.txt}}
>  * Generate a UML diagram from the dependency tree:
>  
> {{java -jar maven-to-plantuml.jar --input dep.txt --output dep.puml}}
> For more information, visit: [maven-to-plantuml GitHub 
> repository|https://github.com/phxql/maven-to-plantuml/tree/master].
>  
> *Hadoop Parallel Compilation Submission Logic*
>  # Reasons for Parallel Compilation Failure
>  * 
>  ** In sequential compilation, as modules are compiled one by one in order, 
> there are no errors because the compilation follows the module sequence.
>  ** However, in parallel compilation, all modules are compiled 
> simultaneously. The compilation order during multi-module concurrent 
> compilation depends on the inter-module dependencies. If Module A depends on 
> Module B, then Module B will be compiled before Module A. This ensures that 
> the compilation order follows the dependencies between modules.
> But when Hadoop compiles in parallel, for example, compiling 
> {{{}hadoop-yarn-project{}}}, the dependencies between modules are correct. 
> The issue arises during the dist package stage. {{dist}} packages all other 
> compiled modules.
> *Behavior of {{hadoop-yarn-project}} in Serial Compilation:*
>  * 
>  ** In serial compilation, it compiles modules in the pom one by one in 
> sequence. After all modules are compiled, it compiles 
> {{{}hadoop-yarn-project{}}}. During the {{prepare-package}} stage, the 
> {{maven-assembly-plugin}} plugin is executed for packaging. All packages are 
> repackaged according to the description in 
> {{{}hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml{}}}.
> *Behavior of {{hadoop-yarn-project}} in Parallel Compilation:*
>  * 
>  ** Parallel compilation compiles modules according to the dependency order 
> among them. If modules do not declare dependencies on each other through 
> {{{}dependency{}}}, they are compiled in parallel. According to the 
> dependency definition in the pom of {{{}hadoop-yarn-project{}}}, the 
> dependencies are compiled first, followed by {{{}hadoop-yarn-project{}}}, 
> executing its {{{}maven-assembly-plugin{}}}.
>  ** However, the files needed for packaging in 
> {{hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml}} are 
> not all included in the {{dependency}} of {{{}hadoop-yarn-project{}}}. 
> Therefore, when compiling {{hadoop-yarn-project}} and executing 
> {{{}maven-assembly-plugin{}}}, not all required modules are built yet, 
> leading to errors in parallel compilation.
> *Solution:*
>  * 
>  ** The solution is relatively straightforward: organize all modules from 
> {{{}hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml{}}}, 
> and then declare them as dependencies in the pom of 
> {{{}hadoop-yarn-project{}}}.





Re: [PR] HADOOP-19019: Parallel Maven Build Support for Apache Hadoop [hadoop]

2024-01-03 Thread via GitHub


JiaLiangC commented on PR #6373:
URL: https://github.com/apache/hadoop/pull/6373#issuecomment-1876166196

   @Hexiaoqiao 
   Test environment: CentOS 8 x86_64, 16GB RAM, SSD.
   Tested on Hadoop 3.3.6.
   The initial serial compilation took almost 3 hours due to slow dependency 
downloads. With parallel compilation (-T 2C), the initial compilation took 
about 
1 hour, approximately 2 times faster.
   For subsequent compilations, with dependencies already downloaded locally, 
the overall parallel compilation time for Hadoop was 13 minutes, while serial 
compilation took 37 minutes.





[jira] [Commented] (HADOOP-19015) Increase fs.s3a.connection.maximum to 500 to minimize risk of Timeout waiting for connection from pool

2024-01-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802350#comment-17802350
 ] 

ASF GitHub Bot commented on HADOOP-19015:
-----------------------------------------

mukund-thakur commented on PR #6372:
URL: https://github.com/apache/hadoop/pull/6372#issuecomment-1876126039

   Yetus failure because of no tests added. 
   Re-ran all the tests using us-west-1 and all is good.




> Increase fs.s3a.connection.maximum to 500 to minimize risk of Timeout waiting 
> for connection from pool
> --------------------------------------------------------------------------
>
> Key: HADOOP-19015
> URL: https://issues.apache.org/jira/browse/HADOOP-19015
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>
> Getting errors in jobs which can be fixed by increasing this 
> 2023-12-14 17:35:56,602 [ERROR] [TezChild] |tez.TezProcessor|: 
> java.lang.RuntimeException: java.io.IOException: 
> org.apache.hadoop.net.ConnectTimeoutException: getFileStatus on 
> s3a://aaa/cc-hive-jzv5y6/warehouse/tablespace/managed/hive/student/delete_delta_012_012_0001/bucket_1_0:
>  software.amazon.awssdk.core.exception.SdkClientException: Unable to execute 
> HTTP request: Timeout waiting for connection from pool   at 
> org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
>   at 
> org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:152)
>   at 
> org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:437)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:297)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:280)
>   at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:84)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:70)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:70)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:40)
>   at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptible
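
A hedged sketch of applying the proposed value in client configuration (the 
property name and the value 500 come from this issue; everything else is 
illustrative):

{code:java}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class S3AConnectionPoolSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // raise the S3A connection pool ceiling to the value this issue proposes
    conf.setInt("fs.s3a.connection.maximum", 500);
    // the bucket name is a placeholder
    FileSystem fs = FileSystem.newInstance(URI.create("s3a://bucket/"), conf);
    System.out.println(fs.getUri());
  }
}
{code}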






Re: [PR] HADOOP-19015. Increase fs.s3a.connection.maximum to 500 to minimize risk of Timeout waiting for connection from pool. [hadoop]

2024-01-03 Thread via GitHub


mukund-thakur commented on PR #6372:
URL: https://github.com/apache/hadoop/pull/6372#issuecomment-1876126039

   Yetus failure because of no tests added. 
   Re-ran all the tests using us-west-1 and all is good.





Re: [PR] YARN-11115. Add configuration to globally disable AM preemption for capacity scheduler [hadoop]

2024-01-03 Thread via GitHub


ashutoshcipher closed pull request #4377: YARN-11115. Add configuration to 
globally disable AM preemption for capacity scheduler
URL: https://github.com/apache/hadoop/pull/4377





Re: [PR] HDFS-17321. RBF: Add RouterAutoMsyncService for auto msync in Router [hadoop]

2024-01-03 Thread via GitHub


simbadzina commented on PR #6404:
URL: https://github.com/apache/hadoop/pull/6404#issuecomment-1876005260

   @LiuGuH can you expand more on the following
   > And [HDFS-16890](https://issues.apache.org/jira/browse/HDFS-16890) maybe 
lead to many read requests into active NN at the same time.
   
   I see how multiple concurrent calls can read the same `false` value from 
`isNamespaceStateIdFresh(nsId)` before the accumulator is updated, but the 
windows for these reads should be very small. I'm wondering if we can instead 
fix the existing mechanism such that only a single read is sent to the active, 
vs. adding a new mechanism.
   
   Additionally, the periodic redirection of calls to the active only happens 
in the case when there are no calls going to the active already so having some 
reads be sent to the active should not overload it.





Re: [PR] HDFS-17317. DebugAdmin metaOut not need multiple close [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6402:
URL: https://github.com/apache/hadoop/pull/6402#issuecomment-1875821498

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   3m 27s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/7/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  40m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 46s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 255m 27s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 427m 22s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6402 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux a3ea2ef419b8 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / eba716aec26d9623dbaea7f4f07b50830bf03a79 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/7/testReport/ |
   | Max. process+thread count | 2959 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/7/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

Re: [PR] HDFS-17314. Add a metrics to record congestion backoff counts. [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6398:
URL: https://github.com/apache/hadoop/pull/6398#issuecomment-1875821500

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   3m 16s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6398/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  34m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 213m 44s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 351m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6398/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6398 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 634fd6cc9d8c 5.15.0-86-generic #96-Ubuntu SMP Wed Sep 20 
08:23:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 95958c2e178949d6a5f06aecbf402ca3a402bdcf |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6398/4/testReport/ |
   | Max. process+thread count | 3741 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6398/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

Re: [PR] YARN-7953. [BackPort] [GQ] Data structures for federation global queues calculations. [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6361:
URL: https://github.com/apache/hadoop/pull/6361#issuecomment-1875785768

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  2s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  jsonlint  |   0m  1s |  |  jsonlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 52s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  42m 16s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  0s |  |  
hadoop-yarn-server-globalpolicygenerator in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 146m 49s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6361/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6361 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle 
jsonlint |
   | uname | Linux 3b6ff6a4f952 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3f07e571c1cfb821723128935b33e29ba7234abd |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6361/5/testReport/ |
   | Max. process+thread count | 541 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6361/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.

Re: [PR] YARN-11631. [GPG] Add GPGWebServices. [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6354:
URL: https://github.com/apache/hadoop/pull/6354#issuecomment-1875771769

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 13s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-globalpolicygenerator.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6354/7/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-globalpolicygenerator.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator:
 The patch generated 7 new + 0 unchanged - 0 fixed = 7 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 12s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 56s |  |  
hadoop-yarn-server-globalpolicygenerator in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 134m 25s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6354/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6354 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 9ca3f1dcb3c4 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8abadf2a2fa670e519a8827e8b1c75aad1d90211 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6354/7/testReport/ |
   | Max. process+thread count | 590 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator
 U: 

Re: [PR] HDFS-17317. DebugAdmin metaOut not need multiple close [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6402:
URL: https://github.com/apache/hadoop/pull/6402#issuecomment-1875756070

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 22s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   1m 51s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  24m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 54s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 281m 49s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 379m 13s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithDecoding |
   |   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
   |   | hadoop.hdfs.TestDistributedFileSystem |
   |   | hadoop.hdfs.TestDatanodeDeath |
   |   | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData |
   |   | hadoop.hdfs.TestRenameWhileOpen |
   |   | hadoop.hdfs.TestDistributedFileSystemWithECFile |
   |   | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.server.namenode.TestHostsFiles |
   |   | hadoop.hdfs.TestDFSStripedInputStream |
   |   | hadoop.hdfs.TestEncryptionZonesWithKMS |
   |   | hadoop.hdfs.TestDFSInputStreamBlockLocations |
   |   | hadoop.hdfs.TestDFSStripedOutputStream |
   |   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
   
   
   | Subsystem | Report/Notes |
   

Re: [PR] HDFS-17317. DebugAdmin metaOut not need multiple close [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6402:
URL: https://github.com/apache/hadoop/pull/6402#issuecomment-1875697807

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 22s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  32m 50s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/5/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   1m 52s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/5/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  23m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 54s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 228m 34s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 24s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/5/artifact/out/results-asflicense.txt)
 |  The patch generated 39 ASF License warnings.  |
   |  |   | 324m 43s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
   |   | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |
   |   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
   |   | hadoop.hdfs.TestPread |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.server.datanode.TestDiskError |
   |   | hadoop.hdfs.server.datanode.TestBlockScanner |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
   |   | hadoop.hdfs.server.datanode.TestDataNodeReconfiguration |
   |   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
   |   | hadoop.hdfs.TestSafeModeWithStripedFile |
   |   | 
hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks |
   |   | hadoop.hdfs.TestDFSStripedOutputStream |
   |   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
   |   | hadoop.hdfs.TestDFSStorageStateRecovery |
   |   | 

Re: [PR] HDFS-17317. DebugAdmin metaOut not need multiple close [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6402:
URL: https://github.com/apache/hadoop/pull/6402#issuecomment-1875697478

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 22s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   1m 51s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/3/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  23m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 253m 28s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 26s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 346m 55s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |
   |   | hadoop.hdfs.TestEncryptedTransfer |
   |   | hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots |
   |   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | 
hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithInProgressTailing |
   |   | hadoop.hdfs.TestDFSStripedInputStream |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
   |   | hadoop.hdfs.TestDFSStripedOutputStream |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
   |   | hadoop.hdfs.server.namenode.ha.TestHAMetrics |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
   |   | hadoop.hdfs.TestMaintenanceState |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
   |   | hadoop.hdfs.TestReadStripedFileWithDNFailure |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
   
   
   | Subsystem | Report/Notes |

Re: [PR] YARN-11638. [GPG] GPG Support CLI. [hadoop]

2024-01-03 Thread via GitHub


slfan1989 commented on PR #6396:
URL: https://github.com/apache/hadoop/pull/6396#issuecomment-1875628978

   @goiri Can you help review this PR? Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17182. DataSetLockManager.lockLeakCheck() is not thread-safe. [hadoop]

2024-01-03 Thread via GitHub


slfan1989 commented on PR #6029:
URL: https://github.com/apache/hadoop/pull/6029#issuecomment-1875626968

   @LiuGuH Thank you for your contribution! When we submit a PR, we need to 
include the JIRA number in the commit message, as in the example below.
   
   
![image](https://github.com/apache/hadoop/assets/55643692/b8f4a159-c04c-4a2f-9a21-8c89d4ce746c)
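
    For context, a hedged illustration of that layout, built from this thread's 
own JIRA key and PR title (the exact wording of the final commit message is up 
to the committer):

    ```
    HDFS-17182. DataSetLockManager.lockLeakCheck() is not thread-safe. (#6029)
    ```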
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17317. DebugAdmin metaOut not need multiple close [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6402:
URL: https://github.com/apache/hadoop/pull/6402#issuecomment-1875625521

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  32m 45s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/6/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   1m 41s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/6/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  20m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 183m  9s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 26s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 270m 31s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6402 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 47736f184c4c 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / eba716aec26d9623dbaea7f4f07b50830bf03a79 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/6/testReport/ |
   | Max. process+thread count | 4397 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/6/console |

Re: [PR] HDFS-17310. DiskBalancer: Enhance the log message for submitPlan [hadoop]

2024-01-03 Thread via GitHub


slfan1989 commented on PR #6391:
URL: https://github.com/apache/hadoop/pull/6391#issuecomment-1875613663

   @haiyang1987 Thank you for your contribution! Merged into trunk. 
@ashutoshcipher @tasanuma Thanks for reviewing the code!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17310. DiskBalancer: Enhance the log message for submitPlan [hadoop]

2024-01-03 Thread via GitHub


slfan1989 merged PR #6391:
URL: https://github.com/apache/hadoop/pull/6391


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-7953. [BackPort] [GQ] Data structures for federation global queues calculations. [hadoop]

2024-01-03 Thread via GitHub


slfan1989 commented on PR #6361:
URL: https://github.com/apache/hadoop/pull/6361#issuecomment-1875590456

   @goiri Can you help review this PR? Thank you very much! I will continue to 
follow up on [YARN-7402](https://issues.apache.org/jira/browse/YARN-7402).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17320. seekToNewSource uses ignoredNodes to get a new node other than the current node. [hadoop]

2024-01-03 Thread via GitHub


hfutatzhanghb commented on code in PR #6403:
URL: https://github.com/apache/hadoop/pull/6403#discussion_r1440611536


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java:
##
@@ -1647,16 +1661,8 @@ public synchronized boolean seekToNewSource(long targetPos)
     if (currentNode == null) {
       return seekToBlockSource(targetPos);
     }
-    boolean markedDead = dfsClient.isDeadNode(this, currentNode);
-    addToLocalDeadNodes(currentNode);

Review Comment:
   Hi, @KeeProMise. The idea of this PR makes sense to me; leaving some comments. 
Please double-check whether we should delete that code. I think we cannot delete 
it, because doing so would break the dead node detector model. It should look 
something like:
   
   ```java
   boolean markedDead = dfsClient.isDeadNode(this, currentNode);
   addToLocalDeadNodes(currentNode);
   if (!markedDead) {
     /* Remove it from deadNodes. blockSeekTo could have cleared
      * deadNodes and added currentNode again. That's ok. */
     removeFromLocalDeadNodes(oldNode);
   }
   DatanodeInfo newNode =
       blockSeekTo(targetPos, Collections.singletonList(currentNode));
   if (!currentNode.getDatanodeUuid().equals(newNode.getDatanodeUuid())) {
     xxx
   }
   ```
   What do you think?
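
   For readers skimming the archive, a minimal self-contained sketch of the 
ignore-list idea under discussion. The class and method names (IgnoredNodeSketch, 
pickOtherReplica) are purely illustrative and are not part of DFSInputStream:

   ```java
   import java.util.List;

   import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

   // Illustrative only: choose a replica other than the node currently being
   // read from, mirroring the ignoredNodes argument passed to blockSeekTo in
   // the suggestion above.
   final class IgnoredNodeSketch {
     static DatanodeInfo pickOtherReplica(List<DatanodeInfo> replicas,
         DatanodeInfo currentNode) {
       for (DatanodeInfo dn : replicas) {
         if (!dn.equals(currentNode)) {
           return dn; // first replica that is not the ignored node
         }
       }
       return null; // no alternative replica available
     }
   }
   ```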



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11631. [GPG] Add GPGWebServices. [hadoop]

2024-01-03 Thread via GitHub


slfan1989 commented on PR #6354:
URL: https://github.com/apache/hadoop/pull/6354#issuecomment-1875589582

   @goiri Can you help review this PR? Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17321. RBF: Add RouterAutoMsyncService for auto msync in Router [hadoop]

2024-01-03 Thread via GitHub


slfan1989 commented on code in PR #6404:
URL: https://github.com/apache/hadoop/pull/6404#discussion_r1440611314


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAutoMsyncService.java:
##
@@ -0,0 +1,86 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.NAMENODES;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TestName;
+
+import java.io.IOException;
+
+/**
+ * Test the service that msync to all nameservices.
+ */
+public class TestRouterAutoMsyncService {
+
+  private static MiniRouterDFSCluster cluster;
+  private static Router router;
+  private static RouterAutoMsyncService service;
+  private static long msyncInterval = 1000;
+
+  @Rule
+  public TestName name = new TestName();
+
+  @BeforeClass
+  public static void globalSetUp() throws Exception {
+    Configuration conf = new Configuration();
+    conf.setBoolean(RBFConfigKeys.DFS_ROUTER_AUTO_MSYNC_ENABLE, true);
+    conf.setLong(RBFConfigKeys.DFS_ROUTER_AUTO_MSYNC_INTERVAL_MS, msyncInterval);
+    conf.setBoolean(RBFConfigKeys.DFS_ROUTER_OBSERVER_READ_DEFAULT_KEY, true);
+
+    cluster = new MiniRouterDFSCluster(true, 1, conf);
+
+    // Start NNs and DNs and wait until ready
+    cluster.startCluster(conf);
+    cluster.startRouters();
+    cluster.waitClusterUp();
+
+    // Make one Namenode active per nameservice
+    if (cluster.isHighAvailability()) {
+      for (String ns : cluster.getNameservices()) {
+        cluster.switchToActive(ns, NAMENODES[0]);
+        cluster.switchToStandby(ns, NAMENODES[1]);
+      }
+    }
+    cluster.waitActiveNamespaces();
+
+    router = cluster.getRandomRouter().getRouter();
+    service = router.getRouterAutoMsyncService();
+  }
+
+  @AfterClass
+  public static void tearDown() throws IOException {
+    cluster.shutdown();
+    service.stop();
+    service.close();
+  }
+
+  @Test
+  public void testMsync() throws InterruptedException, IOException {
+    Thread.sleep(msyncInterval);

Review Comment:
   In unit tests, we'd better not use sleep.
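
    A minimal sketch of the polling alternative, reusing the test's `service` 
and `msyncInterval` fields; note that `service.getMsyncCount()` is a 
hypothetical accessor for illustration, not part of the patch as posted 
(waitFor throws TimeoutException if the condition never becomes true):

    ```java
    import org.apache.hadoop.test.GenericTestUtils;

    // Poll for the condition instead of sleeping for a fixed interval:
    // check every 100 ms, give up (and fail) after 10 * msyncInterval ms.
    GenericTestUtils.waitFor(() -> service.getMsyncCount() > 0,
        100, 10 * (int) msyncInterval);
    ```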



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17321. RBF: Add RouterAutoMsyncService for auto msync in Router [hadoop]

2024-01-03 Thread via GitHub


slfan1989 commented on PR #6404:
URL: https://github.com/apache/hadoop/pull/6404#issuecomment-1875585566

   @simbadzina Can you help review this PR? Thank you very much! I see that 
`HDFS-16890` and `HDFS-17027` were both completed by you. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11638. [GPG] GPG Support CLI. [hadoop]

2024-01-03 Thread via GitHub


slfan1989 commented on PR #6396:
URL: https://github.com/apache/hadoop/pull/6396#issuecomment-1875561860

   @goiri Can you help review this PR? Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17321. RBF: Add RouterAutoMsyncService for auto msync in Router [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6404:
URL: https://github.com/apache/hadoop/pull/6404#issuecomment-1875515543

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 11s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 20s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  99m 48s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6404/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6404 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 715c46e9e3f8 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f8183f83bbf556ca248f755e7c364881b4f5275d |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6404/1/testReport/ |
   | Max. process+thread count | 2620 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6404/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Re: [PR] HADOOP-18184. S3A Prefetching unbuffer. [hadoop]

2024-01-03 Thread via GitHub


steveloughran commented on PR #5832:
URL: https://github.com/apache/hadoop/pull/5832#issuecomment-1875498186

   ...just catching up on this; not ready for merge


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18184) s3a prefetching stream to support unbuffer()

2024-01-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802207#comment-17802207
 ] 

ASF GitHub Bot commented on HADOOP-18184:
-

steveloughran commented on PR #5832:
URL: https://github.com/apache/hadoop/pull/5832#issuecomment-1875498186

   ...just catching up on this; not ready for merge




> s3a prefetching stream to support unbuffer()
> 
>
> Key: HADOOP-18184
> URL: https://issues.apache.org/jira/browse/HADOOP-18184
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>
> Apache Impala uses unbuffer() to free up all client side resources held by a 
> stream, so allowing it to have a map of available (path -> stream) objects, 
> retained across queries.
> This saves on having to reopen the files, with the cost of HEAD checks etc. 
> S3AInputStream just closes its http connection. here there is a lot more 
> state to discard, but all memory and file storage must be freed.
> until this done, ITestS3AContractUnbuffer must skip when the prefetch stream 
> is used.
> its notable that the other tests don't fail, even though the stream doesn't 
> implement the interface; the graceful degradation handles that. it should 
> fail if the test xml resource says the stream does it, but that the stream 
> capabilities say it doesn't.
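
As background for the discussion, a minimal caller-side sketch of the pattern 
described above, written against the public FileSystem API only; the s3a bucket 
and file names are illustrative:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StreamCapabilities;

public class UnbufferSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path file = new Path("s3a://bucket/table/part-0"); // illustrative path
    FileSystem fs = FileSystem.get(file.toUri(), conf);
    try (FSDataInputStream in = fs.open(file)) {
      byte[] buf = new byte[4096];
      in.readFully(0, buf); // positioned read; fills buf from offset 0
      // Free client-side buffers/connections between queries while keeping
      // the stream object cached for reuse, as Impala does.
      if (in.hasCapability(StreamCapabilities.UNBUFFER)) {
        in.unbuffer();
      }
    }
  }
}
```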



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17320. seekToNewSource uses ignoredNodes to get a new node other than the current node. [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6403:
URL: https://github.com/apache/hadoop/pull/6403#issuecomment-1875485613

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 30s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 149m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6403/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6403 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 47a14d2e160c 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5f91a53b2511a836685f2e0f4c44d9554d8e09f4 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6403/1/testReport/ |
   | Max. process+thread count | 568 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6403/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



Re: [PR] YARN-11638. [GPG] GPG Support CLI. [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6396:
URL: https://github.com/apache/hadoop/pull/6396#issuecomment-1875410714

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 23s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   3m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  3s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   6m 24s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6396/2/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html)
 |  hadoop-yarn-project/hadoop-yarn in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  19m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   3m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   3m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m 10s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   2m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 208m  8s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6396/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn in the patch passed.  |
   | +1 :green_heart: |  unit  |  25m 26s |  |  hadoop-yarn-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 353m 12s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6396/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6396 |
   | Optional Tests | dupname asflicense codespell detsecrets shellcheck 
shelldocs compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs 
checkstyle |
   | uname | Linux 8123c05bd4d3 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5a296c799c42856b1fe69fba8350e1ef70e19f85 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 

Re: [PR] HADOOP-19019: Parallel Maven Build Support for Apache Hadoop [hadoop]

2024-01-03 Thread via GitHub


Hexiaoqiao commented on PR #6373:
URL: https://github.com/apache/hadoop/pull/6373#issuecomment-1875368262

   @JiaLiangC Thanks for your work and for involving me here. This is a very 
interesting improvement. I would like to know how much build time is saved by 
switching to a parallel build. On another note, besides the hadoop-yarn 
module, do any other modules need to declare their dependencies explicitly? 
Thanks again.





[jira] [Commented] (HADOOP-19019) Parallel Maven Build Support for Apache Hadoop

2024-01-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802173#comment-17802173
 ] 

ASF GitHub Bot commented on HADOOP-19019:
-

Hexiaoqiao commented on PR #6373:
URL: https://github.com/apache/hadoop/pull/6373#issuecomment-1875368262

   @JiaLiangC Thanks for your work and for involving me here. This is a very 
interesting improvement. I would like to know how much build time is saved by 
switching to a parallel build. On another note, besides the hadoop-yarn 
module, do any other modules need to declare their dependencies explicitly? 
Thanks again.




> Parallel Maven Build Support for Apache Hadoop
> --
>
> Key: HADOOP-19019
> URL: https://issues.apache.org/jira/browse/HADOOP-19019
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: caijialiang
>Priority: Major
>  Labels: pull-request-available
> Attachments: patch11-HDFS-17287.diff
>
>
> The reason for the slow compilation: the Hadoop project has many modules, and 
> the inability to compile them in parallel results in a slow process. For 
> instance, the first compilation of Hadoop might take several hours, and even 
> with local Maven dependencies, a subsequent compilation can still take close 
> to 40 minutes, which is very slow.
> How to solve it: use mvn dependency:tree and maven-to-plantuml to 
> investigate the dependency issues that prevent parallel compilation.
>  * Investigate the dependencies between project modules.
>  * Analyze the dependencies in multi-module Maven projects.
>  * Download maven-to-plantuml:
> wget https://github.com/phxql/maven-to-plantuml/releases/download/v1.0/maven-to-plantuml-1.0.jar
>  * Generate a dependency tree:
> mvn dependency:tree > dep.txt
>  * Generate a UML diagram from the dependency tree:
> java -jar maven-to-plantuml.jar --input dep.txt --output dep.puml
> For more information, visit the maven-to-plantuml GitHub repository: 
> https://github.com/phxql/maven-to-plantuml/tree/master
>  
> *Hadoop Parallel Compilation Submission Logic*
> 1. Reasons for parallel compilation failure:
>  * In sequential compilation, as modules are compiled one by one in order, 
> there are no errors because the compilation follows the module sequence.
>  * In parallel compilation, however, all modules are compiled 
> simultaneously. The compilation order during multi-module concurrent 
> compilation depends on the inter-module dependencies: if module A depends on 
> module B, then module B will be compiled before module A. This ensures that 
> the compilation order follows the dependencies between modules.
> When Hadoop compiles in parallel, for example when compiling 
> hadoop-yarn-project, the dependencies between modules are correct; the issue 
> arises during the dist package stage, since dist packages all the other 
> compiled modules.
> *Behavior of hadoop-yarn-project in serial compilation:*
>  * In serial compilation, the modules in the pom are compiled one by one in 
> sequence. After all modules are compiled, hadoop-yarn-project is compiled. 
> During the prepare-package stage, the maven-assembly-plugin is executed for 
> packaging, and all packages are repackaged according to the description in 
> hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml.
> *Behavior of hadoop-yarn-project in parallel compilation:*
>  * Parallel compilation compiles modules according to the dependency order 
> among them. If modules do not declare dependencies on each other, they are 
> compiled in parallel. According to the dependency definitions in the pom of 
> hadoop-yarn-project, the dependencies are compiled first, followed by 
> hadoop-yarn-project, which executes its maven-assembly-plugin.
>  * However, the files needed for packaging in 
> hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml are not 
> all declared as dependencies of hadoop-yarn-project. Therefore, when 
> hadoop-yarn-project is compiled and its maven-assembly-plugin executes, not 
> all required modules have been built yet, leading to errors in parallel 
> compilation.
> *Solution:*
>  * The solution is relatively straightforward: collect all modules from 
> hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml, and 
> then declare them as dependencies in the pom of hadoop-yarn-project 
> (sketched below).
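
A hedged sketch of the kind of declaration the solution describes; the 
artifactId below is a representative module from hadoop-yarn-dist.xml, chosen 
for illustration rather than taken from the PR:

```xml
<!-- Illustrative only: declaring an assembled module as an explicit
     dependency of hadoop-yarn-project so a parallel build compiles it
     before the maven-assembly-plugin runs. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-server-nodemanager</artifactId>
  <version>${project.version}</version>
</dependency>
```

With the dependencies declared, the build can use Maven's standard threading 
option, for example `mvn clean install -DskipTests -T 1C` (one thread per CPU 
core).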






[PR] HDFS-17321. RBF: Add RouterAutoMsyncService for auto msync in Router [hadoop]

2024-01-03 Thread via GitHub


LiuGuH opened a new pull request, #6404:
URL: https://github.com/apache/hadoop/pull/6404

   
   
   ### Description of PR
   Router should have the ability to auto-msync to a nameservice, ensuring 
that the Router periodically refreshes its record of a namespace's state.
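
   A rough sketch of one possible shape for such a service; all names here 
are hypothetical, and the real PR would likely build on RBF's existing 
periodic-service pattern rather than a raw scheduler:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: periodically refresh the Router's view of each
// namespace's state by issuing an msync-style call.
public class RouterAutoMsyncService {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  /** msyncAll stands in for "issue msync to every nameservice". */
  public void start(Runnable msyncAll, long intervalMs) {
    scheduler.scheduleWithFixedDelay(msyncAll, intervalMs, intervalMs,
        TimeUnit.MILLISECONDS);
  }

  public void stop() {
    scheduler.shutdownNow();
  }
}
```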
   
   
   
   





Re: [PR] HDFS-17182. DataSetLockManager.lockLeakCheck() is not thread-safe. [hadoop]

2024-01-03 Thread via GitHub


Hexiaoqiao commented on PR #6029:
URL: https://github.com/apache/hadoop/pull/6029#issuecomment-1875355078

   Committed to trunk. Thanks @LiuGuH for your contribution.





Re: [PR] HDFS-17182. DataSetLockManager.lockLeakCheck() is not thread-safe. [hadoop]

2024-01-03 Thread via GitHub


Hexiaoqiao merged PR #6029:
URL: https://github.com/apache/hadoop/pull/6029





[jira] [Commented] (HADOOP-18910) ABFS: Adding Support for MD5 Hash based integrity verification of the request content during transport

2024-01-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802168#comment-17802168
 ] 

ASF GitHub Bot commented on HADOOP-18910:
-

anujmodi2021 commented on code in PR #6069:
URL: https://github.com/apache/hadoop/pull/6069#discussion_r1440430936


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -1074,11 +1079,14 @@ public AbfsRestOperation read(final String path,
   ContextEncryptionAdapter contextEncryptionAdapter,
   TracingContext tracingContext) throws AzureBlobFileSystemException {
  final List<AbfsHttpHeader> requestHeaders = createDefaultHeaders();
-addCustomerProvidedKeyHeaders(requestHeaders);
 
 AbfsHttpHeader rangeHeader = new AbfsHttpHeader(RANGE,
 String.format("bytes=%d-%d", position, position + bufferLength - 1));
 requestHeaders.add(rangeHeader);
+addEncryptionKeyRequestHeaders(path, requestHeaders, false,
+contextEncryptionAdapter, tracingContext);
+requestHeaders.add(new AbfsHttpHeader(RANGE,

Review Comment:
   This seems to be outdated. It was caused by merge conflicts but was fixed 
in the latest commit: 590a003048de696ff12490d87a2d6e6c2553b77d
   
   More merge conflicts need to be resolved now; I will take them up.





> ABFS: Adding Support for MD5 Hash based integrity verification of the request 
> content during transport 
> ---
>
> Key: HADOOP-18910
> URL: https://issues.apache.org/jira/browse/HADOOP-18910
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Azure Storage Supports Content-MD5 Request Headers in Both Read and Append 
> APIs.
> Read: [Path - Read - REST API (Azure Storage Services) | Microsoft 
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/read]
> Append: [Path - Update - REST API (Azure Storage Services) | Microsoft 
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/update]
> This change implements the client-side support for them. In a read 
> request, we send the appropriate header, in response to which the server 
> returns the MD5 hash of the data it sends back. On the client we tally this 
> against the MD5 hash computed from the data received.
> In an append request, we compute the MD5 hash of the data we are sending to 
> the server and specify it in the appropriate header. The server, on finding 
> that header, tallies it against the MD5 hash it computes on the data 
> received.
> This whole checksum validation support is guarded behind a config. The 
> config is disabled by default because, with the use of HTTPS, the integrity 
> of the data is preserved anyway; this is an additional data integrity check, 
> and it has a performance impact as well.
> Users can decide whether to enable it by setting the following config to 
> *"true"* or *"false"* respectively. *Config: 
> "fs.azure.enable.checksum.validation"*






Re: [PR] HADOOP-18910: [ABFS] Adding Support for MD5 Hash based integrity verification of the request content during transport [hadoop]

2024-01-03 Thread via GitHub


anujmodi2021 commented on code in PR #6069:
URL: https://github.com/apache/hadoop/pull/6069#discussion_r1440430936


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -1074,11 +1079,14 @@ public AbfsRestOperation read(final String path,
   ContextEncryptionAdapter contextEncryptionAdapter,
   TracingContext tracingContext) throws AzureBlobFileSystemException {
  final List<AbfsHttpHeader> requestHeaders = createDefaultHeaders();
-addCustomerProvidedKeyHeaders(requestHeaders);
 
 AbfsHttpHeader rangeHeader = new AbfsHttpHeader(RANGE,
 String.format("bytes=%d-%d", position, position + bufferLength - 1));
 requestHeaders.add(rangeHeader);
+addEncryptionKeyRequestHeaders(path, requestHeaders, false,
+contextEncryptionAdapter, tracingContext);
+requestHeaders.add(new AbfsHttpHeader(RANGE,

Review Comment:
   This seems to be outdated. It was caused by merge conflicts but was fixed 
in the latest commit: 590a003048de696ff12490d87a2d6e6c2553b77d
   
   More merge conflicts need to be resolved now; I will take them up.






Re: [PR] Hadoop-18759: [ABFS][Backoff-Optimization] Have a Static retry policy for connection timeout. [hadoop]

2024-01-03 Thread via GitHub


anujmodi2021 commented on PR #5881:
URL: https://github.com/apache/hadoop/pull/5881#issuecomment-1875338541

   Resolved merge conflicts and ran the test suite.
   @steveloughran, kindly requesting you to merge this.
   
   Thanks for all the efforts.





[jira] [Updated] (HADOOP-17912) ABFS: Support for Encryption Context

2024-01-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17912:

Fix Version/s: 3.3.9

> ABFS: Support for Encryption Context
> 
>
> Key: HADOOP-17912
> URL: https://issues.apache.org/jira/browse/HADOOP-17912
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Support for customer-provided encryption keys at the file level, superseding 
> the global (account-level) key use in HADOOP-17536.
> The ABFS driver will support an "EncryptionContext" plugin for retrieving 
> encryption information, the implementation of which should be provided by 
> the client. The keys/context retrieved will be sent via request headers to 
> the server, which will store the encryption context. Subsequent REST calls 
> to the server that access data/user metadata of the file will require 
> fetching the encryption context through a GetFileProperties call and 
> retrieving the key from the custom provider before sending the request.
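
One way to picture the plugin contract described above; this is a 
hypothetical shape for illustration only, not the actual interface added by 
the PR:

```java
// Hypothetical sketch of the client-supplied plugin: method names and
// types are illustrative, not the ABFS driver's real API.
public interface EncryptionContextPlugin {
  /** Produce the encryption context the server stores for a new file. */
  byte[] createEncryptionContext(String path);

  /**
   * Resolve the per-file key from the context previously fetched via a
   * GetFileProperties call, before the data request is sent.
   */
  byte[] getEncryptionKey(String path, byte[] encryptionContext);
}
```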






[jira] [Commented] (HADOOP-17912) ABFS: Support for Encryption Context

2024-01-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802156#comment-17802156
 ] 

ASF GitHub Bot commented on HADOOP-17912:
-

steveloughran merged PR #6401:
URL: https://github.com/apache/hadoop/pull/6401




> ABFS: Support for Encryption Context
> 
>
> Key: HADOOP-17912
> URL: https://issues.apache.org/jira/browse/HADOOP-17912
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Support for customer-provided encryption keys at the file level, superseding 
> the global (account-level) key use in HADOOP-17536.
> The ABFS driver will support an "EncryptionContext" plugin for retrieving 
> encryption information, the implementation of which should be provided by 
> the client. The keys/context retrieved will be sent via request headers to 
> the server, which will store the encryption context. Subsequent REST calls 
> to the server that access data/user metadata of the file will require 
> fetching the encryption context through a GetFileProperties call and 
> retrieving the key from the custom provider before sending the request.






Re: [PR] HADOOP-17912. ABFS: Support for Encryption Context (#6221) [hadoop]

2024-01-03 Thread via GitHub


steveloughran merged PR #6401:
URL: https://github.com/apache/hadoop/pull/6401





Re: [PR] HADOOP-18971: [ABFS] Enable Footer Read Optimizations with Appropriate Footer Read Buffer Size [hadoop]

2024-01-03 Thread via GitHub


steveloughran merged PR #6270:
URL: https://github.com/apache/hadoop/pull/6270





[jira] [Commented] (HADOOP-18910) ABFS: Adding Support for MD5 Hash based integrity verification of the request content during transport

2024-01-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802152#comment-17802152
 ] 

ASF GitHub Bot commented on HADOOP-18910:
-

steveloughran commented on code in PR #6069:
URL: https://github.com/apache/hadoop/pull/6069#discussion_r1440414581


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -1074,11 +1079,14 @@ public AbfsRestOperation read(final String path,
   ContextEncryptionAdapter contextEncryptionAdapter,
   TracingContext tracingContext) throws AzureBlobFileSystemException {
  final List<AbfsHttpHeader> requestHeaders = createDefaultHeaders();
-addCustomerProvidedKeyHeaders(requestHeaders);
 
 AbfsHttpHeader rangeHeader = new AbfsHttpHeader(RANGE,
 String.format("bytes=%d-%d", position, position + bufferLength - 1));
 requestHeaders.add(rangeHeader);
+addEncryptionKeyRequestHeaders(path, requestHeaders, false,
+contextEncryptionAdapter, tracingContext);
+requestHeaders.add(new AbfsHttpHeader(RANGE,

Review Comment:
   why is this going in here when line 1085 sets this header too?





> ABFS: Adding Support for MD5 Hash based integrity verification of the request 
> content during transport 
> ---
>
> Key: HADOOP-18910
> URL: https://issues.apache.org/jira/browse/HADOOP-18910
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Azure Storage Supports Content-MD5 Request Headers in Both Read and Append 
> APIs.
> Read: [Path - Read - REST API (Azure Storage Services) | Microsoft 
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/read]
> Append: [Path - Update - REST API (Azure Storage Services) | Microsoft 
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/update]
> This change implements the client-side support for them. In a read 
> request, we send the appropriate header, in response to which the server 
> returns the MD5 hash of the data it sends back. On the client we tally this 
> against the MD5 hash computed from the data received.
> In an append request, we compute the MD5 hash of the data we are sending to 
> the server and specify it in the appropriate header. The server, on finding 
> that header, tallies it against the MD5 hash it computes on the data 
> received.
> This whole checksum validation support is guarded behind a config. The 
> config is disabled by default because, with the use of HTTPS, the integrity 
> of the data is preserved anyway; this is an additional data integrity check, 
> and it has a performance impact as well.
> Users can decide whether to enable it by setting the following config to 
> *"true"* or *"false"* respectively. *Config: 
> "fs.azure.enable.checksum.validation"*






Re: [PR] HADOOP-18910: [ABFS] Adding Support for MD5 Hash based integrity verification of the request content during transport [hadoop]

2024-01-03 Thread via GitHub


steveloughran commented on code in PR #6069:
URL: https://github.com/apache/hadoop/pull/6069#discussion_r1440414581


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -1074,11 +1079,14 @@ public AbfsRestOperation read(final String path,
   ContextEncryptionAdapter contextEncryptionAdapter,
   TracingContext tracingContext) throws AzureBlobFileSystemException {
  final List<AbfsHttpHeader> requestHeaders = createDefaultHeaders();
-addCustomerProvidedKeyHeaders(requestHeaders);
 
 AbfsHttpHeader rangeHeader = new AbfsHttpHeader(RANGE,
 String.format("bytes=%d-%d", position, position + bufferLength - 1));
 requestHeaders.add(rangeHeader);
+addEncryptionKeyRequestHeaders(path, requestHeaders, false,
+contextEncryptionAdapter, tracingContext);
+requestHeaders.add(new AbfsHttpHeader(RANGE,

Review Comment:
   why is this going in here when line 1085 sets this header too?






Re: [PR] HDFS-17314. Add a metrics to record congestion backoff counts. [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6398:
URL: https://github.com/apache/hadoop/pull/6398#issuecomment-1875296933

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   3m 37s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6398/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  41m 21s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  8s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6398/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 150 unchanged 
- 0 fixed = 151 total (was 150)  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 214m  8s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 363m 13s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6398/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6398 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 95611a8ea58e 5.15.0-86-generic #96-Ubuntu SMP Wed Sep 20 
08:23:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 31ec1b7c027ee948c057c85e2ca6adf24b029a69 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6398/2/testReport/ |
   | Max. process+thread count | 4014 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 

[PR] use ignoredNodes [hadoop]

2024-01-03 Thread via GitHub


KeeProMise opened a new pull request, #6403:
URL: https://github.com/apache/hadoop/pull/6403

   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





Re: [PR] HDFS-17317. DebugAdmin metaOut not need multiple close [hadoop]

2024-01-03 Thread via GitHub


xuzifu666 commented on code in PR #6402:
URL: https://github.com/apache/hadoop/pull/6402#discussion_r1440364986


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java:
##
@@ -306,7 +306,7 @@ int run(List<String> args) throws IOException {
 new BufferedOutputStream(Files.newOutputStream(srcMeta.toPath()),
 smallBufferSize));
 BlockMetadataHeader.writeHeader(metaOut, checksum);
-metaOut.close();
+metaOut.flush();

Review Comment:
   Yes, my mistake; I will keep the finally block and add one line. @ayushtkn 






Re: [PR] HDFS-17317. DebugAdmin metaOut not need multiple close [hadoop]

2024-01-03 Thread via GitHub


ayushtkn commented on code in PR #6402:
URL: https://github.com/apache/hadoop/pull/6402#discussion_r1440357368


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java:
##
@@ -306,7 +306,7 @@ int run(List<String> args) throws IOException {
 new BufferedOutputStream(Files.newOutputStream(srcMeta.toPath()),
 smallBufferSize));
 BlockMetadataHeader.writeHeader(metaOut, checksum);
-metaOut.close();
+metaOut.flush();

Review Comment:
   I think you got me wrong: only one line was required, and the finally 
block should have remained. If there is an exception before close, the stream 
would otherwise remain open.






Re: [PR] HDFS-17317. DebugAdmin metaOut not need multiple close [hadoop]

2024-01-03 Thread via GitHub


xuzifu666 commented on code in PR #6402:
URL: https://github.com/apache/hadoop/pull/6402#discussion_r1440354333


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java:
##
@@ -306,7 +306,7 @@ int run(List<String> args) throws IOException {
 new BufferedOutputStream(Files.newOutputStream(srcMeta.toPath()),
 smallBufferSize));
 BlockMetadataHeader.writeHeader(metaOut, checksum);
-metaOut.close();
+metaOut.flush();

Review Comment:
   Good idea, I have changed it. @ayushtkn 






Re: [PR] HDFS-17317. DebugAdmin metaOut not need multiple close [hadoop]

2024-01-03 Thread via GitHub


ayushtkn commented on code in PR #6402:
URL: https://github.com/apache/hadoop/pull/6402#discussion_r1440351242


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java:
##
@@ -306,7 +306,7 @@ int run(List<String> args) throws IOException {
 new BufferedOutputStream(Files.newOutputStream(srcMeta.toPath()),
 smallBufferSize));
 BlockMetadataHeader.writeHeader(metaOut, checksum);
-metaOut.close();
+metaOut.flush();

Review Comment:
   I think close itself is OK. Rather than flushing here and then closing 
below, we can avoid closing in the finally block when the stream is already 
closed, since close calls flush before closing itself.
   
   In general I don't think it is a big problem; maybe just dereference 
metaOut after close:
   ```
   metaOut.close();
   metaOut = null;
   ```
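
   For context, the suggested pattern looks roughly like the sketch below; it 
is a simplified stand-in for the real DebugAdmin method, assuming the usual 
Hadoop IOUtils.cleanupWithLogger idiom in the finally block:

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.DataChecksum;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Simplified stand-in for the method under review, not its actual body.
final class MetaWriteSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(MetaWriteSketch.class);

  static void writeMeta(File srcMeta, DataChecksum checksum)
      throws IOException {
    DataOutputStream metaOut = null;
    try {
      metaOut = new DataOutputStream(new BufferedOutputStream(
          Files.newOutputStream(srcMeta.toPath())));
      BlockMetadataHeader.writeHeader(metaOut, checksum);
      metaOut.close();  // close() flushes the buffer before closing
      metaOut = null;   // dereference so the finally block skips it
    } finally {
      IOUtils.cleanupWithLogger(LOG, metaOut);  // no-op once metaOut is null
    }
  }
}
```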
   






Re: [PR] Hadoop-18759: [ABFS][Backoff-Optimization] Have a Static retry policy for connection timeout. [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #5881:
URL: https://github.com/apache/hadoop/pull/5881#issuecomment-1875228322

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   7m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 15 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  19m 22s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 11s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/21/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 2 new + 8 unchanged - 0 
fixed = 10 total (was 8)  |
   | +1 :green_heart: |  mvnsite  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 51s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  87m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/21/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5881 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 00281c650722 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 36e01ebdf9a01688aa63330c3076ba7428b37933 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/21/testReport/ |
   | Max. process+thread count | 568 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/21/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache 

Re: [PR] HDFS-17317. DebugAdmin metaOut not need multiple close [hadoop]

2024-01-03 Thread via GitHub


xuzifu666 commented on PR #6402:
URL: https://github.com/apache/hadoop/pull/6402#issuecomment-1875222042

   @ayushtkn we need to flush first here; I have changed it.





Re: [PR] HDFS-17300. [SBN READ] Observer should throw ObserverRetryOnActiveException if stateid is always delayed with Active Namenode for a configured time [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6383:
URL: https://github.com/apache/hadoop/pull/6383#issuecomment-1875203426

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 55s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  14m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  8s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   3m 25s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6383/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  35m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  15m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  14m 43s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 19s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6383/4/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 1 new + 304 unchanged - 0 fixed = 305 total (was 
304)  |
   | +1 :green_heart: |  mvnsite  |   3m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 14s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 217m 35s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  6s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 458m 38s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6383/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6383 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 370cbac834a0 5.15.0-86-generic #96-Ubuntu SMP Wed Sep 20 
08:23:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1a5600d2f736468b9a66e71eabf9abe969bad7d1 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 

Re: [PR] HDFS-17317. DebugAdmin metaOut not need multiple close [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6402:
URL: https://github.com/apache/hadoop/pull/6402#issuecomment-1875202329

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   4m 24s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/2/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   2m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   1m 38s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  23m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 45s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 183m 55s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 26s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 247m  5s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestDebugAdmin |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6402/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6402 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 00b23c161a19 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bc46c815f01c7a7177e24e60c4eaf0aecb11c9a4 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 

Re: [PR] HDFS-17314. Add a metrics to record congestion backoff counts. [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6398:
URL: https://github.com/apache/hadoop/pull/6398#issuecomment-1875184321

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   1m 44s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6398/3/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  20m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 32s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6398/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 149 unchanged 
- 0 fixed = 150 total (was 149)  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 45s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 187m 38s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6398/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 273m 43s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6398/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6398 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b7193fb1152d 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 31ec1b7c027ee948c057c85e2ca6adf24b029a69 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 

[jira] [Assigned] (HADOOP-18656) ABFS: Support for Pagination in Recursive Directory Delete

2024-01-03 Thread Anuj Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuj Modi reassigned HADOOP-18656:
--

Assignee: Anuj Modi  (was: Sree Bhattacharyya)

> ABFS: Support for Pagination in Recursive Directory Delete 
> ---
>
> Key: HADOOP-18656
> URL: https://issues.apache.org/jira/browse/HADOOP-18656
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.5
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Hadoop-18759: [ABFS][Backoff-Optimization] Have a Static retry policy for connection timeout. [hadoop]

2024-01-03 Thread via GitHub


anujmodi2021 commented on PR #5881:
URL: https://github.com/apache/hadoop/pull/5881#issuecomment-1875151334

   
    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 9
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 596, Failures: 0, Errors: 0, Skipped: 268
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   Time taken: 25 mins 36 secs.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18971) ABFS: Enable Footer Read Optimizations with Appropriate Footer Read Buffer Size

2024-01-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802097#comment-17802097
 ] 

ASF GitHub Bot commented on HADOOP-18971:
-

anujmodi2021 commented on PR #6270:
URL: https://github.com/apache/hadoop/pull/6270#issuecomment-1875108638

   > > I think not only footer reads but this can be expanded to other 
   > > prefetches as well. Especially small files that are read fully can be 
   > > cached such that multiple streams can be catered to.
   > 
   > I don't know how common that use is... whereas for spark/tez and workers, 
   > reopening the same file is not unusual - they just process different parts.
   > 
   > I think this is why prefetching doesn't do anything for orc/parquet. Note 
   > that Impala does cache the column indexes/page indexes, so it doesn't need 
   > the filesystem to secretly do it for them.
   
   Yes, we have also had similar observations. But I feel this cross-stream 
   caching is a good idea for both footer reads and small file reads.
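
   Purely to illustrate the idea being floated here (nothing like this exists 
   in hadoop-azure; the class and key shape below are invented for this 
   sketch), a cross-stream footer cache could be as small as an LRU map keyed 
   by path plus eTag, so an overwritten file never serves stale footer bytes:

   ```java
   // Illustrative sketch only, not part of hadoop-azure. A real version
   // would additionally need synchronization and memory accounting.
   import java.util.LinkedHashMap;
   import java.util.Map;

   public class FooterCache extends LinkedHashMap<String, byte[]> {
     private final int maxEntries;

     public FooterCache(int maxEntries) {
       super(16, 0.75f, true); // access order, so the eldest entry is the LRU
       this.maxEntries = maxEntries;
     }

     @Override
     protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
       return size() > maxEntries; // evict once the cap is exceeded
     }

     // Key on path + eTag so a file rewritten in place misses the cache.
     public static String key(String path, String eTag) {
       return path + "#" + eTag;
     }
   }
   ```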




> ABFS: Enable Footer Read Optimizations with Appropriate Footer Read Buffer 
> Size
> ---
>
> Key: HADOOP-18971
> URL: https://issues.apache.org/jira/browse/HADOOP-18971
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Footer Read Optimization was introduced to hadoop-azure in 
> https://issues.apache.org/jira/browse/HADOOP-17347
> and was kept disabled by default.
> This PR enables footer reads by default, based on the analysis summarized 
> below.
> In our scale workload analysis, it was found that workloads working with 
> Parquet (or, for that matter, ORC etc.) issue a lot of footer reads. Footer 
> reads here refer to the read operations a workload performs to get the 
> metadata of a Parquet file, which it needs in order to understand where the 
> actual data resides in the file.
> This whole process takes place in 3 steps:
>  # The workload reads the last 8 bytes of the Parquet file to get the offset 
> and size of the metadata, which sits just above these 8 bytes.
>  # Using that offset, the workload reads the metadata to get the exact 
> offset and length of the data it wants to read.
>  # The workload performs the final read operation to get the data it wants 
> to use.
> The first two steps are metadata reads that can be combined into a single 
> footer read. When a workload tries to read the last few bytes of a file 
> (let's call this value the footer size), the driver will intelligently read 
> some extra bytes above the footer size to cater to the next read that is 
> going to come.
> Q. What is the footer size of a file?
> A. 16 KB. Any read request for data within the last 16 KB of the file 
> qualifies for a whole footer read. This value is enough to cater to all file 
> types, including Parquet, ORC, etc.
> Q. What buffer size should be used when reading the footer?
> A. Let's call this the footer read buffer size. Prior to this PR, the footer 
> read buffer size was the same as the read buffer size (default 4 MB). It was 
> found that for most workloads the required footer size was only 256 KB, i.e. 
> for almost all Parquet files the metadata was found to lie within the last 
> 256 KB. With this in mind, it does not make sense to read the whole 4 MB 
> buffer length as part of a footer read. Moreover, reading more data than 
> required incurs additional costs in terms of server and network latencies. 
> Based on this and extensive experimentation, a footer read buffer size of 
> 512 KB was observed to be ideal for almost all workloads running on Parquet, 
> ORC, etc.
> The following configuration was introduced to control the footer read 
> buffer size:
> {*}fs.azure.footer.read.request.size{*}: default 512 KB.
> *Quantitative stats:* For a workload running on Parquet files, the number of 
> read requests was reduced by 2.3M (down from 20M), i.e. around a 10% 
> reduction in overall TPS.
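
A minimal sketch of picking this up from client code (not part of the change 
itself): fs.azure.footer.read.request.size is quoted above, while 
"fs.azure.read.optimizefooterread" is assumed here to be the enabling flag 
and should be verified against the hadoop-azure documentation.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FooterReadTuning {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Enabling flag: an assumption, see note above.
    conf.setBoolean("fs.azure.read.optimizefooterread", true);
    // Footer read buffer size: 512 KB, the default proposed here.
    conf.setLong("fs.azure.footer.read.request.size", 512 * 1024);

    // Hypothetical container/account, for illustration only.
    Path root = new Path("abfs://container@account.dfs.core.windows.net/");
    try (FileSystem fs = root.getFileSystem(conf)) {
      // Any read landing in the last 16 KB of a file can now be served by
      // one 512 KB footer read instead of two metadata round trips.
    }
  }
}
```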



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org




[jira] [Commented] (HADOOP-18910) ABFS: Adding Support for MD5 Hash based integrity verification of the request content during transport

2024-01-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17802096#comment-17802096
 ] 

ASF GitHub Bot commented on HADOOP-18910:
-

anujmodi2021 commented on PR #6069:
URL: https://github.com/apache/hadoop/pull/6069#issuecomment-1875106413

   Thanks for the review @steveloughran 
   If it looks good, please get it merged to trunk




> ABFS: Adding Support for MD5 Hash based integrity verification of the request 
> content during transport 
> ---
>
> Key: HADOOP-18910
> URL: https://issues.apache.org/jira/browse/HADOOP-18910
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Azure Storage supports Content-MD5 request headers in both the Read and 
> Append APIs.
> Read: [Path - Read - REST API (Azure Storage Services) | Microsoft 
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/read]
> Append: [Path - Update - REST API (Azure Storage Services) | Microsoft 
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/update]
> This change adds the client-side support for them. In a Read request, we 
> send the appropriate header, in response to which the server returns the 
> MD5 hash of the data it sends back. On the client, we tally this against 
> the MD5 hash computed from the data received.
> In an Append request, we compute the MD5 hash of the data we are sending to 
> the server and specify it in the appropriate header. The server, on finding 
> that header, tallies it against the MD5 hash it computes on the data 
> received.
> This whole checksum validation support is guarded behind a config, which is 
> disabled by default because with HTTPS the integrity of the data is 
> preserved anyway. It is introduced as an additional data integrity check, 
> which also has a performance impact.
> Users can decide whether to enable it by setting the following config to 
> *"true"* or *"false"* respectively. *Config: 
> "fs.azure.enable.checksum.validation"*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org




Re: [PR] HDFS-17309. RBF: Fix Router Safemode check condition error [hadoop]

2024-01-03 Thread via GitHub


hadoop-yetus commented on PR #6390:
URL: https://github.com/apache/hadoop/pull/6390#issuecomment-1875019152

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 19s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 10s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m  9s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  98m 22s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6390/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6390 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 94c2ca9f2deb 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cf8713af824e85f12040849a144b46663b742396 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6390/2/testReport/ |
   | Max. process+thread count | 2310 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6390/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: