[jira] [Commented] (HDFS-16892) Fix method name of RPC.Builder#setnumReaders

2023-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684485#comment-17684485
 ] 

ASF GitHub Bot commented on HDFS-16892:
---

hadoop-yetus commented on PR #5301:
URL: https://github.com/apache/hadoop/pull/5301#issuecomment-1418646183

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  7s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  21m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m  6s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5301/3/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  24m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  21m 44s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 49s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 26s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 58s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5301/3/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  27m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 16s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  34m 28s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 283m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5301/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5301 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6267bada236f 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d4f5a5e2eaf156491ff4f6d9e9ca2b09b7195a11 |
   | Default Java | Private 

[jira] [Commented] (HDFS-16905) Provide default hadoop.log.dir for tests

2023-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684461#comment-17684461
 ] 

ASF GitHub Bot commented on HDFS-16905:
---

virajjasani commented on PR #5343:
URL: https://github.com/apache/hadoop/pull/5343#issuecomment-1418618542

   We already have this set in the hadoop-project pom: 
`${project.build.directory}/log`
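   
   A minimal sketch of the fallback this issue is aiming at, assuming a test 
   helper (the class and method names below are illustrative, not existing Hadoop 
   code):
   
   ```java
   import java.io.File;
   
   // Hypothetical helper: keep hadoop.log.dir if the build already set it
   // (e.g. via the hadoop-project pom), otherwise default to target/log.
   public final class TestLogDirDefault {
     private TestLogDirDefault() {
     }
   
     public static String resolveLogDir() {
       String logDir = System.getProperty("hadoop.log.dir");
       if (logDir == null || logDir.isEmpty()) {
         logDir = new File("target", "log").getAbsolutePath();
         System.setProperty("hadoop.log.dir", logDir);
       }
       return logDir;
     }
   }
   ```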




> Provide default hadoop.log.dir for tests
> 
>
> Key: HDFS-16905
> URL: https://issues.apache.org/jira/browse/HDFS-16905
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs, hdfs-client
>Affects Versions: 3.4.0, 3.3.5, 3.3.9
> Environment: Tested using the Hadoop development environment Docker 
> image and an IDE on Mac
>Reporter: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Provide a default directory configuration for test logging



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16905) Provide default hadoop.log.dir for tests

2023-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684456#comment-17684456
 ] 

ASF GitHub Bot commented on HDFS-16905:
---

virajjasani commented on PR #5343:
URL: https://github.com/apache/hadoop/pull/5343#issuecomment-1418615144

   Without this patch, do we not have `hadoop.log.dir` being set to 
`{module}/{sub-module}/target/log` by default?




> Provide default hadoop.log.dir for tests
> 
>
> Key: HDFS-16905
> URL: https://issues.apache.org/jira/browse/HDFS-16905
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: hdfs, hdfs-client
>Affects Versions: 3.4.0, 3.3.5, 3.3.9
> Environment: Tested using the Hadoop development environment Docker 
> image and an IDE on Mac
>Reporter: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Provide a default directory configuration for test logging



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16898) Make write lock fine-grain in processCommandFromActor method

2023-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684442#comment-17684442
 ] 

ASF GitHub Bot commented on HDFS-16898:
---

hfutatzhanghb commented on PR #5330:
URL: https://github.com/apache/hadoop/pull/5330#issuecomment-1418588427

   > LOG.info("Took {} ms to process {} commands from NN"
   
   @virajjasani, I totally agree with your opinion. I will modify the code later.
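   
   A hedged sketch of what that logging suggestion could look like around the 
   command-processing loop (placement, variable names, and the processCommand 
   stand-in are assumptions, not the actual change):
   
   ```java
   // Sketch only: time the commands returned by a heartbeat and log the duration.
   // Time is org.apache.hadoop.util.Time; processCommand stands in for the real
   // per-command handling.
   long startTime = Time.monotonicNow();
   int numCommands = (cmds == null) ? 0 : cmds.length;
   if (cmds != null) {
     for (DatanodeCommand cmd : cmds) {
       processCommand(cmd);
     }
   }
   LOG.info("Took {} ms to process {} commands from NN",
       Time.monotonicNow() - startTime, numCommands);
   ```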




> Make write lock fine-grain in processCommandFromActor method
> 
>
> Key: HDFS-16898
> URL: https://issues.apache.org/jira/browse/HDFS-16898
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.4
>Reporter: ZhangHB
>Assignee: ZhangHB
>Priority: Major
>  Labels: pull-request-available
>
> Now in method processCommandFromActor,  we have code like below:
>  
> {code:java}
> writeLock();
> try {
>   if (actor == bpServiceToActive) {
> return processCommandFromActive(cmd, actor);
>   } else {
> return processCommandFromStandby(cmd, actor);
>   }
> } finally {
>   writeUnlock();
> } {code}
> If the processCommandFromActive method takes a long time, the write lock is 
> not released.
>  
> This may block the updateActorStatesFromHeartbeat method in offerService; 
> furthermore, it can drive the datanode's lastContact very high, and the 
> datanode may even be declared dead once lastContact exceeds 600s.
> {code:java}
> bpos.updateActorStatesFromHeartbeat(
> this, resp.getNameNodeHaState());{code}
> Here we can make the write lock in processCommandFromActor fine-grained to 
> address this problem.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16907) Add LastHeartbeatResponseTime for BP service actor

2023-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684419#comment-17684419
 ] 

ASF GitHub Bot commented on HDFS-16907:
---

virajjasani commented on PR #5349:
URL: https://github.com/apache/hadoop/pull/5349#issuecomment-1418518771

   Checkstyle is taken care of in the latest commit. Thanks for the reviews 
@ayushtkn @slfan1989 !!




> Add LastHeartbeatResponseTime for BP service actor
> --
>
> Key: HDFS-16907
> URL: https://issues.apache.org/jira/browse/HDFS-16907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2023-02-03 at 6.12.24 PM.png
>
>
> BP service actor LastHeartbeat is not sufficient to track realtime connection 
> breaks.
> Each BP service actor thread maintains _lastHeartbeatTime_ with the namenode 
> that it is connected to. However, this is updated even if the connection to 
> the namenode is broken.
> Suppose the actor thread keeps heartbeating to the namenode and suddenly the 
> socket connection breaks. When this happens, for a certain duration the actor 
> thread keeps updating _lastHeartbeatTime_ before even initiating the heartbeat 
> connection with the namenode. If the connection cannot be established even 
> after RPC retries are exhausted, an IOException is thrown, meaning that no 
> heartbeat response has been received from the namenode. In the loop, the actor 
> thread keeps retrying the heartbeat connection, and the last heartbeat stays 
> close to 1/2s even though in reality no response is being received from the 
> namenode.
>  
> Sample Exception from the BP service actor thread, during which LastHeartbeat 
> stays very low:
> {code:java}
> 2023-02-03 22:34:55,725 WARN  [xyz:9000] datanode.DataNode - IOException in 
> offerService
> java.io.EOFException: End of File Exception between local host is: "dn-0"; 
> destination host is: "nn-1":9000; : java.io.EOFException; For more details 
> see:  http://wiki.apache.org/hadoop/EOFException
>     at sun.reflect.GeneratedConstructorAccessor34.newInstance(Unknown Source)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:913)
>     at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:862)
>     at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1495)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1392)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
>     at com.sun.proxy.$Proxy17.sendHeartbeat(Unknown Source)
>     at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:168)
>     at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:544)
>     at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:682)
>     at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:890)
>     at java.lang.Thread.run(Thread.java:750)
> Caused by: java.io.EOFException
>     at java.io.DataInputStream.readInt(DataInputStream.java:392)
>     at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1884)
>     at 
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1176)
>     at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1074) {code}
> Attaching screenshots of how last heartbeat value looks when the above error 
> is consistently getting logged.
>  
> Last heartbeat response time is important for initiating any auto-recovery on 
> the datanode. Hence, we should introduce LastHeartbeatResponseTime, which is 
> updated only if the BP service actor thread successfully retrieves a response 
> from the namenode.
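
A minimal sketch of the distinction described above (the class, field, and method 
names are illustrative, not the actual BPServiceActor members): lastHeartbeatTime 
moves before every attempt, while lastHeartbeatResponseTime moves only when the 
namenode actually answers.

{code:java}
// Hypothetical sketch, not the HDFS-16907 patch.
class HeartbeatTimesSketch {
  interface HeartbeatRpc {
    void sendHeartbeat() throws java.io.IOException;
  }

  private volatile long lastHeartbeatTime;          // updated before each attempt
  private volatile long lastHeartbeatResponseTime;  // updated only on success

  void heartbeatLoop(HeartbeatRpc rpc) throws InterruptedException {
    while (true) {
      lastHeartbeatTime = System.currentTimeMillis();
      try {
        rpc.sendHeartbeat();
        // Reached only when a response actually came back, so this timestamp
        // reflects real connectivity even while the socket keeps breaking.
        lastHeartbeatResponseTime = System.currentTimeMillis();
      } catch (java.io.IOException e) {
        // Connection broken: lastHeartbeatTime keeps advancing above, while
        // lastHeartbeatResponseTime stays put and can drive auto-recovery.
      }
      Thread.sleep(3000L);
    }
  }
}
{code}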



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16903) Fix javadoc of Class LightWeightResizableGSet

2023-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684418#comment-17684418
 ] 

ASF GitHub Bot commented on HDFS-16903:
---

hfutatzhanghb commented on PR #5338:
URL: https://github.com/apache/hadoop/pull/5338#issuecomment-1418507344

   > Merged it. Thanks for your contribution, @hfutatzhanghb.
   
   @tasanuma , thanks a lot~




> Fix javadoc of Class LightWeightResizableGSet
> -
>
> Key: HDFS-16903
> URL: https://issues.apache.org/jira/browse/HDFS-16903
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Affects Versions: 3.3.4
>Reporter: ZhangHB
>Assignee: ZhangHB
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> After HDFS-16429 (Add DataSetLockManager to manage fine-grain locks for 
> FsDataSetImpl), the Class LightWeightResizableGSet is thread-safe. So we 
> should fix the docs of it.
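
A hedged sketch of what the corrected class-level javadoc could say (the wording 
is illustrative only, not the committed change):

{code:java}
/**
 * A low-memory-footprint GSet implementation that resizes its internal
 * storage as entries are added.
 *
 * This class is thread-safe (see HDFS-16429), so the earlier
 * "not thread safe" note in the javadoc no longer applies.
 */
{code}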



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16882) RBF: Add cache hit rate metric in MountTableResolver#getDestinationForPath

2023-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684416#comment-17684416
 ] 

ASF GitHub Bot commented on HDFS-16882:
---

hadoop-yetus commented on PR #5276:
URL: https://github.com/apache/hadoop/pull/5276#issuecomment-1418499156

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 28s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  41m  3s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5276/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 155m 12s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterRPCMultipleDestinationMountTableResolver
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5276/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5276 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 59fb197d1df3 4.15.0-197-generic #208-Ubuntu SMP Tue Nov 1 
17:23:37 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 684378d7903287d1ad0bbc7adbf5dc783eeb292a |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5276/9/testReport/ |
   | Max. process+thread count | 2393 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 

[jira] [Resolved] (HDFS-16903) Fix javadoc of Class LightWeightResizableGSet

2023-02-05 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma resolved HDFS-16903.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> Fix javadoc of Class LightWeightResizableGSet
> -
>
> Key: HDFS-16903
> URL: https://issues.apache.org/jira/browse/HDFS-16903
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Affects Versions: 3.3.4
>Reporter: ZhangHB
>Assignee: ZhangHB
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> After HDFS-16429 (Add DataSetLockManager to manage fine-grain locks for 
> FsDataSetImpl), the Class LightWeightResizableGSet is thread-safe. So we 
> should fix the docs of it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-16903) Fix javadoc of Class LightWeightResizableGSet

2023-02-05 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma reassigned HDFS-16903:
---

Assignee: ZhangHB

> Fix javadoc of Class LightWeightResizableGSet
> -
>
> Key: HDFS-16903
> URL: https://issues.apache.org/jira/browse/HDFS-16903
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Affects Versions: 3.3.4
>Reporter: ZhangHB
>Assignee: ZhangHB
>Priority: Trivial
>  Labels: pull-request-available
>
> After HDFS-16429 (Add DataSetLockManager to manage fine-grain locks for 
> FsDataSetImpl), the Class LightWeightResizableGSet is thread-safe. So we 
> should fix the docs of it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16903) Fix javadoc of Class LightWeightResizableGSet

2023-02-05 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-16903:

Description: After HDFS-16429 (Add DataSetLockManager to manage fine-grain 
locks for FsDataSetImpl), the Class LightWeightResizableGSet is thread-safe. So 
we should fix the docs of it.  (was: After [HDFS-16249. Add DataSetLockManager 
to manage fine-grain locks for FsDataSetImpl.], the Class 
LightWeightResizableGSet is thread-safe. So we should fix the docs of it.)

> Fix javadoc of Class LightWeightResizableGSet
> -
>
> Key: HDFS-16903
> URL: https://issues.apache.org/jira/browse/HDFS-16903
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Affects Versions: 3.3.4
>Reporter: ZhangHB
>Priority: Trivial
>  Labels: pull-request-available
>
> After HDFS-16429 (Add DataSetLockManager to manage fine-grain locks for 
> FsDataSetImpl), the Class LightWeightResizableGSet is thread-safe. So we 
> should fix the docs of it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16903) Fix javadoc of Class LightWeightResizableGSet

2023-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684414#comment-17684414
 ] 

ASF GitHub Bot commented on HDFS-16903:
---

tasanuma merged PR #5338:
URL: https://github.com/apache/hadoop/pull/5338




> Fix javadoc of Class LightWeightResizableGSet
> -
>
> Key: HDFS-16903
> URL: https://issues.apache.org/jira/browse/HDFS-16903
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Affects Versions: 3.3.4
>Reporter: ZhangHB
>Priority: Trivial
>  Labels: pull-request-available
>
> After [HDFS-16249. Add DataSetLockManager to manage fine-grain locks for 
> FsDataSetImpl.], the Class LightWeightResizableGSet is thread-safe. So we 
> should fix the docs of it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16903) Fix javadoc of Class LightWeightResizableGSet

2023-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684415#comment-17684415
 ] 

ASF GitHub Bot commented on HDFS-16903:
---

tasanuma commented on PR #5338:
URL: https://github.com/apache/hadoop/pull/5338#issuecomment-1418494348

   Merged it. Thanks for your contribution, @hfutatzhanghb.




> Fix javadoc of Class LightWeightResizableGSet
> -
>
> Key: HDFS-16903
> URL: https://issues.apache.org/jira/browse/HDFS-16903
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Affects Versions: 3.3.4
>Reporter: ZhangHB
>Priority: Trivial
>  Labels: pull-request-available
>
> After [HDFS-16249. Add DataSetLockManager to manage fine-grain locks for 
> FsDataSetImpl.], the Class LightWeightResizableGSet is thread-safe. So we 
> should fix the docs of it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16898) Make write lock fine-grain in processCommandFromActor method

2023-02-05 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684410#comment-17684410
 ] 

Xiaoqiao He commented on HDFS-16898:


Added ZhangHB to the contributors group and assigned this ticket to him.

> Make write lock fine-grain in processCommandFromActor method
> 
>
> Key: HDFS-16898
> URL: https://issues.apache.org/jira/browse/HDFS-16898
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.4
>Reporter: ZhangHB
>Assignee: ZhangHB
>Priority: Major
>  Labels: pull-request-available
>
> Now in method processCommandFromActor,  we have code like below:
>  
> {code:java}
> writeLock();
> try {
>   if (actor == bpServiceToActive) {
> return processCommandFromActive(cmd, actor);
>   } else {
> return processCommandFromStandby(cmd, actor);
>   }
> } finally {
>   writeUnlock();
> } {code}
> If the processCommandFromActive method takes a long time, the write lock is 
> not released.
>  
> This may block the updateActorStatesFromHeartbeat method in offerService; 
> furthermore, it can drive the datanode's lastContact very high, and the 
> datanode may even be declared dead once lastContact exceeds 600s.
> {code:java}
> bpos.updateActorStatesFromHeartbeat(
> this, resp.getNameNodeHaState());{code}
> Here we can make the write lock in processCommandFromActor fine-grained to 
> address this problem.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-16898) Make write lock fine-grain in processCommandFromActor method

2023-02-05 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He reassigned HDFS-16898:
--

Assignee: ZhangHB

> Make write lock fine-grain in processCommandFromActor method
> 
>
> Key: HDFS-16898
> URL: https://issues.apache.org/jira/browse/HDFS-16898
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.4
>Reporter: ZhangHB
>Assignee: ZhangHB
>Priority: Major
>  Labels: pull-request-available
>
> Now in method processCommandFromActor,  we have code like below:
>  
> {code:java}
> writeLock();
> try {
>   if (actor == bpServiceToActive) {
> return processCommandFromActive(cmd, actor);
>   } else {
> return processCommandFromStandby(cmd, actor);
>   }
> } finally {
>   writeUnlock();
> } {code}
> If the processCommandFromActive method takes a long time, the write lock is 
> not released.
>  
> This may block the updateActorStatesFromHeartbeat method in offerService; 
> furthermore, it can drive the datanode's lastContact very high, and the 
> datanode may even be declared dead once lastContact exceeds 600s.
> {code:java}
> bpos.updateActorStatesFromHeartbeat(
> this, resp.getNameNodeHaState());{code}
> Here we can make the write lock in processCommandFromActor fine-grained to 
> address this problem.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16909) Make judging null statement out from for loop in ReplicaMap#mergeAll method.

2023-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684404#comment-17684404
 ] 

ASF GitHub Bot commented on HDFS-16909:
---

hfutatzhanghb opened a new pull request, #5353:
URL: https://github.com/apache/hadoop/pull/5353

   Currently, the code is as below:
   
   ```java
   for (ReplicaInfo replicaInfo : replicaSet) {
 checkBlock(replicaInfo);
 if (curSet == null) {
   // Add an entry for block pool if it does not exist already
   curSet = new LightWeightResizableGSet<>();
   map.put(bp, curSet);
 }
 curSet.put(replicaInfo);
   } 
   ```
   
   The statement
   
   ```java
   if(curSet == null)
   ```
   
   should be moved out of the for loop.
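   
   For reference, a hedged sketch of the hoisted form (an illustration of the 
   proposal, not the exact patch):
   
   ```java
   // Sketch only. Hoisting the null check creates the per-block-pool set once,
   // before the loop; note that this adds the map entry even when replicaSet is
   // empty, which the original loop avoided.
   if (curSet == null) {
     curSet = new LightWeightResizableGSet<>();
     map.put(bp, curSet);
   }
   for (ReplicaInfo replicaInfo : replicaSet) {
     checkBlock(replicaInfo);
     curSet.put(replicaInfo);
   }
   ```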




> Make judging null statement out from for loop in ReplicaMap#mergeAll method.
> ---
>
> Key: HDFS-16909
> URL: https://issues.apache.org/jira/browse/HDFS-16909
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.4
>Reporter: ZhangHB
>Priority: Minor
>
> Currently, the code is as below:
> {code:java}
> for (ReplicaInfo replicaInfo : replicaSet) {
>   checkBlock(replicaInfo);
>   if (curSet == null) {
> // Add an entry for block pool if it does not exist already
> curSet = new LightWeightResizableGSet<>();
> map.put(bp, curSet);
>   }
>   curSet.put(replicaInfo);
> } {code}
> The statement
> {code:java}
> if(curSet == null){code}
> should be moved out of the for loop.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16909) Make judging null statement out from for loop in ReplicaMap#mergeAll method.

2023-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16909:
--
Labels: pull-request-available  (was: )

> Make judging null statement out from for loop in ReplicaMap#mergeAll method.
> ---
>
> Key: HDFS-16909
> URL: https://issues.apache.org/jira/browse/HDFS-16909
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.4
>Reporter: ZhangHB
>Priority: Minor
>  Labels: pull-request-available
>
> Currently, the code is as below:
> {code:java}
> for (ReplicaInfo replicaInfo : replicaSet) {
>   checkBlock(replicaInfo);
>   if (curSet == null) {
> // Add an entry for block pool if it does not exist already
> curSet = new LightWeightResizableGSet<>();
> map.put(bp, curSet);
>   }
>   curSet.put(replicaInfo);
> } {code}
> The statement
> {code:java}
> if(curSet == null){code}
> should be moved out of the for loop.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16909) Make judging null statement out from for loop in ReplicaMap#mergeAll method.

2023-02-05 Thread ZhangHB (Jira)
ZhangHB created HDFS-16909:
--

 Summary: Make judging null statement out from for loop in 
ReplicaMap#mergeAll method.
 Key: HDFS-16909
 URL: https://issues.apache.org/jira/browse/HDFS-16909
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.3.4
Reporter: ZhangHB


Currently, the code is as below:
{code:java}
for (ReplicaInfo replicaInfo : replicaSet) {
  checkBlock(replicaInfo);
  if (curSet == null) {
// Add an entry for block pool if it does not exist already
curSet = new LightWeightResizableGSet<>();
map.put(bp, curSet);
  }
  curSet.put(replicaInfo);
} {code}
The statement
{code:java}
if(curSet == null){code}
should be moved out of the for loop.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16898) Make write lock fine-grain in processCommandFromActor method

2023-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684395#comment-17684395
 ] 

ASF GitHub Bot commented on HDFS-16898:
---

zhangshuyan0 commented on PR #5330:
URL: https://github.com/apache/hadoop/pull/5330#issuecomment-1418437375

   It is great to prevent the heartbeat from being affected by command 
processing. I checked that processCommandFromXXX() doesn't access any mutable 
members inside BPOfferService.
   The only thing to note is that in the original code, after a switchover, the 
new ANN could guarantee that the DN would not execute commands from the old ANN 
as long as it had received two heartbeats from the DN. With the command 
processing moved outside the lock, this guarantee no longer exists. However, as 
@hfutatzhanghb said, the NN marks the DataNode stale after the switchover, which 
means the NN does not rely on this guarantee. So I think this patch is safe.
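   
   A self-contained sketch of the direction discussed here, assuming (as argued 
   above) that the processCommandFromXXX paths need no state that the write lock 
   protects; the names mirror BPOfferService but this is an illustration, not the 
   actual patch:
   
   ```java
   import java.util.concurrent.locks.ReentrantReadWriteLock;
   
   // Sketch only: keep just the actor comparison under the write lock and run the
   // potentially slow command processing outside it, so the heartbeat path
   // (updateActorStatesFromHeartbeat) is no longer blocked by it.
   class FineGrainedLockSketch {
     private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
     private volatile Object bpServiceToActive = new Object();
   
     boolean processCommandFromActor(Object cmd, Object actor) {
       final boolean fromActive;
       lock.writeLock().lock();
       try {
         fromActive = (actor == bpServiceToActive);
       } finally {
         lock.writeLock().unlock();
       }
       return fromActive ? processCommandFromActive(cmd, actor)
                         : processCommandFromStandby(cmd, actor);
     }
   
     private boolean processCommandFromActive(Object cmd, Object actor) {
       return true; // stub
     }
   
     private boolean processCommandFromStandby(Object cmd, Object actor) {
       return true; // stub
     }
   }
   ```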
   




> Make write lock fine-grain in processCommandFromActor method
> 
>
> Key: HDFS-16898
> URL: https://issues.apache.org/jira/browse/HDFS-16898
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.4
>Reporter: ZhangHB
>Priority: Major
>  Labels: pull-request-available
>
> Now in method processCommandFromActor,  we have code like below:
>  
> {code:java}
> writeLock();
> try {
>   if (actor == bpServiceToActive) {
> return processCommandFromActive(cmd, actor);
>   } else {
> return processCommandFromStandby(cmd, actor);
>   }
> } finally {
>   writeUnlock();
> } {code}
> If the processCommandFromActive method takes a long time, the write lock is 
> not released.
>  
> This may block the updateActorStatesFromHeartbeat method in offerService; 
> furthermore, it can drive the datanode's lastContact very high, and the 
> datanode may even be declared dead once lastContact exceeds 600s.
> {code:java}
> bpos.updateActorStatesFromHeartbeat(
> this, resp.getNameNodeHaState());{code}
> Here we can make the write lock in processCommandFromActor fine-grained to 
> address this problem.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16882) RBF: Add cache hit rate metric in MountTableResolver#getDestinationForPath

2023-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684372#comment-17684372
 ] 

ASF GitHub Bot commented on HDFS-16882:
---

tomscut commented on code in PR #5276:
URL: https://github.com/apache/hadoop/pull/5276#discussion_r1096872081


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java:
##
@@ -699,4 +709,13 @@ public void setDefaultNSEnable(boolean defaultNSRWEnable) {
   public void setDisabled(boolean disable) {
 this.disabled = disable;
   }
+
+

Review Comment:
   Please remove the extra blank line.





> RBF: Add cache hit rate metric in MountTableResolver#getDestinationForPath
> --
>
> Key: HDFS-16882
> URL: https://issues.apache.org/jira/browse/HDFS-16882
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.3.4
>Reporter: ZhangHB
>Priority: Minor
>  Labels: pull-request-available
> Attachments: locationCache.png
>
>
> Currently, the default value of 
> "dfs.federation.router.mount-table.cache.enable" is true and the default 
> value of "dfs.federation.router.mount-table.max-cache-size" is 1.
> But there is no metric that displays the cache hit rate. I think we can add a 
> hit-rate metric to watch the cache's performance and tune these parameters 
> better.
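
A hedged sketch of the kind of hit-rate bookkeeping the issue proposes (the class 
and counter names are illustrative, not actual MountTableResolver fields):

{code:java}
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch: count location-cache hits and misses and expose the ratio.
class LocationCacheMetricsSketch {
  private final LongAdder hits = new LongAdder();
  private final LongAdder misses = new LongAdder();

  void recordHit() {
    hits.increment();
  }

  void recordMiss() {
    misses.increment();
  }

  /** Hit rate in [0, 1]; returns 0 if the cache has not been queried yet. */
  double hitRate() {
    long h = hits.sum();
    long total = h + misses.sum();
    return total == 0 ? 0.0 : (double) h / total;
  }
}
{code}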



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16900) Method DataNode#isWrite seems not working in DataTransfer constructor method

2023-02-05 Thread ZhangHB (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684368#comment-17684368
 ] 

ZhangHB commented on HDFS-16900:


So, I think this issue can be closed.

> Method DataNode#isWrite seems not working in DataTransfer constructor method
> 
>
> Key: HDFS-16900
> URL: https://issues.apache.org/jira/browse/HDFS-16900
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.3.4
>Reporter: ZhangHB
>Priority: Major
>
> In the DataTransfer constructor, there is the following code:
> {code:java}
> if (isTransfer(stage, clientname)) {
>   this.throttler = xserver.getTransferThrottler();
> } else if(isWrite(stage)) {
>   this.throttler = xserver.getWriteThrottler();
> } {code}
> The stage is a parameter of the DataTransfer constructor. Let us see where 
> the DataTransfer object is instantiated.
> In the transferReplicaForPipelineRecovery method, the code looks like this:
> {code:java}
> final DataTransfer dataTransferTask = new DataTransfer(targets,
> targetStorageTypes, targetStorageIds, b, stage, client); {code}
> But the stage can never be PIPELINE_SETUP_STREAMING_RECOVERY or 
> PIPELINE_SETUP_APPEND_RECOVERY; it can only be TRANSFER_RBW or 
> TRANSFER_FINALIZED. So I think the isWrite check never takes effect here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16900) Method DataNode#isWrite seems not working in DataTransfer constructor method

2023-02-05 Thread ZhangHB (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684366#comment-17684366
 ] 

ZhangHB commented on HDFS-16900:


Hi, [~elgoiri]. After reading the code here, I found that the original code is 
right. There is no need to change the isWrite logic here, because when pipeline 
recovery occurs, createBlockOutputStream is invoked in the setupPipelineInternal 
method. The createBlockOutputStream method has the following code:

 
{code:java}
BlockConstructionStage bcs = recoveryFlag ?
stage.getRecoveryStage() : stage; {code}
 

It then passes bcs to the writeBlock method, and writeBlock uses 
dataXceiverServer.getWriteThrottler().

 

> Method DataNode#isWrite seems not working in DataTransfer constructor method
> 
>
> Key: HDFS-16900
> URL: https://issues.apache.org/jira/browse/HDFS-16900
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.3.4
>Reporter: ZhangHB
>Priority: Major
>
> In the DataTransfer constructor, there is the following code:
> {code:java}
> if (isTransfer(stage, clientname)) {
>   this.throttler = xserver.getTransferThrottler();
> } else if(isWrite(stage)) {
>   this.throttler = xserver.getWriteThrottler();
> } {code}
> The stage is a parameter of the DataTransfer constructor. Let us see where 
> the DataTransfer object is instantiated.
> In the transferReplicaForPipelineRecovery method, the code looks like this:
> {code:java}
> final DataTransfer dataTransferTask = new DataTransfer(targets,
> targetStorageTypes, targetStorageIds, b, stage, client); {code}
> But the stage can never be PIPELINE_SETUP_STREAMING_RECOVERY or 
> PIPELINE_SETUP_APPEND_RECOVERY; it can only be TRANSFER_RBW or 
> TRANSFER_FINALIZED. So I think the isWrite check never takes effect here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16892) Fix method name of RPC.Builder#setnumReaders

2023-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684260#comment-17684260
 ] 

ASF GitHub Bot commented on HDFS-16892:
---

hadoop-yetus commented on PR #5301:
URL: https://github.com/apache/hadoop/pull/5301#issuecomment-1417068727

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 26s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  21m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 29s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m  6s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5301/2/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  27m  6s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  24m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  21m 38s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5301/2/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   3m 49s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 25s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 59s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5301/2/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  27m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 12s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  34m 41s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 284m 49s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5301/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5301 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c609cbcf45fa 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux