[jira] [Work logged] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?focusedWorklogId=778864&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778864
 ]

ASF GitHub Bot logged work on HDFS-16624:
-

Author: ASF GitHub Bot
Created on: 07/Jun/22 06:39
Start Date: 07/Jun/22 06:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4412:
URL: https://github.com/apache/hadoop/pull/4412#issuecomment-1148257091

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  12m 54s |  |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 45s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 252m 18s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 13s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 375m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4412/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4412 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 25c503b8420f 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 13422ba2a0310ad0a1545646c05f2a30fa27db4b |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4412/1/testReport/ |
   | Max. process+thread count | 3270 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4412/1/console |
   | version

[jira] [Work logged] (HDFS-16619) improve HttpHeaders.Values And HttpHeaders.Names With recommended Class

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16619?focusedWorklogId=778833&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778833
 ]

ASF GitHub Bot logged work on HDFS-16619:
-

Author: ASF GitHub Bot
Created on: 07/Jun/22 04:58
Start Date: 07/Jun/22 04:58
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4406:
URL: https://github.com/apache/hadoop/pull/4406#issuecomment-1148196108

   @virajjasani @tomscut Please help review the code; this change only 
replaces deprecated imports. There should be no functional risk, and it will 
not cause JUnit failures.




Issue Time Tracking
---

Worklog Id: (was: 778833)
Time Spent: 40m  (was: 0.5h)

> improve HttpHeaders.Values And HttpHeaders.Names With recommended Class
> --
>
> Key: HDFS-16619
> URL: https://issues.apache.org/jira/browse/HDFS-16619
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> HttpHeaders.Values and HttpHeaders.Names are deprecated; use 
> HttpHeaderValues and HttpHeaderNames instead.
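For illustration, a minimal sketch of the substitution this issue describes, assuming Netty 4's io.netty.handler.codec.http package; the CONNECTION/CLOSE constants are picked purely as examples, not quoted from the patch:
{code:java}
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;

public class HeaderConstantsSketch {
  public static void main(String[] args) {
    // Deprecated form: HttpHeaders.Names.CONNECTION / HttpHeaders.Values.CLOSE
    // Recommended form: the HttpHeaderNames / HttpHeaderValues constants below
    CharSequence name = HttpHeaderNames.CONNECTION;
    CharSequence value = HttpHeaderValues.CLOSE;
    System.out.println(name + ": " + value);
  }
}
{code}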



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?focusedWorklogId=778832&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778832
 ]

ASF GitHub Bot logged work on HDFS-16624:
-

Author: ASF GitHub Bot
Created on: 07/Jun/22 04:45
Start Date: 07/Jun/22 04:45
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4412:
URL: https://github.com/apache/hadoop/pull/4412#issuecomment-1148190628

   > That's surprising indeed. In one of the recent PRs, I see this test passing:
   > 
   > 1. 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4402/1/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdmin/testAllDatanodesReconfig/
   > 2. 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4402/2/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdmin/testAllDatanodesReconfig/
   
   But if you look at the screenshot from my local debugger (attached to 
[HDFS-16624](https://issues.apache.org/jira/browse/HDFS-16624)), the index is 
indeed wrong.




Issue Time Tracking
---

Worklog Id: (was: 778832)
Time Spent: 1h  (was: 50m)

> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Attachments: testAllDatanodesReconfig.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> While working on HDFS-16619, JUnit reported the following error:
> expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
> was:<[ From: "false"]>
> Debugging showed that the assertion (TestDFSAdmin line 1208) reads the wrong 
> element, outs.get(2); the index should be 1.
> Please refer to the attached screenshot of the debugging session:
> !testAllDatanodesReconfig.png!
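To make the off-by-one concrete, a self-contained sketch follows; the simulated output lines are assumptions about what the test captures from dfsadmin, not quotes of TestDFSAdmin itself:
{code:java}
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;

public class ReconfigIndexSketch {
  public static void main(String[] args) {
    // Simulated lines captured from "dfsadmin ... getReconfigurationStatus";
    // the exact wording of the surrounding lines is an assumption.
    List<String> outs = Arrays.asList(
        "Reconfiguring status for node [...]: started ... and finished ...",
        "SUCCESS: Changed property dfs.datanode.peer.stats.enabled",
        "\tFrom: \"false\"",
        "\tTo: \"true\"");
    // outs.get(2) would see the From-line, producing the reported mismatch;
    // the SUCCESS line sits at index 1.
    assertEquals("SUCCESS: Changed property dfs.datanode.peer.stats.enabled",
        outs.get(1));
  }
}
{code}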



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?focusedWorklogId=778831&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778831
 ]

ASF GitHub Bot logged work on HDFS-16624:
-

Author: ASF GitHub Bot
Created on: 07/Jun/22 04:40
Start Date: 07/Jun/22 04:40
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on PR #4412:
URL: https://github.com/apache/hadoop/pull/4412#issuecomment-1148188221

   That's surprising indeed. In one of the recent PRs, I see this test passing:
   1. 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4402/1/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdmin/testAllDatanodesReconfig/
   2. 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4402/2/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdmin/testAllDatanodesReconfig/
   




Issue Time Tracking
---

Worklog Id: (was: 778831)
Time Spent: 50m  (was: 40m)

> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Attachments: testAllDatanodesReconfig.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> While working on HDFS-16619, JUnit reported the following error:
> expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
> was:<[ From: "false"]>
> Debugging showed that the assertion (TestDFSAdmin line 1208) reads the wrong 
> element, outs.get(2); the index should be 1.
> Please refer to the attached screenshot of the debugging session:
> !testAllDatanodesReconfig.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16563) Namenode WebUI prints sensitive information on Token Expiry

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16563?focusedWorklogId=778827&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778827
 ]

ASF GitHub Bot logged work on HDFS-16563:
-

Author: ASF GitHub Bot
Created on: 07/Jun/22 04:07
Start Date: 07/Jun/22 04:07
Worklog Time Spent: 10m 
  Work Description: prasad-acit commented on PR #4241:
URL: https://github.com/apache/hadoop/pull/4241#issuecomment-1148171582

   Thanks @steveloughran 




Issue Time Tracking
---

Worklog Id: (was: 778827)
Time Spent: 3h  (was: 2h 50m)

> Namenode WebUI prints sensitive information on Token Expiry
> --
>
> Key: HDFS-16563
> URL: https://issues.apache.org/jira/browse/HDFS-16563
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security, webhdfs
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-04-27-23-01-16-033.png, 
> image-2022-04-27-23-28-40-568.png
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Log in to the Namenode WebUI.
> Wait for the token to expire. (Or set the token refresh time 
> dfs.namenode.delegation.token.renew/update-interval to a lower value.)
> Refresh the WebUI after the token expiry.
> The full token information gets printed in the WebUI.
>  
> !image-2022-04-27-23-01-16-033.png!
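A hedged sketch of the reproduction shortcut above, assuming a test-side Configuration; dfs.namenode.delegation.token.renew-interval is a real key (in milliseconds), the 60-second value is illustrative, and the description's "renew/update-interval" shorthand covers the related keys as well:
{code:java}
import org.apache.hadoop.conf.Configuration;

public class TokenExpirySketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Shrink the renew interval so delegation tokens expire quickly
    // (example value; default is one day).
    conf.setLong("dfs.namenode.delegation.token.renew-interval", 60_000L);
    System.out.println(conf.get("dfs.namenode.delegation.token.renew-interval"));
  }
}
{code}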



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-16613) EC: Improve performance of decommissioning dn with many ec blocks

2022-06-06 Thread caozhiqiang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17550782#comment-17550782
 ] 

caozhiqiang edited comment on HDFS-16613 at 6/7/22 3:46 AM:


In my cluster tests, the following optimizations maximized the I/O 
performance of the decommissioning DN, and the time spent decommissioning a 
DN dropped from 3 hours to half an hour.
 # Apply this patch
 # Increase the value of dfs.namenode.replication.max-streams-hard-limit
 # Decrease the value of dfs.namenode.reconstruction.pending.timeout-sec to 
shorten the time interval for checking pendingReconstructions.

!image-2022-06-07-11-46-42-389.png|width=552,height=165!


was (Author: caozhiqiang):
In my cluster tests, the following optimizations maximized the I/O 
performance of the decommissioning DN, and the time spent decommissioning a 
DN dropped from 3 hours to half an hour.
 # Apply this patch
 # Increase the value of dfs.namenode.replication.max-streams-hard-limit
 # Decrease the value of dfs.namenode.reconstruction.pending.timeout-sec to 
shorten the time interval for checking pendingReconstructions.

> EC: Improve performance of decommissioning dn with many ec blocks
> -
>
> Key: HDFS-16613
> URL: https://issues.apache.org/jira/browse/HDFS-16613
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ec, erasure-coding, namenode
>Affects Versions: 3.4.0
>Reporter: caozhiqiang
>Assignee: caozhiqiang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-06-07-11-46-42-389.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In an HDFS cluster with many EC blocks, decommissioning a DN is very slow. 
> The reason is that, unlike replicated blocks, which can be copied from any 
> DN holding a replica, EC blocks have to be copied from the decommissioning 
> DN itself.
> The configurations dfs.namenode.replication.max-streams and 
> dfs.namenode.replication.max-streams-hard-limit limit the replication speed, 
> but increasing them puts the whole cluster's network at risk. So a new 
> configuration should be added that limits only the decommissioning DN, 
> distinct from the cluster-wide max-streams limit.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16613) EC: Improve performance of decommissioning dn with many ec blocks

2022-06-06 Thread caozhiqiang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17550782#comment-17550782
 ] 

caozhiqiang commented on HDFS-16613:


In my cluster tests, the following optimizations maximized the I/O 
performance of the decommissioning DN, and the time spent decommissioning a 
DN dropped from 3 hours to half an hour (a configuration sketch follows the 
list below).
 # Apply this patch
 # Increase the value of dfs.namenode.replication.max-streams-hard-limit
 # Decrease the value of dfs.namenode.reconstruction.pending.timeout-sec to 
shorten the time interval for checking pendingReconstructions.
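A hedged sketch of items 2 and 3 above via Hadoop's Configuration API; the key names are taken from the comment, while the values are illustrative only and must be tuned per cluster:
{code:java}
import org.apache.hadoop.conf.Configuration;

public class DecommissionTuningSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Item 2: raise the hard limit on replication streams (example value).
    conf.setInt("dfs.namenode.replication.max-streams-hard-limit", 40);
    // Item 3: check pendingReconstructions more frequently
    // (example value, in seconds).
    conf.setInt("dfs.namenode.reconstruction.pending.timeout-sec", 60);
    System.out.println(conf.get("dfs.namenode.replication.max-streams-hard-limit"));
  }
}
{code}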

> EC: Improve performance of decommissioning dn with many ec blocks
> -
>
> Key: HDFS-16613
> URL: https://issues.apache.org/jira/browse/HDFS-16613
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ec, erasure-coding, namenode
>Affects Versions: 3.4.0
>Reporter: caozhiqiang
>Assignee: caozhiqiang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In an HDFS cluster with many EC blocks, decommissioning a DN is very slow. 
> The reason is that, unlike replicated blocks, which can be copied from any 
> DN holding a replica, EC blocks have to be copied from the decommissioning 
> DN itself.
> The configurations dfs.namenode.replication.max-streams and 
> dfs.namenode.replication.max-streams-hard-limit limit the replication speed, 
> but increasing them puts the whole cluster's network at risk. So a new 
> configuration should be added that limits only the decommissioning DN, 
> distinct from the cluster-wide max-streams limit.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16613) EC: Improve performance of decommissioning dn with many ec blocks

2022-06-06 Thread Hiroyuki Adachi (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17550779#comment-17550779
 ] 

Hiroyuki Adachi commented on HDFS-16613:


Hi [~caozhiqiang],

Using dfs.namenode.replication.max-streams-hard-limit is simple, but in my 
understanding it keeps the decommissioning node busy, and most of the EC 
blocks will then be reconstructed rather than replicated (see HDFS-14768). 
Since reconstruction is expensive, HDFS-8786 made EC blocks on a 
decommissioning node use replication instead, and some people may prefer 
that behavior.
What do you think?

> EC: Improve performance of decommissioning dn with many ec blocks
> -
>
> Key: HDFS-16613
> URL: https://issues.apache.org/jira/browse/HDFS-16613
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ec, erasure-coding, namenode
>Affects Versions: 3.4.0
>Reporter: caozhiqiang
>Assignee: caozhiqiang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In an HDFS cluster with many EC blocks, decommissioning a DN is very slow. 
> The reason is that, unlike replicated blocks, which can be copied from any 
> DN holding a replica, EC blocks have to be copied from the decommissioning 
> DN itself.
> The configurations dfs.namenode.replication.max-streams and 
> dfs.namenode.replication.max-streams-hard-limit limit the replication speed, 
> but increasing them puts the whole cluster's network at risk. So a new 
> configuration should be added that limits only the decommissioning DN, 
> distinct from the cluster-wide max-streams limit.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?focusedWorklogId=778820&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778820
 ]

ASF GitHub Bot logged work on HDFS-16624:
-

Author: ASF GitHub Bot
Created on: 07/Jun/22 02:47
Start Date: 07/Jun/22 02:47
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4412:
URL: https://github.com/apache/hadoop/pull/4412#issuecomment-1148133845

   > Thanks for the PR @slfan1989 Is this test consistently failing or is it 
more of a flaky one?
   
   Hi @virajjasani , 
   This looks like a consistent failure: I can reproduce it by running the 
test multiple times locally, and it failed in both test runs of 
https://github.com/apache/hadoop/pull/4406.
   
   
[report1](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4406/1/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdmin/testAllDatanodesReconfig/)
   
   
[report2](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4406/2/testReport/org.apache.hadoop.hdfs.tools/TestDFSAdmin/testAllDatanodesReconfig/)
   




Issue Time Tracking
---

Worklog Id: (was: 778820)
Time Spent: 40m  (was: 0.5h)

> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Attachments: testAllDatanodesReconfig.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> While working on HDFS-16619, JUnit reported the following error:
> expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
> was:<[ From: "false"]>
> Debugging showed that the assertion (TestDFSAdmin line 1208) reads the wrong 
> element, outs.get(2); the index should be 1.
> Please refer to the attached screenshot of the debugging session:
> !testAllDatanodesReconfig.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?focusedWorklogId=778819&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778819
 ]

ASF GitHub Bot logged work on HDFS-16624:
-

Author: ASF GitHub Bot
Created on: 07/Jun/22 02:40
Start Date: 07/Jun/22 02:40
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on PR #4412:
URL: https://github.com/apache/hadoop/pull/4412#issuecomment-1148130883

   Thanks for the PR @slfan1989 
   Is this test consistently failing or is it more of a flaky one?




Issue Time Tracking
---

Worklog Id: (was: 778819)
Time Spent: 0.5h  (was: 20m)

> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Attachments: testAllDatanodesReconfig.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> While working on HDFS-16619, JUnit reported the following error:
> expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
> was:<[ From: "false"]>
> Debugging showed that the assertion (TestDFSAdmin line 1208) reads the wrong 
> element, outs.get(2); the index should be 1.
> Please refer to the attached screenshot of the debugging session:
> !testAllDatanodesReconfig.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16557) BootstrapStandby failed because of checking gap for inprogress EditLogInputStream

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16557?focusedWorklogId=778816&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778816
 ]

ASF GitHub Bot logged work on HDFS-16557:
-

Author: ASF GitHub Bot
Created on: 07/Jun/22 02:20
Start Date: 07/Jun/22 02:20
Worklog Time Spent: 10m 
  Work Description: tomscut commented on PR #4219:
URL: https://github.com/apache/hadoop/pull/4219#issuecomment-1148120968

   Hi @jojochuang @tasanuma @Hexiaoqiao, could you please take a look as well? 
Thanks.




Issue Time Tracking
---

Worklog Id: (was: 778816)
Time Spent: 2h 40m  (was: 2.5h)

> BootstrapStandby failed because of checking gap for inprogress 
> EditLogInputStream
> -
>
> Key: HDFS-16557
> URL: https://issues.apache.org/jira/browse/HDFS-16557
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-04-22-17-17-14-577.png, 
> image-2022-04-22-17-17-14-618.png, image-2022-04-22-17-17-23-113.png, 
> image-2022-04-22-17-17-32-487.png
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The lastTxId of an in-progress EditLogInputStream isn't necessarily 
> HdfsServerConstants.INVALID_TXID. We can determine its status directly via 
> EditLogInputStream#isInProgress.
> We introduced [SBN READ] and set {{dfs.ha.tail-edits.in-progress=true}}. 
> During bootstrapStandby, an in-progress EditLogInputStream is then 
> misjudged, resulting in a gap-check failure, which causes bootstrapStandby 
> to fail.
> hdfs namenode -bootstrapStandby
> !image-2022-04-22-17-17-32-487.png|width=766,height=161!
> !image-2022-04-22-17-17-14-577.png|width=598,height=187!
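A hedged sketch of the distinction the description draws; the helper names are hypothetical, while EditLogInputStream#getLastTxId, EditLogInputStream#isInProgress, and HdfsServerConstants.INVALID_TXID are the APIs it names:
{code:java}
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
import org.apache.hadoop.hdfs.server.namenode.EditLogInputStream;

public final class EditLogStreamCheckSketch {
  // Before (buggy assumption): an in-progress stream always has
  // lastTxId == INVALID_TXID, so anything else is treated as finalized.
  static boolean looksFinalizedByTxId(EditLogInputStream s) {
    return s.getLastTxId() != HdfsServerConstants.INVALID_TXID;
  }

  // After (per the description): ask the stream itself.
  static boolean isFinalized(EditLogInputStream s) {
    return !s.isInProgress();
  }
}
{code}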



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16563) Namenode WebUI prints sensitive information on Token Expiry

2022-06-06 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun resolved HDFS-16563.
--
Resolution: Resolved

> Namenode WebUI prints sensitive information on Token Expiry
> --
>
> Key: HDFS-16563
> URL: https://issues.apache.org/jira/browse/HDFS-16563
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security, webhdfs
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-04-27-23-01-16-033.png, 
> image-2022-04-27-23-28-40-568.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Log in to the Namenode WebUI.
> Wait for the token to expire. (Or set the token refresh time 
> dfs.namenode.delegation.token.renew/update-interval to a lower value.)
> Refresh the WebUI after the token expiry.
> The full token information gets printed in the WebUI.
>  
> !image-2022-04-27-23-01-16-033.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16609) Fix flaky JUnit tests that often report timeouts

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16609?focusedWorklogId=778815&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778815
 ]

ASF GitHub Bot logged work on HDFS-16609:
-

Author: ASF GitHub Bot
Created on: 07/Jun/22 02:07
Start Date: 07/Jun/22 02:07
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on code in PR #4382:
URL: https://github.com/apache/hadoop/pull/4382#discussion_r890702393


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java:
##
@@ -282,7 +282,7 @@ public void testServerDefaultsWithMinimalCaching() throws 
Exception  {
   // do nothing;
   return false;
 }
-  }, 1, 3000);
+  }, 1, 6000);

Review Comment:
   Thanks for the review; I will fix it.
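   For context, a minimal sketch of the GenericTestUtils.waitFor pattern this diff tunes, assuming trunk's Supplier-based signature; the deadline condition is a stand-in, not the test's real check:
   ```java
   import java.util.concurrent.TimeoutException;
   import org.apache.hadoop.test.GenericTestUtils;

   public class WaitForTimeoutSketch {
     public static void main(String[] args)
         throws TimeoutException, InterruptedException {
       final long deadline = System.currentTimeMillis() + 100; // stand-in condition
       // Poll the condition every 1 ms, giving up after 6000 ms
       // (the timeout doubled from 3000 ms in this patch).
       GenericTestUtils.waitFor(() -> System.currentTimeMillis() > deadline, 1, 6000);
     }
   }
   ```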





Issue Time Tracking
---

Worklog Id: (was: 778815)
Time Spent: 1h  (was: 50m)

> Fix flaky JUnit tests that often report timeouts
> -
>
> Key: HDFS-16609
> URL: https://issues.apache.org/jira/browse/HDFS-16609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> While working on HDFS-16590, JUnit tests often reported errors. One 
> recurring class of failures is timeouts, and these can be avoided by 
> increasing the timeout values.
> The modified methods are as follows:
> 1.org.apache.hadoop.hdfs.TestFileCreation#testServerDefaultsWithMinimalCaching
> {code:java}
> [ERROR] 
> testServerDefaultsWithMinimalCaching(org.apache.hadoop.hdfs.TestFileCreation) 
>  Time elapsed: 7.136 s  <<< ERROR!
> java.util.concurrent.TimeoutException: 
> Timed out waiting for condition. 
> Thread diagnostics: 
> [WARNING] 
> org.apache.hadoop.hdfs.TestFileCreation.testServerDefaultsWithMinimalCaching(org.apache.hadoop.hdfs.TestFileCreation)
> [ERROR]   Run 1: TestFileCreation.testServerDefaultsWithMinimalCaching:277 
> Timeout Timed out ...
> [INFO]   Run 2: PASS{code}
> 2.org.apache.hadoop.hdfs.TestDFSShell#testFilePermissions
> {code:java}
> [ERROR] testFilePermissions(org.apache.hadoop.hdfs.TestDFSShell)  Time 
> elapsed: 30.022 s  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 30000 
> milliseconds
>   at java.lang.Thread.dumpThreads(Native Method)
>   at java.lang.Thread.getStackTrace(Thread.java:1549)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout.createTimeoutException(FailOnTimeout.java:182)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout.getResult(FailOnTimeout.java:177)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:128)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> [WARNING] 
> org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions(org.apache.hadoop.hdfs.TestDFSShell)
> [ERROR]   Run 1: TestDFSShell.testFilePermissions TestTimedOut test timed out 
> after 3 mil...
> [INFO]   Run 2: PASS {code}
> 3.org.apache.hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier#testSPSWhenFileHasExcessRedundancyBlocks
> {code:java}
> [ERROR] 
> testSPSWhenFileHasExcessRedundancyBlocks(org.apache.hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier)
>   Time elapsed: 67.904 s  <<< ERROR!
> java.util.concurrent.TimeoutException: 
> Timed out waiting for condition. 
> [WARNING] 
> org.apache.hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier.testSPSWhenFileHasExcessRedundancyBlocks(org.apache.hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier)
> [ERROR]   Run 1: 
> TestExternalStoragePolicySatisfier.testSPSWhenFileHasExcessRedundancyBlocks:1379
>  Timeout
> [ERROR]   Run 2: 
> TestExternalStoragePolicySatisfier.testSPSWhenFileHasExcessRedundancyBlocks:1379
>  Timeout
> [INFO]   Run 3: PASS {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16609) Fix flaky JUnit tests that often report timeouts

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16609?focusedWorklogId=778811&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778811
 ]

ASF GitHub Bot logged work on HDFS-16609:
-

Author: ASF GitHub Bot
Created on: 07/Jun/22 01:32
Start Date: 07/Jun/22 01:32
Worklog Time Spent: 10m 
  Work Description: tomscut commented on code in PR #4382:
URL: https://github.com/apache/hadoop/pull/4382#discussion_r890688840


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java:
##
@@ -282,7 +282,7 @@ public void testServerDefaultsWithMinimalCaching() throws 
Exception  {
   // do nothing;
   return false;
 }
-  }, 1, 3000);
+  }, 1, 6000);

Review Comment:
   The comment `Wait for 3 seconds` above also needs to be changed.





Issue Time Tracking
---

Worklog Id: (was: 778811)
Time Spent: 50m  (was: 40m)

> Fix flaky JUnit tests that often report timeouts
> -
>
> Key: HDFS-16609
> URL: https://issues.apache.org/jira/browse/HDFS-16609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> While working on HDFS-16590, JUnit tests often reported errors. One 
> recurring class of failures is timeouts, and these can be avoided by 
> increasing the timeout values.
> The modified methods are as follows:
> 1.org.apache.hadoop.hdfs.TestFileCreation#testServerDefaultsWithMinimalCaching
> {code:java}
> [ERROR] 
> testServerDefaultsWithMinimalCaching(org.apache.hadoop.hdfs.TestFileCreation) 
>  Time elapsed: 7.136 s  <<< ERROR!
> java.util.concurrent.TimeoutException: 
> Timed out waiting for condition. 
> Thread diagnostics: 
> [WARNING] 
> org.apache.hadoop.hdfs.TestFileCreation.testServerDefaultsWithMinimalCaching(org.apache.hadoop.hdfs.TestFileCreation)
> [ERROR]   Run 1: TestFileCreation.testServerDefaultsWithMinimalCaching:277 
> Timeout Timed out ...
> [INFO]   Run 2: PASS{code}
> 2.org.apache.hadoop.hdfs.TestDFSShell#testFilePermissions
> {code:java}
> [ERROR] testFilePermissions(org.apache.hadoop.hdfs.TestDFSShell)  Time 
> elapsed: 30.022 s  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 30000 
> milliseconds
>   at java.lang.Thread.dumpThreads(Native Method)
>   at java.lang.Thread.getStackTrace(Thread.java:1549)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout.createTimeoutException(FailOnTimeout.java:182)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout.getResult(FailOnTimeout.java:177)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:128)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> [WARNING] 
> org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions(org.apache.hadoop.hdfs.TestDFSShell)
> [ERROR]   Run 1: TestDFSShell.testFilePermissions TestTimedOut test timed out 
> after 3 mil...
> [INFO]   Run 2: PASS {code}
> 3.org.apache.hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier#testSPSWhenFileHasExcessRedundancyBlocks
> {code:java}
> [ERROR] 
> testSPSWhenFileHasExcessRedundancyBlocks(org.apache.hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier)
>   Time elapsed: 67.904 s  <<< ERROR!
> java.util.concurrent.TimeoutException: 
> Timed out waiting for condition. 
> [WARNING] 
> org.apache.hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier.testSPSWhenFileHasExcessRedundancyBlocks(org.apache.hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier)
> [ERROR]   Run 1: 
> TestExternalStoragePolicySatisfier.testSPSWhenFileHasExcessRedundancyBlocks:1379
>  Timeout
> [ERROR]   Run 2: 
> TestExternalStoragePolicySatisfier.testSPSWhenFileHasExcessRedundancyBlocks:1379
>  Timeout
> [INFO]   Run 3: PASS {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16625) Unit tests aren't checking for PMDK availability

2022-06-06 Thread Ashutosh Gupta (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17550730#comment-17550730
 ] 

Ashutosh Gupta commented on HDFS-16625:
---

Thanks for pointing out the issue, [~svaughan]. Could you also add the 
list/scope of tests that need to be updated? Thanks.

> Unit tests aren't checking for PMDK availability
> 
>
> Key: HDFS-16625
> URL: https://issues.apache.org/jira/browse/HDFS-16625
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.4.0, 3.3.4
>Reporter: Steve Vaughan
>Priority: Blocker
>
> There are unit tests that require native PMDK libraries but don't check 
> whether the library is available, resulting in unsuccessful tests.  Adding 
> the following to the test setup addresses the problem.
> {code:java}
> assumeTrue("Requires PMDK", NativeIO.POSIX.isPmdkAvailable()); {code}
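A self-contained sketch of where that guard would sit, assuming JUnit 4; the test class and method names are hypothetical:
{code:java}
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.io.nativeio.NativeIO;
import org.junit.Before;
import org.junit.Test;

public class TestPmdkBackedCacheSketch {
  @Before
  public void requirePmdk() {
    // Skip (rather than fail) when the native PMDK library is unavailable.
    assumeTrue("Requires PMDK", NativeIO.POSIX.isPmdkAvailable());
  }

  @Test
  public void testSomethingUsingPmem() {
    // test body that relies on PMDK-backed persistent memory (hypothetical)
  }
}
{code}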



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16609) Fix flaky JUnit tests that often report timeouts

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16609?focusedWorklogId=778801&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778801
 ]

ASF GitHub Bot logged work on HDFS-16609:
-

Author: ASF GitHub Bot
Created on: 07/Jun/22 00:35
Start Date: 07/Jun/22 00:35
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4382:
URL: https://github.com/apache/hadoop/pull/4382#issuecomment-1148069751

   @tomscut Please help review the code, thank you very much!




Issue Time Tracking
---

Worklog Id: (was: 778801)
Time Spent: 40m  (was: 0.5h)

> Fix flaky JUnit tests that often report timeouts
> -
>
> Key: HDFS-16609
> URL: https://issues.apache.org/jira/browse/HDFS-16609
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> While working on HDFS-16590, JUnit tests often reported errors. One 
> recurring class of failures is timeouts, and these can be avoided by 
> increasing the timeout values.
> The modified methods are as follows:
> 1.org.apache.hadoop.hdfs.TestFileCreation#testServerDefaultsWithMinimalCaching
> {code:java}
> [ERROR] 
> testServerDefaultsWithMinimalCaching(org.apache.hadoop.hdfs.TestFileCreation) 
>  Time elapsed: 7.136 s  <<< ERROR!
> java.util.concurrent.TimeoutException: 
> Timed out waiting for condition. 
> Thread diagnostics: 
> [WARNING] 
> org.apache.hadoop.hdfs.TestFileCreation.testServerDefaultsWithMinimalCaching(org.apache.hadoop.hdfs.TestFileCreation)
> [ERROR]   Run 1: TestFileCreation.testServerDefaultsWithMinimalCaching:277 
> Timeout Timed out ...
> [INFO]   Run 2: PASS{code}
> 2.org.apache.hadoop.hdfs.TestDFSShell#testFilePermissions
> {code:java}
> [ERROR] testFilePermissions(org.apache.hadoop.hdfs.TestDFSShell)  Time 
> elapsed: 30.022 s  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 30000 
> milliseconds
>   at java.lang.Thread.dumpThreads(Native Method)
>   at java.lang.Thread.getStackTrace(Thread.java:1549)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout.createTimeoutException(FailOnTimeout.java:182)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout.getResult(FailOnTimeout.java:177)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:128)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> [WARNING] 
> org.apache.hadoop.hdfs.TestDFSShell.testFilePermissions(org.apache.hadoop.hdfs.TestDFSShell)
> [ERROR]   Run 1: TestDFSShell.testFilePermissions TestTimedOut test timed out 
> after 3 mil...
> [INFO]   Run 2: PASS {code}
> 3.org.apache.hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier#testSPSWhenFileHasExcessRedundancyBlocks
> {code:java}
> [ERROR] 
> testSPSWhenFileHasExcessRedundancyBlocks(org.apache.hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier)
>   Time elapsed: 67.904 s  <<< ERROR!
> java.util.concurrent.TimeoutException: 
> Timed out waiting for condition. 
> [WARNING] 
> org.apache.hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier.testSPSWhenFileHasExcessRedundancyBlocks(org.apache.hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier)
> [ERROR]   Run 1: 
> TestExternalStoragePolicySatisfier.testSPSWhenFileHasExcessRedundancyBlocks:1379
>  Timeout
> [ERROR]   Run 2: 
> TestExternalStoragePolicySatisfier.testSPSWhenFileHasExcessRedundancyBlocks:1379
>  Timeout
> [INFO]   Run 3: PASS {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?focusedWorklogId=778796&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778796
 ]

ASF GitHub Bot logged work on HDFS-16624:
-

Author: ASF GitHub Bot
Created on: 07/Jun/22 00:30
Start Date: 07/Jun/22 00:30
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4412:
URL: https://github.com/apache/hadoop/pull/4412#issuecomment-1148067099

   @virajjasani @tomscut Please help review the code.




Issue Time Tracking
---

Worklog Id: (was: 778796)
Time Spent: 20m  (was: 10m)

> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Attachments: testAllDatanodesReconfig.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> While working on HDFS-16619, JUnit reported the following error:
> expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
> was:<[ From: "false"]>
> Debugging showed that the assertion (TestDFSAdmin line 1208) reads the wrong 
> element, outs.get(2); the index should be 1.
> Please refer to the attached screenshot of the debugging session:
> !testAllDatanodesReconfig.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16625) Unit tests aren't checking for PMDK availability

2022-06-06 Thread Steve Vaughan (Jira)
Steve Vaughan created HDFS-16625:


 Summary: Unit tests aren't checking for PMDK availability
 Key: HDFS-16625
 URL: https://issues.apache.org/jira/browse/HDFS-16625
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Affects Versions: 3.4.0, 3.3.4
Reporter: Steve Vaughan


There are unit tests that require native PMDK libraries but don't check 
whether the library is available, resulting in unsuccessful tests.  Adding the 
following to the test setup addresses the problem.
{code:java}
assumeTrue("Requires PMDK", NativeIO.POSIX.isPmdkAvailable()); {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16624:
--
Labels: pull-request-available  (was: )

> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Attachments: testAllDatanodesReconfig.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While working on HDFS-16619, JUnit reported the following error:
> expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
> was:<[ From: "false"]>
> Debugging showed that the assertion (TestDFSAdmin line 1208) reads the wrong 
> element, outs.get(2); the index should be 1.
> Please refer to the attached screenshot of the debugging session:
> !testAllDatanodesReconfig.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?focusedWorklogId=778795&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778795
 ]

ASF GitHub Bot logged work on HDFS-16624:
-

Author: ASF GitHub Bot
Created on: 07/Jun/22 00:22
Start Date: 07/Jun/22 00:22
Worklog Time Spent: 10m 
  Work Description: slfan1989 opened a new pull request, #4412:
URL: https://github.com/apache/hadoop/pull/4412

   JIRA: HDFS-16624. Fix 
org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR.
   
   While working on 
[HDFS-16619](https://issues.apache.org/jira/browse/HDFS-16619), JUnit 
reported the following error:
   
   ```
   expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
was:<[ From: "false"]>
   ```
   
   Debugging showed that the assertion (line 1208) reads the wrong element, 
outs.get(2); the index should be 1.
   
   Please see the jira for details.




Issue Time Tracking
---

Worklog Id: (was: 778795)
Remaining Estimate: 0h
Time Spent: 10m

> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Attachments: testAllDatanodesReconfig.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While working on HDFS-16619, JUnit reported the following error:
> expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
> was:<[ From: "false"]>
> Debugging showed that the assertion (TestDFSAdmin line 1208) reads the wrong 
> element, outs.get(2); the index should be 1.
> Please refer to the attached screenshot of the debugging session:
> !testAllDatanodesReconfig.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-06 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16624:
-
Description: 
While working on HDFS-16619, JUnit reported the following error:

expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
was:<[ From: "false"]>

Debugging showed that the assertion (TestDFSAdmin line 1208) reads the wrong 
element, outs.get(2); the index should be 1.

Please refer to the attached screenshot of the debugging session:

!testAllDatanodesReconfig.png!

  was:
While working on HDFS-16619, JUnit reported the following error:

expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
was:<[ From: "false"]>

Debugging showed that the assertion reads the wrong element, outs.get(x).

Please refer to the attached screenshot of the debugging session:

!testAllDatanodesReconfig.png!


> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Attachments: testAllDatanodesReconfig.png
>
>
> While working on HDFS-16619, JUnit reported the following error:
> expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
> was:<[ From: "false"]>
> Debugging showed that the assertion (TestDFSAdmin line 1208) reads the wrong 
> element, outs.get(2); the index should be 1.
> Please refer to the attached screenshot of the debugging session:
> !testAllDatanodesReconfig.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-06 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16624:
-
Description: 
While working on HDFS-16619, JUnit reported the following error:

expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
was:<[ From: "false"]>

Debugging showed that the assertion reads the wrong element, outs.get(x).

Please refer to the attached screenshot of the debugging session:

!testAllDatanodesReconfig.png!

> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Attachments: testAllDatanodesReconfig.png
>
>
> While working on HDFS-16619, JUnit reported the following error:
> expected:<[SUCCESS: Changed property dfs.datanode.peer.stats.enabled]> but 
> was:<[ From: "false"]>
> Debugging showed that the assertion reads the wrong element, outs.get(x).
> Please refer to the attached screenshot of the debugging session:
> !testAllDatanodesReconfig.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-06 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16624:
-
Summary: Fix 
org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR  (was: 
Fix 
org.apache.hadoop.hdfs.tools.TestDFSAdmin#org.apache.hadoop.hdfs.tools.TestDFSAdmin
 ERROR)

> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Attachments: testAllDatanodesReconfig.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR

2022-06-06 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16624:
-
Attachment: testAllDatanodesReconfig.png

> Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig ERROR
> 
>
> Key: HDFS-16624
> URL: https://issues.apache.org/jira/browse/HDFS-16624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
> Attachments: testAllDatanodesReconfig.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16624) Fix org.apache.hadoop.hdfs.tools.TestDFSAdmin#org.apache.hadoop.hdfs.tools.TestDFSAdmin ERROR

2022-06-06 Thread fanshilun (Jira)
fanshilun created HDFS-16624:


 Summary: Fix 
org.apache.hadoop.hdfs.tools.TestDFSAdmin#org.apache.hadoop.hdfs.tools.TestDFSAdmin
 ERROR
 Key: HDFS-16624
 URL: https://issues.apache.org/jira/browse/HDFS-16624
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.4.0
Reporter: fanshilun
Assignee: fanshilun






--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16595) Slow peer metrics - add median, mad and upper latency limits

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16595?focusedWorklogId=778781&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778781
 ]

ASF GitHub Bot logged work on HDFS-16595:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 22:41
Start Date: 06/Jun/22 22:41
Worklog Time Spent: 10m 
  Work Description: jojochuang merged PR #4405:
URL: https://github.com/apache/hadoop/pull/4405




Issue Time Tracking
---

Worklog Id: (was: 778781)
Time Spent: 4h  (was: 3h 50m)

> Slow peer metrics - add median, mad and upper latency limits
> 
>
> Key: HDFS-16595
> URL: https://issues.apache.org/jira/browse/HDFS-16595
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Slow datanode metrics include the slow node and its reporting nodes' 
> details. With HDFS-16582, we added the aggregate latency as perceived by the 
> reporting nodes.
> To get more insight into how the outlier slow node's latencies differ from 
> the rest of the nodes, we should also expose the median, the median absolute 
> deviation, and the calculated upper latency limit.
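For illustration, a self-contained sketch of the three statistics named above; the upper-limit formula (median + 3 * MAD) is an assumption for the example, not necessarily the exact formula HDFS-16595 uses:
{code:java}
import java.util.Arrays;

public final class OutlierStatsSketch {
  static double median(double[] v) {
    double[] s = v.clone();
    Arrays.sort(s);
    int n = s.length;
    return n % 2 == 1 ? s[n / 2] : (s[n / 2 - 1] + s[n / 2]) / 2.0;
  }

  public static void main(String[] args) {
    // Latencies reported for one node (ms); values are illustrative.
    double[] latencies = {1.2, 1.5, 1.4, 9.8, 1.3};
    double med = median(latencies);
    double[] dev = new double[latencies.length];
    for (int i = 0; i < latencies.length; i++) {
      dev[i] = Math.abs(latencies[i] - med); // absolute deviation from median
    }
    double mad = median(dev);                // median absolute deviation
    double upperLimit = med + 3 * mad;       // assumed multiplier
    System.out.printf("median=%.2f mad=%.2f upperLimit=%.2f%n",
        med, mad, upperLimit);
  }
}
{code}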



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16595) Slow peer metrics - add median, mad and upper latency limits

2022-06-06 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-16595:
---
Fix Version/s: 3.3.4

> Slow peer metrics - add median, mad and upper latency limits
> 
>
> Key: HDFS-16595
> URL: https://issues.apache.org/jira/browse/HDFS-16595
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Slow datanode metrics include the slow node and its reporting nodes' 
> details. With HDFS-16582, we added the aggregate latency as perceived by the 
> reporting nodes.
> To get more insight into how the outlier slow node's latencies differ from 
> the rest of the nodes, we should also expose the median, the median absolute 
> deviation, and the calculated upper latency limit.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16595) Slow peer metrics - add median, mad and upper latency limits

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16595?focusedWorklogId=778779&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778779
 ]

ASF GitHub Bot logged work on HDFS-16595:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 22:30
Start Date: 06/Jun/22 22:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4405:
URL: https://github.com/apache/hadoop/pull/4405#issuecomment-1147998733

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  6s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 24s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   3m 59s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   2m 37s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   2m 44s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   6m 20s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  27m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 46s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   3m 46s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   6m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  28m  2s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 18s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 219m 11s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4405/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 352m  8s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4405/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4405 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint 
bufcompat |
   | uname | Linux d8161cf91b19 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 35f87e3afa1b311d282cbc600ca3fe298093bcc6 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4405/2/testReport/ |
   | Max. process+thread count | 2194 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4405/2/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Work logged] (HDFS-16619) improve HttpHeaders.Values And HttpHeaders.Names With recommended Class

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16619?focusedWorklogId=778775&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778775
 ]

ASF GitHub Bot logged work on HDFS-16619:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 22:08
Start Date: 06/Jun/22 22:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4406:
URL: https://github.com/apache/hadoop/pull/4406#issuecomment-1147982542

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 42s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 28s |  |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 
911 unchanged - 26 fixed = 911 total (was 937)  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 0 new 
+ 890 unchanged - 26 fixed = 890 total (was 916)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 244m 52s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4406/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 14s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 355m 45s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4406/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4406 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux ad57c51bf068 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8e8536d8c2702b

[jira] [Work logged] (HDFS-16064) HDFS-721 causes DataNode decommissioning to get stuck indefinitely

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16064?focusedWorklogId=778764&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778764
 ]

ASF GitHub Bot logged work on HDFS-16064:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 21:21
Start Date: 06/Jun/22 21:21
Worklog Time Spent: 10m 
  Work Description: KevinWikant commented on PR #4410:
URL: https://github.com/apache/hadoop/pull/4410#issuecomment-1147944328

   The failing unit test is:
   
   ```
   Failed junit tests  |  hadoop.hdfs.tools.TestDFSAdmin
   ```
   
   This seems to be unrelated to my change and looks like a flaky unit test:
   
   ```
   [ERROR] testAllDatanodesReconfig(org.apache.hadoop.hdfs.tools.TestDFSAdmin)  
Time elapsed: 5.489 s  <<< FAILURE!
   org.junit.ComparisonFailure: expected:<[SUCCESS: Changed property 
dfs.datanode.peer.stats.enabled]> but was:<[  From: "false"]>
   at org.junit.Assert.assertEquals(Assert.java:117)
   at org.junit.Assert.assertEquals(Assert.java:146)
   at 
org.apache.hadoop.hdfs.tools.TestDFSAdmin.testAllDatanodesReconfig(TestDFSAdmin.java:1208)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:498)
   at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
   at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
   at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
   at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
   at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
   at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
   at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
   at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
   at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
   at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
   at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
   at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
   at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
   at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
   at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
   ```




Issue Time Tracking
---

Worklog Id: (was: 778764)
Time Spent: 0.5h  (was: 20m)

> HDFS-721 causes DataNode decommissioning to get stuck indefinitely
> --
>
> Key: HDFS-16064
> URL: https://issues.apache.org/jira/browse/HDFS-16064
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Affects Versions: 3.2.1
>Reporter: Kevin Wikant
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Seems that https://issues.apache.org/jira/browse/HDFS-721 was resolved as a 
> non-issue under the assumption that if the namenode & a datanode get into an 
> inconsistent state for a given block pipeline, there should be another 
> datanode available to replicate the block to.
> While testing datanode decommissioning using "dfs.exclude.hosts", I have 
> encountered a scenario where the decommissioning gets stuck indefinitely.
> Below is the progression of events:
>  * there are initially 4 datanodes DN1, DN2, DN3, DN4
>  * scale-down is started by adding DN1 & DN2 to "dfs.exclude.hosts"

[jira] [Work logged] (HDFS-16463) Make dirent cross platform compatible

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?focusedWorklogId=778763&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778763
 ]

ASF GitHub Bot logged work on HDFS-16463:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 21:15
Start Date: 06/Jun/22 21:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4370:
URL: https://github.com/apache/hadoop/pull/4370#issuecomment-1147939325

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  docker  |  19m  3s |  |  Docker failed to build the run-specific 
image yetus/hadoop:tp-20308.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/4370 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/10/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 778763)
Time Spent: 4h 10m  (was: 4h)

> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross platform compatible 
> implementation for dirent.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16463) Make dirent cross platform compatible

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?focusedWorklogId=778758&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778758
 ]

ASF GitHub Bot logged work on HDFS-16463:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 20:54
Start Date: 06/Jun/22 20:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4370:
URL: https://github.com/apache/hadoop/pull/4370#issuecomment-1147917193

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  44m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  67m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 48s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   3m 48s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   3m 48s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 48s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 104m 15s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 246m  6s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4370 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell detsecrets golang |
   | uname | Linux a0be7fdfdda0 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 33292facf5f8c7692e65bad6b4b65a7093c23fe7 |
   | Default Java | Red Hat, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/10/testReport/ |
   | Max. process+thread count | 601 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4370/10/console |
   | versions | git=2.9.5 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 778758)
Time Spent: 4h  (was: 3h 50m)

> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross platform compatible 
> implementation for dirent.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Work logged] (HDFS-16064) HDFS-721 causes DataNode decommissioning to get stuck indefinitely

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16064?focusedWorklogId=778752&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778752
 ]

ASF GitHub Bot logged work on HDFS-16064:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 20:36
Start Date: 06/Jun/22 20:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4410:
URL: https://github.com/apache/hadoop/pull/4410#issuecomment-1147898690

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  13m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 44s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 47s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  1s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 100 unchanged 
- 0 fixed = 103 total (was 100)  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 24s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 250m 34s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 15s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 372m 10s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4410 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux bda194b52c15 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 73dd8a92c19cfd5989dcd0ca61a6f5dfea3d0a97 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | M

[jira] [Commented] (HDFS-16064) HDFS-721 causes DataNode decommissioning to get stuck indefinitely

2022-06-06 Thread Kevin Wikant (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17550619#comment-17550619
 ] 

Kevin Wikant commented on HDFS-16064:
-

Thanks [~it_singer], you are correct that my initial root cause analysis was 
incorrect.

In the past few months I have seen this issue recur multiple times, so I 
decided to do a deeper dive and identified the bug described here: 
[https://github.com/apache/hadoop/pull/4410]

I think the issue described in this ticket is occurring because the corrupt 
replica on DN3 will not be invalidated until DN3 either:
 * restarts & sends a block report
 * sends its next periodic block report (default interval is 6 hours)

So in the worst case, decommissioning in the aforementioned scenario will take 
up to 6 hours to complete, because DN3 may take up to 6 hours to send its next 
block report and have the corrupt replica invalidated. I have not targeted 
fixing this decommissioning blocker scenario because it is arguably expected 
behavior and resolves within at most "dfs.blockreport.intervalMsec". Instead, 
the fix [[https://github.com/apache/hadoop/pull/4410]] targets a more severe 
bug where decommissioning gets blocked indefinitely.

> HDFS-721 causes DataNode decommissioning to get stuck indefinitely
> --
>
> Key: HDFS-16064
> URL: https://issues.apache.org/jira/browse/HDFS-16064
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Affects Versions: 3.2.1
>Reporter: Kevin Wikant
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Seems that https://issues.apache.org/jira/browse/HDFS-721 was resolved as a 
> non-issue under the assumption that if the namenode & a datanode get into an 
> inconsistent state for a given block pipeline, there should be another 
> datanode available to replicate the block to.
> While testing datanode decommissioning using "dfs.exclude.hosts", I have 
> encountered a scenario where the decommissioning gets stuck indefinitely.
> Below is the progression of events:
>  * there are initially 4 datanodes DN1, DN2, DN3, DN4
>  * scale-down is started by adding DN1 & DN2 to "dfs.exclude.hosts"
>  * HDFS block pipelines on DN1 & DN2 must now be replicated to DN3 & DN4 in 
> order to satisfy their minimum replication factor of 2
>  * during this replication process 
> https://issues.apache.org/jira/browse/HDFS-721 is encountered which causes 
> the following inconsistent state:
>  ** DN3 thinks it has the block pipeline in FINALIZED state
>  ** the namenode does not think DN3 has the block pipeline
> {code:java}
> 2021-06-06 10:38:23,604 INFO org.apache.hadoop.hdfs.server.datanode.DataNode 
> (DataXceiver for client  at /DN2:45654 [Receiving block BP-YYY:blk_XXX]): 
> DN3:9866:DataXceiver error processing WRITE_BLOCK operation  src: /DN2:45654 
> dst: /DN3:9866; 
> org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block 
> BP-YYY:blk_XXX already exists in state FINALIZED and thus cannot be created.
> {code}
>  * the replication is attempted again, but:
>  ** DN4 has the block
>  ** DN1 and/or DN2 have the block, but don't count towards the minimum 
> replication factor because they are being decommissioned
>  ** DN3 does not have the block & cannot have the block replicated to it 
> because of HDFS-721
>  * the namenode repeatedly tries to replicate the block to DN3 & repeatedly 
> fails, this continues indefinitely
>  * therefore DN4 is the only live datanode with the block & the minimum 
> replication factor of 2 cannot be satisfied
>  * because the minimum replication factor cannot be satisfied for the 
> block(s) being moved off DN1 & DN2, the datanode decommissioning can never be 
> completed 
> {code:java}
> 2021-06-06 10:39:10,106 INFO BlockStateChange (DatanodeAdminMonitor-0): 
> Block: blk_XXX, Expected Replicas: 2, live replicas: 1, corrupt replicas: 0, 
> decommissioned replicas: 0, decommissioning replicas: 2, maintenance 
> replicas: 0, live entering maintenance replicas: 0, excess replicas: 0, Is 
> Open File: false, Datanodes having this block: DN1:9866 DN2:9866 DN4:9866 , 
> Current Datanode: DN1:9866, Is current datanode decommissioning: true, Is 
> current datanode entering maintenance: false
> ...
> 2021-06-06 10:57:10,105 INFO BlockStateChange (DatanodeAdminMonitor-0): 
> Block: blk_XXX, Expected Replicas: 2, live replicas: 1, corrupt replicas: 0, 
> decommissioned replicas: 0, decommissioning replicas: 2, maintenance 
> replicas: 0, live entering maintenance replicas: 0, excess replicas: 0, Is 
> Open File: false, Datanodes having this block: DN1:9866 DN2:9866 DN4:9866 , 
> Current Datanode: DN2:9866, Is current datanode decommissioning: true, Is 
> current datanode entering maintenance: false

[jira] [Work logged] (HDFS-16623) IllegalArgumentException in LifelineSender

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16623?focusedWorklogId=778725&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778725
 ]

ASF GitHub Bot logged work on HDFS-16623:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 18:42
Start Date: 06/Jun/22 18:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4409:
URL: https://github.com/apache/hadoop/pull/4409#issuecomment-1147767240

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 43s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  4s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 24s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 246m  1s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 13s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 356m  0s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4409/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4409 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 81481f6ff660 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b21f7354ea3697413259bfae677b81f41e68b1c3 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4409/1/testReport/ |
   | Max. process+thread count | 3917 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/ha

[jira] [Work started] (HDFS-16463) Make dirent cross platform compatible

2022-06-06 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16463 started by Gautham Banasandra.
-
> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross platform compatible 
> implementation for dirent.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16463) Make dirent cross platform compatible

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?focusedWorklogId=778696&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778696
 ]

ASF GitHub Bot logged work on HDFS-16463:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 16:47
Start Date: 06/Jun/22 16:47
Worklog Time Spent: 10m 
  Work Description: goiri commented on code in PR #4370:
URL: https://github.com/apache/hadoop/pull/4370#discussion_r890340126


##
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/x-platform/c-api/dirent.cc:
##
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <cerrno>
+#include <cstring>
+#include <iostream>
+#include <string>
+#include <system_error>
+#include <variant>
+
+#include "x-platform/c-api/dirent.h"
+#include "x-platform/dirent.h"
+
+#if defined(WIN32) && defined(__cplusplus)

Review Comment:
   I'm OK with the extern; what looks weird is using #if/#endif to wrap it.
   Isn't there a cleaner way where we keep the core in one header file, have 
the extern in another, and include one or the other accordingly?
   ```
   extern "C" {
   #include 
   }
   ```
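
One way to read that suggestion, sketched below with hypothetical file names 
(`dirent_core.h` is not part of the PR; include guards are elided for 
brevity): keep the plain declarations in a core header, and let a thin wrapper 
header add the linkage guard, so the implementation never needs the #if/#endif 
wrapping.

```
// dirent_core.h -- hypothetical core header: plain declarations only.
typedef struct DIR {
  void *x_platform_dirent_ptr;
} DIR;

DIR *opendir(const char *dir_path);
struct dirent *readdir(DIR *dir);
int closedir(DIR *dir);

// dirent.h -- hypothetical wrapper: adds C linkage only when the including
// translation unit is C++.
#ifdef __cplusplus
extern "C" {
#endif
#include "dirent_core.h"
#ifdef __cplusplus
}
#endif
```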
   





Issue Time Tracking
---

Worklog Id: (was: 778696)
Time Spent: 3h 50m  (was: 3h 40m)

> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross platform compatible 
> implementation for dirent.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16463) Make dirent cross platform compatible

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?focusedWorklogId=778694&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778694
 ]

ASF GitHub Bot logged work on HDFS-16463:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 16:46
Start Date: 06/Jun/22 16:46
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on code in PR #4370:
URL: https://github.com/apache/hadoop/pull/4370#discussion_r890339355


##
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/x-platform/c-api/dirent.cc:
##
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <cerrno>
+#include <cstring>
+#include <iostream>
+#include <string>
+#include <system_error>
+#include <variant>
+
+#include "x-platform/c-api/dirent.h"
+#include "x-platform/dirent.h"
+
+#if defined(WIN32) && defined(__cplusplus)
+extern "C" {
+#endif
+
+DIR *opendir(const char *dir_path) {
+  const auto dir = new DIR;
+  dir->x_platform_dirent_ptr = new XPlatform::Dirent(dir_path);
+  return dir;
+}
+
+struct dirent *readdir(DIR *dir) {
+  /*
+   * We use a static variable to hold the dirent, so that we align with
+   * readdir's implementation in the dirent.h header on Linux.
+   */
+  static struct dirent static_dir_entry;
+
+  // Get the XPlatform::Dirent instance and move the iterator.
+  const auto x_platform_dirent =
+      static_cast<XPlatform::Dirent *>(dir->x_platform_dirent_ptr);
+  const auto dir_entry = x_platform_dirent->NextFile();
+
+  // End of iteration.
+  if (std::holds_alternative<std::monostate>(dir_entry)) {
+    return nullptr;
+  }
+
+  // Error in iteration.
+  if (std::holds_alternative<std::error_code>(dir_entry)) {
+    const auto err = std::get<std::error_code>(dir_entry);
+    errno = err.value();
+
+#ifdef X_PLATFORM_C_API_DIRENT_DEBUG
+    std::cerr << "Error in listing directory: " << err.message() << std::endl;

Review Comment:
   Yeah, I think it'll be useful for someone trying to debug any related 
issues. The same approach has been used already - 
https://github.com/apache/hadoop/blob/a234d00c1ce57427202d4c9587f891ec0164d10c/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L396-L398
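
For context, here is a minimal sketch of how the shim is meant to be consumed, 
written as the classic opendir/readdir/closedir walk. The path "/tmp" is an 
arbitrary example and error handling is elided:

```
#include <cstdio>

#include "x-platform/c-api/dirent.h"

// POSIX-style directory walk against the shim. Note that readdir returns a
// pointer to a static dirent, so each call overwrites the previous entry;
// copy d_name out if it must outlive the next call.
int main() {
  DIR *dir = opendir("/tmp");  // arbitrary example path

  for (struct dirent *entry = readdir(dir); entry != nullptr;
       entry = readdir(dir)) {
    std::printf("%s\n", entry->d_name);
  }

  return closedir(dir);
}
```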





Issue Time Tracking
---

Worklog Id: (was: 778694)
Time Spent: 3h 40m  (was: 3.5h)

> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross platform compatible 
> implementation for dirent.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16576) Remove unused Imports in Hadoop HDFS project

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16576?focusedWorklogId=778693&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778693
 ]

ASF GitHub Bot logged work on HDFS-16576:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 16:45
Start Date: 06/Jun/22 16:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4389:
URL: https://github.com/apache/hadoop/pull/4389#issuecomment-1147657748

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 32 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 28s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   6m 16s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   6m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  34m 56s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   5m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  10m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  6s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m  0s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   6m  0s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   5m 45s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 22s |  |  hadoop-hdfs-project: The 
patch generated 0 new + 506 unchanged - 63 fixed = 506 total (was 569)  |
   | +1 :green_heart: |  mvnsite  |   4m 33s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m  1s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4389/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   4m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   9m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 37s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 251m 53s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   6m 36s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 22s |  |  hadoop-hdfs-nfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  22m 14s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 482m 33s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4389/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4389 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux

[jira] [Work logged] (HDFS-16463) Make dirent cross platform compatible

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?focusedWorklogId=778690&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778690
 ]

ASF GitHub Bot logged work on HDFS-16463:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 16:44
Start Date: 06/Jun/22 16:44
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on code in PR #4370:
URL: https://github.com/apache/hadoop/pull/4370#discussion_r890332363


##
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/x-platform/c-api/dirent.cc:
##
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <cerrno>
+#include <cstring>
+#include <iostream>
+#include <string>
+#include <system_error>
+#include <variant>
+
+#include "x-platform/c-api/dirent.h"
+#include "x-platform/dirent.h"
+
+#if defined(WIN32) && defined(__cplusplus)

Review Comment:
   The C++ compiler will mangle the method signatures. Since these APIs are 
invoked in 
[jni_helper.c](https://github.com/apache/hadoop/blob/a234d00c1ce57427202d4c9587f891ec0164d10c/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c)
 (which is compiled by a C compiler), the linker won't be able to match the 
plain C references against the mangled C++ symbols. Thus, we need to wrap the 
implementation in an `extern "C"` block to prevent the C++ compiler from 
mangling these APIs; otherwise it results in an `undefined reference` error.
   
   This is the standard approach whenever C code needs to call into C++ code.





Issue Time Tracking
---

Worklog Id: (was: 778690)
Time Spent: 3.5h  (was: 3h 20m)

> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross platform compatible 
> implementation for dirent.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16463) Make dirent cross platform compatible

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?focusedWorklogId=778689&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778689
 ]

ASF GitHub Bot logged work on HDFS-16463:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 16:43
Start Date: 06/Jun/22 16:43
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on code in PR #4370:
URL: https://github.com/apache/hadoop/pull/4370#discussion_r890332363


##
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/x-platform/c-api/dirent.cc:
##
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <cerrno>
+#include <cstring>
+#include <iostream>
+#include <string>
+#include <system_error>
+#include <variant>
+
+#include "x-platform/c-api/dirent.h"
+#include "x-platform/dirent.h"
+
+#if defined(WIN32) && defined(__cplusplus)

Review Comment:
   The C++ compiler will mangle the method signatures. Since these APIs are 
invoked in 
[jni_helper.c](https://github.com/apache/hadoop/blob/a234d00c1ce57427202d4c9587f891ec0164d10c/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.h)
 (which is compiled by a C compiler), the linker won't be able to match the 
plain C references against the mangled C++ symbols. Thus, we need to wrap the 
implementation in an `extern "C"` block to prevent the C++ compiler from 
mangling these APIs; otherwise it results in an `undefined reference` error.
   
   This is the standard approach whenever C code needs to call into C++ code.





Issue Time Tracking
---

Worklog Id: (was: 778689)
Time Spent: 3h 20m  (was: 3h 10m)

> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross platform compatible 
> implementation for dirent.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16463) Make dirent cross platform compatible

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?focusedWorklogId=778688&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778688
 ]

ASF GitHub Bot logged work on HDFS-16463:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 16:41
Start Date: 06/Jun/22 16:41
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on code in PR #4370:
URL: https://github.com/apache/hadoop/pull/4370#discussion_r890335244


##
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/x-platform/c-api/dirent.h:
##
@@ -0,0 +1,93 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef NATIVE_LIBHDFSPP_LIB_CROSS_PLATFORM_C_API_DIRENT_H
+#define NATIVE_LIBHDFSPP_LIB_CROSS_PLATFORM_C_API_DIRENT_H
+
+/*
+ * We will use XPlatform's dirent on Windows or when the macro
+ * USE_X_PLATFORM_DIRENT is defined.
+ */
+#if defined(WIN32) || defined(USE_X_PLATFORM_DIRENT)
+
+/*
+ * We will use extern "C" only on Windows.
+ */
+#if defined(WIN32) && defined(__cplusplus)
+extern "C" {
+#endif
+
+/**
+ * DIR struct holds the pointer to XPlatform::Dirent instance. Since this will
+ * be used in C, we can't hold the pointer to XPlatform::Dirent. We're working
+ * around this by using a void pointer and casting it to XPlatform::Dirent when
+ * needed in C++.
+ */
+typedef struct DIR {
+  void *x_platform_dirent_ptr;
+} DIR;
+
+/**
+ * dirent struct contains the name of the file/folder while iterating through
+ * the directory's children.
+ */
+struct dirent {
+  char d_name[256];
+};
+
+/**
+ * Opens a directory for iteration. Internally, it instantiates DIR struct for
+ * the given path. closedir must be called on the returned pointer to DIR struct
+ * when done.
+ *
+ * @param dir_path The path to the directory to iterate through.
+ * @return A pointer to the DIR struct.
+ */
+DIR *opendir(const char *dir_path);
+
+/**
+ * For iterating through the children of the directory pointed to by the DIR
+ * struct pointer.
+ *
+ * @param dir The pointer to the DIR struct.
+ * @return A pointer to dirent struct containing the name of the current child
+ * file/folder.
+ */
+struct dirent *readdir(DIR *dir);
+
+/**
+ * De-allocates the XPlatform::Dirent instance pointed to by the DIR pointer.
+ *
+ * @param dir The pointer to DIR struct to close.
+ * @return 0 if successful.
+ */
+int closedir(DIR *dir);
+
+#if defined(WIN32) && defined(__cplusplus)
+}
+#endif
+
+#else
+/*
+ * For non-Windows environments, we use the dirent.h header itself.
+ */
+#include <dirent.h>

Review Comment:
   Done.
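   
   For context, here is a minimal usage sketch of the shim declared above 
(a hypothetical example, not part of the patch), assuming the standard dirent 
conventions that `opendir` returns `NULL` on failure and `readdir` returns 
`NULL` once the directory is exhausted:
   
   ```c
   #include <stdio.h>
   
   #include "x-platform/c-api/dirent.h"
   
   /* Prints the names of the children of dir_path using the shim. */
   int list_children(const char *dir_path) {
     DIR *dir = opendir(dir_path);
     if (dir == NULL) {
       return -1; /* could not open the directory */
     }
   
     struct dirent *entry;
     while ((entry = readdir(dir)) != NULL) {
       printf("%s\n", entry->d_name); /* name of the current child */
     }
     return closedir(dir); /* frees the underlying XPlatform::Dirent */
   }
   ```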





Issue Time Tracking
---

Worklog Id: (was: 778688)
Time Spent: 3h 10m  (was: 3h)

> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in the HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross-platform compatible 
> implementation for dirent.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16463) Make dirent cross platform compatible

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16463?focusedWorklogId=778686&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778686
 ]

ASF GitHub Bot logged work on HDFS-16463:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 16:38
Start Date: 06/Jun/22 16:38
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on code in PR #4370:
URL: https://github.com/apache/hadoop/pull/4370#discussion_r890332363


##
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/x-platform/c-api/dirent.cc:
##
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "x-platform/c-api/dirent.h"
+#include "x-platform/dirent.h"
+
+#if defined(WIN32) && defined(__cplusplus)

Review Comment:
   The C++ compiler will mangle the method signatures. Since these APIs are 
invoked in jni_helper.c (which is compiled by a C compiler), the C linker won't 
be able to find the mangled versions of these methods. Thus, we need to wrap 
the implementation in an `extern "C"` block to prevent the C++ compiler from 
mangling these APIs; otherwise the build fails with an `undefined reference` 
error.
   
   This is the standard approach whenever C code tries to invoke C++ code.
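   
   To make the void-pointer workaround concrete, here is a rough sketch of how 
`opendir`/`closedir` could bridge into the C++ class (the platform guards are 
omitted for brevity, and the `XPlatform::Dirent` constructor shown here is an 
assumed signature, not the actual libhdfs++ interface):
   
   ```cpp
   #include "x-platform/c-api/dirent.h"
   #include "x-platform/dirent.h"
   
   extern "C" {
   
   DIR *opendir(const char *dir_path) {
     // Hide the C++ object behind the opaque void pointer in the C struct.
     auto *dir = new DIR{};
     dir->x_platform_dirent_ptr = new XPlatform::Dirent(dir_path);
     return dir;
   }
   
   int closedir(DIR *dir) {
     // Cast the void pointer back to the concrete C++ type before deleting.
     delete static_cast<XPlatform::Dirent *>(dir->x_platform_dirent_ptr);
     delete dir;
     return 0;
   }
   
   }  // extern "C"
   ```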





Issue Time Tracking
---

Worklog Id: (was: 778686)
Time Spent: 3h  (was: 2h 50m)

> Make dirent cross platform compatible
> -
>
> Key: HDFS-16463
> URL: https://issues.apache.org/jira/browse/HDFS-16463
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> [jnihelper.c|https://github.com/apache/hadoop/blob/1fed18bb2d8ac3dbaecc3feddded30bed918d556/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c#L28]
>  in the HDFS native client uses *dirent.h*. This header file isn't available on 
> Windows. Thus, we need to replace this with a cross-platform compatible 
> implementation for dirent.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HDFS-16611) improve TestSeveralNameNodes#testCircularLinkedListWrites Params

2022-06-06 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16611 stopped by fanshilun.

> improve TestSeveralNameNodes#testCircularLinkedListWrites Params
> ---
>
> Key: HDFS-16611
> URL: https://issues.apache.org/jira/browse/HDFS-16611
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> While working on HDFS-16590, JUnit tests often failed, and the following 
> error messages frequently appeared in 
> org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes#testCircularLinkedListWrites.
> The test runs very close to success: in all three runs below, the current 
> item is approximately equal to the target length. I think reducing 
> LIST_LENGTH and prolonging the RUNTIME can effectively increase the success 
> rate of this test.
> Reducing LIST_LENGTH does not change the purpose of the test; it still 
> exercises circular writes in the case of a NN failover.
>  * 1st run
> {code:java}
> 1st run
> [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 114.252 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 43
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 42
>done: false
> ] expected:<0> but was:<3>
> {code}
>  * 2nd run
> {code:java}
>  [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 110.349 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 50
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 49
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 49
>done: false
> ] expected:<0> but was:<3>
> {code}
>  * 3rd run
> {code:java}
> [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 109.364 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 46
>done: false
> ] expected:<0> but was:<3>
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16611) improve TestSeveralNameNodes#testCircularLinkedListWrites Params

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16611?focusedWorklogId=778683&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778683
 ]

ASF GitHub Bot logged work on HDFS-16611:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 16:13
Start Date: 06/Jun/22 16:13
Worklog Time Spent: 10m 
  Work Description: slfan1989 closed pull request #4387: HDFS-16611. improve 
TestSeveralNameNodes#testCircularLinkedListWrites Params
URL: https://github.com/apache/hadoop/pull/4387




Issue Time Tracking
---

Worklog Id: (was: 778683)
Time Spent: 0.5h  (was: 20m)

> improve TestSeveralNameNodes#testCircularLinkedListWrites Params
> ---
>
> Key: HDFS-16611
> URL: https://issues.apache.org/jira/browse/HDFS-16611
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> While working on HDFS-16590, JUnit tests often failed, and the following 
> error messages frequently appeared in 
> org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes#testCircularLinkedListWrites.
> The test runs very close to success: in all three runs below, the current 
> item is approximately equal to the target length. I think reducing 
> LIST_LENGTH and prolonging the RUNTIME can effectively increase the success 
> rate of this test.
> Reducing LIST_LENGTH does not change the purpose of the test; it still 
> exercises circular writes in the case of a NN failover.
>  * 1st run
> {code:java}
> 1st run
> [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 114.252 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 43
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 42
>done: false
> ] expected:<0> but was:<3>
> {code}
>  * 2nd run
> {code:java}
>  [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 110.349 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 50
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 49
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 49
>done: false
> ] expected:<0> but was:<3>
> {code}
>  * 3rd run
> {code:java}
> [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 109.364 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 46
>done: false
> ] expected:<0> but was:<3>
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16611) improve TestSeveralNameNodes#testCircularLinkedListWrites Params

2022-06-06 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun resolved HDFS-16611.
--
Resolution: Won't Fix

> improve TestSeveralNameNodes#testCircularLinkedListWrites Params
> ---
>
> Key: HDFS-16611
> URL: https://issues.apache.org/jira/browse/HDFS-16611
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> While working on HDFS-16590, JUnit tests often failed, and the following 
> error messages frequently appeared in 
> org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes#testCircularLinkedListWrites.
> The test runs very close to success: in all three runs below, the current 
> item is approximately equal to the target length. I think reducing 
> LIST_LENGTH and prolonging the RUNTIME can effectively increase the success 
> rate of this test.
> Reducing LIST_LENGTH does not change the purpose of the test; it still 
> exercises circular writes in the case of a NN failover.
>  * 1st run
> {code:java}
> 1st run
> [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 114.252 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 43
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 42
>done: false
> ] expected:<0> but was:<3>
> {code}
>  * 2nd run
> {code:java}
>  [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 110.349 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 50
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 49
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 49
>done: false
> ] expected:<0> but was:<3>
> {code}
>  * 3rd run
> {code:java}
> [ERROR] 
> testCircularLinkedListWrites(org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes)
>   Time elapsed: 109.364 s  <<< FAILURE!
> java.lang.AssertionError: 
> Some writers didn't complete in expected runtime! Current writer 
> state:[Circular Writer:
>directory: /test-0
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-1
>target length: 50
>current item: 47
>done: false
> , Circular Writer:
>directory: /test-2
>target length: 50
>current item: 46
>done: false
> ] expected:<0> but was:<3>
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16605) Improve Code With Lambda in hadoop-hdfs-rbf module

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16605?focusedWorklogId=778678&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778678
 ]

ASF GitHub Bot logged work on HDFS-16605:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 16:05
Start Date: 06/Jun/22 16:05
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4375:
URL: https://github.com/apache/hadoop/pull/4375#issuecomment-1147618712

   @goiri Please help me review the code again, thank you very much!




Issue Time Tracking
---

Worklog Id: (was: 778678)
Time Spent: 1h 20m  (was: 1h 10m)

> Improve Code With Lambda in hadoop-hdfs-rbf module
> --
>
> Key: HDFS-16605
> URL: https://issues.apache.org/jira/browse/HDFS-16605
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16064) HDFS-721 causes DataNode decommissioning to get stuck indefinitely

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16064:
--
Labels: pull-request-available  (was: )

> HDFS-721 causes DataNode decommissioning to get stuck indefinitely
> --
>
> Key: HDFS-16064
> URL: https://issues.apache.org/jira/browse/HDFS-16064
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Affects Versions: 3.2.1
>Reporter: Kevin Wikant
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It seems that https://issues.apache.org/jira/browse/HDFS-721 was resolved as a 
> non-issue under the assumption that if the namenode & a datanode get into an 
> inconsistent state for a given block pipeline, there should be another 
> datanode available to replicate the block to.
> While testing datanode decommissioning using "dfs.exclude.hosts", I 
> encountered a scenario where the decommissioning gets stuck indefinitely.
> Below is the progression of events:
>  * there are initially 4 datanodes DN1, DN2, DN3, DN4
>  * scale-down is started by adding DN1 & DN2 to "dfs.exclude.hosts"
>  * HDFS block pipelines on DN1 & DN2 must now be replicated to DN3 & DN4 in 
> order to satisfy their minimum replication factor of 2
>  * during this replication process 
> https://issues.apache.org/jira/browse/HDFS-721 is encountered which causes 
> the following inconsistent state:
>  ** DN3 thinks it has the block pipeline in FINALIZED state
>  ** the namenode does not think DN3 has the block pipeline
> {code:java}
> 2021-06-06 10:38:23,604 INFO org.apache.hadoop.hdfs.server.datanode.DataNode 
> (DataXceiver for client  at /DN2:45654 [Receiving block BP-YYY:blk_XXX]): 
> DN3:9866:DataXceiver error processing WRITE_BLOCK operation  src: /DN2:45654 
> dst: /DN3:9866; 
> org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block 
> BP-YYY:blk_XXX already exists in state FINALIZED and thus cannot be created.
> {code}
>  * the replication is attempted again, but:
>  ** DN4 has the block
>  ** DN1 and/or DN2 have the block, but don't count towards the minimum 
> replication factor because they are being decommissioned
>  ** DN3 does not have the block & cannot have the block replicated to it 
> because of HDFS-721
>  * the namenode repeatedly tries to replicate the block to DN3 & repeatedly 
> fails, this continues indefinitely
>  * therefore DN4 is the only live datanode with the block & the minimum 
> replication factor of 2 cannot be satisfied
>  * because the minimum replication factor cannot be satisfied for the 
> block(s) being moved off DN1 & DN2, the datanode decommissioning can never be 
> completed 
> {code:java}
> 2021-06-06 10:39:10,106 INFO BlockStateChange (DatanodeAdminMonitor-0): 
> Block: blk_XXX, Expected Replicas: 2, live replicas: 1, corrupt replicas: 0, 
> decommissioned replicas: 0, decommissioning replicas: 2, maintenance 
> replicas: 0, live entering maintenance replicas: 0, excess replicas: 0, Is 
> Open File: false, Datanodes having this block: DN1:9866 DN2:9866 DN4:9866 , 
> Current Datanode: DN1:9866, Is current datanode decommissioning: true, Is 
> current datanode entering maintenance: false
> ...
> 2021-06-06 10:57:10,105 INFO BlockStateChange (DatanodeAdminMonitor-0): 
> Block: blk_XXX, Expected Replicas: 2, live replicas: 1, corrupt replicas: 0, 
> decommissioned replicas: 0, decommissioning replicas: 2, maintenance 
> replicas: 0, live entering maintenance replicas: 0, excess replicas: 0, Is 
> Open File: false, Datanodes having this block: DN1:9866 DN2:9866 DN4:9866 , 
> Current Datanode: DN2:9866, Is current datanode decommissioning: true, Is 
> current datanode entering maintenance: false
> {code}
> Being stuck in decommissioning state forever is not an intended behavior of 
> DataNode decommissioning.
> A few potential solutions:
>  * Address the root cause of the problem which is an inconsistent state 
> between namenode & datanode: https://issues.apache.org/jira/browse/HDFS-721
>  * Detect when datanode decommissioning is stuck due to lack of available 
> datanodes for satisfying the minimum replication factor, then recover by 
> re-enabling the datanodes being decommissioned
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16064) HDFS-721 causes DataNode decommissioning to get stuck indefinitely

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16064?focusedWorklogId=778639&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778639
 ]

ASF GitHub Bot logged work on HDFS-16064:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 14:22
Start Date: 06/Jun/22 14:22
Worklog Time Spent: 10m 
  Work Description: KevinWikant opened a new pull request, #4410:
URL: https://github.com/apache/hadoop/pull/4410

   HDFS-16064. Determine when to invalidate corrupt replicas based on number of 
usable replicas
   
   ### Description of PR
   
   Bug fix for a recurring HDFS bug which can result in datanodes being 
unable to complete decommissioning indefinitely. In short, the bug is a 
chicken-and-egg problem where:
   - in order for a datanode to be decommissioned its blocks must be 
sufficiently replicated
   - datanode cannot sufficiently replicate some block(s) because of corrupt 
block replicas on target datanodes
   - corrupt block replicas will not be invalidated because the block(s) are 
not sufficiently replicated
   
   In this scenario, the block(s) are sufficiently replicated but the logic the 
Namenode uses to determine if a block is sufficiently replicated is flawed.
   
   To understand the bug further we must first establish some background 
information.
   
    #### Background Information
   
   Givens:
   - FSDataOutputStream is being used to write the HDFS file, under the hood 
this uses a class DataStreamer
   - for the sake of example we will say the HDFS file has a replication factor 
of 2, though this is not a requirement to reproduce the issue
   - the file is being appended to intermittently over an extended period of 
time (in general, this issue needs minutes/hours to reproduce)
   - HDFS is configured with typical default configurations
   
   Under certain scenarios the DataStreamer client can detect a bad link when 
trying to append to the block pipeline, in this case the DataStreamer client 
can shift the block pipeline by replacing the bad link with a new datanode. 
When this happens the replica on the datanode that was shifted away from 
becomes corrupted because it no longer has the latest generation stamp for the 
block. As a more concrete example:
   - DataStreamer client creates block pipeline on datanodes A & B, each have a 
block replica with generation stamp 1
   - DataStreamer client tries to append the block pipeline by sending block 
transfer (with generation stamp 2) to datanode A
   - Datanode A succeeds in writing the block transfer & then attempts to 
forward the transfer to datanode B
   - Datanode B fails the transfer for some reason and responds with a 
PipelineAck failure code
   - Datanode A sends a PipelineAck to DataStreamer indicating datanode A 
succeeded in the append & datanode B failed in the append. The DataStreamer 
detects datanode B as a bad link which will be replaced before the next append 
operation
   - at this point datanode A has live replica with generation stamp 2 & 
datanode B has corrupt replica with generation stamp 1
   - the next time DataStreamer tries to append the block it will call Namenode 
"getAdditionalDatanode" API which returns some other datanode C
   - DataStreamer sends data transfer (with generation stamp 3) to the new 
block pipeline containing datanodes A & C, the append succeeds to both datanodes
   - end state is that:
 - datanodes A & C have live replicas with latest generation stamp 3
 - datanode B has a corrupt replica because its lagging behind with 
generation stamp 1
   
   The key behavior being highlighted here is that when the DataStreamer client 
shifts the block pipeline due to append failures on a subset of the datanodes 
in the pipeline, a corrupt block replica gets left over on the datanode that was 
shifted away from.
   
   This corrupt block replica makes the datanode ineligible as a replication 
target for the block because of the following exception:
   
   ```
   2021-06-06 10:38:23,604 INFO org.apache.hadoop.hdfs.server.datanode.DataNode 
(DataXceiver for client  at /DN2:45654 [Receiving block BP-YYY:blk_XXX]): 
DN3:9866:DataXceiver error processing WRITE_BLOCK operation  src: /DN2:45654 
dst: /DN3:9866; 
org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block 
BP-YYY:blk_XXX already exists in state FINALIZED and thus cannot be created.
   ```
   
   What typically will occur is that these corrupt block replicas will be 
invalidated by the Namenode, which will cause the corrupt replica to be 
deleted on the datanode, thus allowing the datanode to once again be a 
replication target for the block. Note that the Namenode will not identify the 
corrupt block replica until the datanode sends its next block report, this can 
take up to 6 hours with the default block report interval.
   
   As long as there is 1 live replica of the block, all the corrupt replicas 
sh

[jira] [Work logged] (HDFS-16623) IllegalArgumentException in LifelineSender

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16623?focusedWorklogId=778600&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778600
 ]

ASF GitHub Bot logged work on HDFS-16623:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 12:45
Start Date: 06/Jun/22 12:45
Worklog Time Spent: 10m 
  Work Description: ZanderXu opened a new pull request, #4409:
URL: https://github.com/apache/hadoop/pull/4409

   Jira: [HDFS-16623](https://issues.apache.org/jira/browse/HDFS-16623). Fixes 
a bug to avoid an IllegalArgumentException in the LifelineSender.
   
   In our production environment, an IllegalArgumentException occurred in the 
LifelineSender at one DataNode which was undergoing GC at that time.
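   
   A minimal guard against the negative sleep value would look like the sketch 
below; this is an illustrative assumption, not necessarily the exact change in 
this PR:
   
   ```java
   // Sketch: clamp the wait time so that a long GC pause, which can push the
   // lifeline schedule into the past, cannot hand Thread.sleep() a negative
   // argument (Thread.sleep throws IllegalArgumentException for negatives).
   long waitTime = Math.max(0L, scheduler.getLifelineWaitTime());
   Thread.sleep(waitTime);
   ```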
   
   




Issue Time Tracking
---

Worklog Id: (was: 778600)
Remaining Estimate: 0h
Time Spent: 10m

> IllegalArgumentException in LifelineSender
> --
>
> Key: HDFS-16623
> URL: https://issues.apache.org/jira/browse/HDFS-16623
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In our production environment, an IllegalArgumentException occurred in the 
> LifelineSender at one DataNode which was undergoing GC at that time.
> The buggy code is at line 1060 of BPServiceActor.java: the computed sleep 
> time can be negative.
> {code:java}
> while (shouldRun()) {
>   try {
>     if (lifelineNamenode == null) {
>       lifelineNamenode = dn.connectToLifelineNN(lifelineNnAddr);
>     }
>     sendLifelineIfDue();
>     Thread.sleep(scheduler.getLifelineWaitTime());
>   } catch (InterruptedException e) {
>     Thread.currentThread().interrupt();
>   } catch (IOException e) {
>     LOG.warn("IOException in LifelineSender for " + BPServiceActor.this, e);
>   }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16623) IllegalArgumentException in LifelineSender

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16623:
--
Labels: pull-request-available  (was: )

> IllegalArgumentException in LifelineSender
> --
>
> Key: HDFS-16623
> URL: https://issues.apache.org/jira/browse/HDFS-16623
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In our production environment, an IllegalArgumentException occurred in the 
> LifelineSender at one DataNode which was undergoing GC at that time.
> The buggy code is at line 1060 of BPServiceActor.java: the computed sleep 
> time can be negative.
> {code:java}
> while (shouldRun()) {
>   try {
>     if (lifelineNamenode == null) {
>       lifelineNamenode = dn.connectToLifelineNN(lifelineNnAddr);
>     }
>     sendLifelineIfDue();
>     Thread.sleep(scheduler.getLifelineWaitTime());
>   } catch (InterruptedException e) {
>     Thread.currentThread().interrupt();
>   } catch (IOException e) {
>     LOG.warn("IOException in LifelineSender for " + BPServiceActor.this, e);
>   }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16623) IllegalArgumentException in LifelineSender

2022-06-06 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu updated HDFS-16623:

Description: 
In our production environment, an IllegalArgumentException occurred in the 
LifelineSender at one DataNode which was undergoing GC at that time.
The buggy code is at line 1060 of BPServiceActor.java: the computed sleep time 
can be negative.

{code:java}
while (shouldRun()) {
  try {
    if (lifelineNamenode == null) {
      lifelineNamenode = dn.connectToLifelineNN(lifelineNnAddr);
    }
    sendLifelineIfDue();
    Thread.sleep(scheduler.getLifelineWaitTime());
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
  } catch (IOException e) {
    LOG.warn("IOException in LifelineSender for " + BPServiceActor.this, e);
  }
}
{code}


  was:
In our production environment, an IllegalArgumentException occurred in the 
LifelineSender at one DataNode which was undergoing GC at that time.
The buggy code is at line 1060 of BPServiceActor.java: the computed sleep time 
can be negative.

{code:java}
 while (shouldRun()) {
try {
  if (lifelineNamenode == null) {
lifelineNamenode = dn.connectToLifelineNN(lifelineNnAddr);
  }
  sendLifelineIfDue();
  Thread.sleep(scheduler.getLifelineWaitTime());
} catch (InterruptedException e) {
  Thread.currentThread().interrupt();
} catch (IOException e) {
  LOG.warn("IOException in LifelineSender for " + BPServiceActor.this,
  e);
}
  }
{code}



> IllegalArgumentException in LifelineSender
> --
>
> Key: HDFS-16623
> URL: https://issues.apache.org/jira/browse/HDFS-16623
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>
> In our production environment, an IllegalArgumentException occurred in the 
> LifelineSender at one DataNode which was undergoing GC at that time.
> The buggy code is at line 1060 of BPServiceActor.java: the computed sleep 
> time can be negative.
> {code:java}
> while (shouldRun()) {
>   try {
>     if (lifelineNamenode == null) {
>       lifelineNamenode = dn.connectToLifelineNN(lifelineNnAddr);
>     }
>     sendLifelineIfDue();
>     Thread.sleep(scheduler.getLifelineWaitTime());
>   } catch (InterruptedException e) {
>     Thread.currentThread().interrupt();
>   } catch (IOException e) {
>     LOG.warn("IOException in LifelineSender for " + BPServiceActor.this, e);
>   }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16623) IllegalArgumentException in LifelineSender

2022-06-06 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu updated HDFS-16623:

Description: 
In our production environment, an IllegalArgumentException occurred in the 
LifelineSender at one DataNode which was undergoing GC at that time.
The buggy code is at line 1060 of BPServiceActor.java: the computed sleep time 
can be negative.

{code:java}
 while (shouldRun()) {
try {
  if (lifelineNamenode == null) {
lifelineNamenode = dn.connectToLifelineNN(lifelineNnAddr);
  }
  sendLifelineIfDue();
  Thread.sleep(scheduler.getLifelineWaitTime());
} catch (InterruptedException e) {
  Thread.currentThread().interrupt();
} catch (IOException e) {
  LOG.warn("IOException in LifelineSender for " + BPServiceActor.this,
  e);
}
  }
{code}


  was:In our production environment, an IllegalArgumentException occurred in 
the LifelineSender at one DataNode, because the DataNode was undergoing GC at 
that time. 


> IllegalArgumentException in LifelineSender
> --
>
> Key: HDFS-16623
> URL: https://issues.apache.org/jira/browse/HDFS-16623
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>
> In our production environment, an IllegalArgumentException occurred in the 
> LifelineSender at one DataNode which was undergoing GC at that time.
> The buggy code is at line 1060 of BPServiceActor.java: the computed sleep 
> time can be negative.
> {code:java}
>  while (shouldRun()) {
> try {
>   if (lifelineNamenode == null) {
> lifelineNamenode = dn.connectToLifelineNN(lifelineNnAddr);
>   }
>   sendLifelineIfDue();
>   Thread.sleep(scheduler.getLifelineWaitTime());
> } catch (InterruptedException e) {
>   Thread.currentThread().interrupt();
> } catch (IOException e) {
>   LOG.warn("IOException in LifelineSender for " + BPServiceActor.this,
>   e);
> }
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16623) IllegalArgumentException in LifelineSender

2022-06-06 Thread ZanderXu (Jira)
ZanderXu created HDFS-16623:
---

 Summary: IllegalArgumentException in LifelineSender
 Key: HDFS-16623
 URL: https://issues.apache.org/jira/browse/HDFS-16623
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: ZanderXu
Assignee: ZanderXu


In our production environment, an IllegalArgumentException occurred in the 
LifelineSender at one DataNode, because the DataNode was undergoing GC at that 
time. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16622) addRDBI in IncrementalBlockReportManager may remove the block with bigger GS.

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16622?focusedWorklogId=778582&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778582
 ]

ASF GitHub Bot logged work on HDFS-16622:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 11:27
Start Date: 06/Jun/22 11:27
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4407:
URL: https://github.com/apache/hadoop/pull/4407#issuecomment-1147344802

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  3s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 18s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 44s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 395m  0s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  2s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 512m 44s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4407/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4407 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux e7745f582308 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 91f7ff3a9989a9a18398cf8c82b1e30492a86bad |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4407/1/testReport/ |
   | Max. process+thread count | 2066 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project

[jira] [Work logged] (HDFS-16618) sync_file_range error should include more volume and file info

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16618?focusedWorklogId=778568&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778568
 ]

ASF GitHub Bot logged work on HDFS-16618:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 10:14
Start Date: 06/Jun/22 10:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4402:
URL: https://github.com/apache/hadoop/pull/4402#issuecomment-1147286805

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 42s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 36s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 49s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 334m 25s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 450m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4402/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4402 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 96ca7b411d7f 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ed59324d8a2bb7388546c781931c32a346d00d7a |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4402/2/testReport/ |
   | Max. process+thread count | 2270 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project

[jira] [Resolved] (HDFS-16608) Fix the link in TestClientProtocolForPipelineRecovery

2022-06-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-16608.
--
Fix Version/s: 3.4.0
   3.3.4
   Resolution: Fixed

Committed to trunk and branch-3.3. Thank you [~samrat007] for your contribution.

> Fix the link in TestClientProtocolForPipelineRecovery
> -
>
> Key: HDFS-16608
> URL: https://issues.apache.org/jira/browse/HDFS-16608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Samrat Deb
>Assignee: Samrat Deb
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-16608) Fix the link in TestClientProtocolForPipelineRecovery

2022-06-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HDFS-16608:


Assignee: Samrat Deb

> Fix the link in TestClientProtocolForPipelineRecovery
> -
>
> Key: HDFS-16608
> URL: https://issues.apache.org/jira/browse/HDFS-16608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Samrat Deb
>Assignee: Samrat Deb
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16608) Fix the link in TestClientProtocolForPipelineRecovery

2022-06-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-16608:
-
Summary: Fix the link in TestClientProtocolForPipelineRecovery  (was: @Link 
in doc to private variable DataStreamer. pipelineRecoveryCount)

> Fix the link in TestClientProtocolForPipelineRecovery
> -
>
> Key: HDFS-16608
> URL: https://issues.apache.org/jira/browse/HDFS-16608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Samrat Deb
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16608) @Link in doc to private variable DataStreamer. pipelineRecoveryCount

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16608?focusedWorklogId=778554&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778554
 ]

ASF GitHub Bot logged work on HDFS-16608:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 08:58
Start Date: 06/Jun/22 08:58
Worklog Time Spent: 10m 
  Work Description: aajisaka merged PR #4379:
URL: https://github.com/apache/hadoop/pull/4379




Issue Time Tracking
---

Worklog Id: (was: 778554)
Time Spent: 0.5h  (was: 20m)

> @Link in doc to private variable DataStreamer. pipelineRecoveryCount
> 
>
> Key: HDFS-16608
> URL: https://issues.apache.org/jira/browse/HDFS-16608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Samrat Deb
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16608) @Link in doc to private variable DataStreamer. pipelineRecoveryCount

2022-06-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-16608:
-
Component/s: documentation

> @Link in doc to private variable DataStreamer. pipelineRecoveryCount
> 
>
> Key: HDFS-16608
> URL: https://issues.apache.org/jira/browse/HDFS-16608
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Samrat Deb
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16576) Remove unused Imports in Hadoop HDFS project

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16576?focusedWorklogId=778551&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778551
 ]

ASF GitHub Bot logged work on HDFS-16576:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 08:44
Start Date: 06/Jun/22 08:44
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on PR #4389:
URL: https://github.com/apache/hadoop/pull/4389#issuecomment-1147199575

   The change looks good to me. Rerun the precommit job to verify it can 
compile.




Issue Time Tracking
---

Worklog Id: (was: 778551)
Time Spent: 40m  (was: 0.5h)

> Remove unused Imports in Hadoop HDFS project
> 
>
> Key: HDFS-16576
> URL: https://issues.apache.org/jira/browse/HDFS-16576
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> h3. Optimize Imports to keep code clean
>  # Remove any unused imports



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16576) Remove unused Imports in Hadoop HDFS project

2022-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16576?focusedWorklogId=778550&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-778550
 ]

ASF GitHub Bot logged work on HDFS-16576:
-

Author: ASF GitHub Bot
Created on: 06/Jun/22 08:43
Start Date: 06/Jun/22 08:43
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on PR #4389:
URL: https://github.com/apache/hadoop/pull/4389#issuecomment-1147198550

   The change looks good to me. Rerun the precommit job to verify it can 
compile.




Issue Time Tracking
---

Worklog Id: (was: 778550)
Time Spent: 0.5h  (was: 20m)

> Remove unused Imports in Hadoop HDFS project
> 
>
> Key: HDFS-16576
> URL: https://issues.apache.org/jira/browse/HDFS-16576
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> h3. Optimize Imports to keep code clean
>  # Remove any unused imports



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org