[jira] [Resolved] (HDFS-16034) Disk failures exceeding the DFIP threshold do not shutdown datanode

2021-05-21 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HDFS-16034.
---
Resolution: Incomplete

> Disk failures exceeding the DFIP threshold do not shutdown datanode
> ---
>
> Key: HDFS-16034
> URL: https://issues.apache.org/jira/browse/HDFS-16034
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.10.1, 3.4.0
> Environment: HDFS-11182.
>Reporter: Kihwal Lee
>Priority: Major
>
> HDFS-11182 made DataNode use the new {{DatasetVolumeChecker}}.  But 
> {{handleVolumeFailures()}} blows up partway through, so it never reaches 
> {{handleDiskError()}}, where the datanode is potentially shut down.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16034) Disk failures exceeding the DFIP threshold do not shutdown datanode

2021-05-21 Thread Kihwal Lee (Jira)
Kihwal Lee created HDFS-16034:
-

 Summary: Disk failures exceeding the DFIP threshold do not 
shutdown datanode
 Key: HDFS-16034
 URL: https://issues.apache.org/jira/browse/HDFS-16034
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.10.1, 3.4.0
 Environment: HDFS-11182.
Reporter: Kihwal Lee


HDFS-11182 made DataNode use the new {{DatasetVolumeChecker}}.  But 
{{handleVolumeFailures()}} blows up partway through, so it never reaches 
{{handleDiskError()}}, where the datanode is potentially shut down.
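Illustrative only: a minimal, self-contained sketch of the failure pattern described above. The class, method names, and threshold are hypothetical stand-ins rather than the actual DataNode code; it just shows how an exception thrown partway through the volume-failure handling can prevent the shutdown decision from ever running unless the call is guarded.

{code:java}
// Hypothetical sketch: not the Hadoop implementation.
import java.util.Arrays;
import java.util.List;

public class VolumeFailureSketch {
  // Hypothetical stand-in for the tolerated-failures (DFIP) threshold.
  static final int FAILED_VOLUMES_TOLERATED = 1;

  // If this throws partway through (the "blows up" case), the caller never
  // reaches handleDiskError() below and the node keeps running.
  static void handleVolumeFailures(List<String> failedVolumes) {
    for (String volume : failedVolumes) {
      if (volume == null) {
        throw new IllegalStateException("volume reference unexpectedly null");
      }
      System.out.println("removed failed volume " + volume);
    }
  }

  // Decides whether the threshold is exceeded and the node should shut down.
  static void handleDiskError(int failedCount) {
    if (failedCount > FAILED_VOLUMES_TOLERATED) {
      System.out.println("too many failed volumes, shutting down datanode");
    }
  }

  public static void main(String[] args) {
    List<String> failed = Arrays.asList("/data1", null, "/data2");
    try {
      handleVolumeFailures(failed);
    } catch (IllegalStateException e) {
      System.out.println("handleVolumeFailures failed: " + e.getMessage());
    } finally {
      // Guarding the call is one way to make sure the shutdown decision still
      // runs; whether HDFS resolves the bug this way is not stated here.
      handleDiskError(failed.size());
    }
  }
}
{code}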







[jira] [Work logged] (HDFS-13522) Support observer node from Router-Based Federation

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13522?focusedWorklogId=600628&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600628
 ]

ASF GitHub Bot logged work on HDFS-13522:
-

Author: ASF GitHub Bot
Created on: 21/May/21 20:25
Start Date: 21/May/21 20:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3005:
URL: https://github.com/apache/hadoop/pull/3005#issuecomment-846233389


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 11 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 44s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  20m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   9m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 29s |  |  the patch passed  |
   | -1 :x: |  compile  |  12m 30s | 
[/patch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/12/artifact/out/patch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.  |
   | -1 :x: |  javac  |  12m 30s | 
[/patch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/12/artifact/out/patch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.  |
   | +1 :green_heart: |  compile  |  20m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m 59s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m  0s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/12/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 9 new + 439 unchanged - 1 fixed = 448 total (was 
440)  |
   | +1 :green_heart: |  mvnsite  |   5m  1s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  3s |  |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  javadoc  |   0m 38s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/12/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt)
 |  hadoop-hdfs-rbf in the patch failed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.  |
   | +1 :green_heart: |  javadoc  |   5m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |  12m 38s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 41s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 40s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 384m 55s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/12/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  2

[jira] [Work logged] (HDFS-16032) DFSClient#delete supports Trash

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16032?focusedWorklogId=600627&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600627
 ]

ASF GitHub Bot logged work on HDFS-16032:
-

Author: ASF GitHub Bot
Created on: 21/May/21 20:19
Start Date: 21/May/21 20:19
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3039:
URL: https://github.com/apache/hadoop/pull/3039#issuecomment-846230648


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 55s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   6m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   5m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   4m 52s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 13s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3039/2/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 1 new + 119 unchanged - 0 fixed = 
120 total (was 119)  |
   | +1 :green_heart: |  mvnsite  |   2m  9s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 19s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 332m 41s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3039/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 458m 46s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.cli.TestErasureCodingCLI |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.tools.TestHdfsConfigFields |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3039/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3039 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 259f6e9967c8 4.15.0-128-generic #

[jira] [Work logged] (HDFS-13522) Support observer node from Router-Based Federation

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13522?focusedWorklogId=600609&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600609
 ]

ASF GitHub Bot logged work on HDFS-13522:
-

Author: ASF GitHub Bot
Created on: 21/May/21 19:45
Start Date: 21/May/21 19:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3005:
URL: https://github.com/apache/hadoop/pull/3005#issuecomment-846208581


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 11 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 48s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  26m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  22m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m 37s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   5m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |  11m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  3s |  |  the patch passed  |
   | -1 :x: |  compile  |  15m  3s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/11/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |  15m  3s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/11/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | +1 :green_heart: |  compile  |  22m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  22m 59s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 34s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/11/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 10 new + 439 unchanged - 1 fixed = 449 total (was 
440)  |
   | +1 :green_heart: |  mvnsite  |   5m 38s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  javadoc  |   0m 41s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/11/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs-rbf in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | +1 :green_heart: |  javadoc  |   5m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |  11m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 38s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 56s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 41s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 346m 29s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  

[jira] [Work logged] (HDFS-16033) Fix issue of the StatisticsDataReferenceCleaner cleanUp

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16033?focusedWorklogId=600571&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600571
 ]

ASF GitHub Bot logged work on HDFS-16033:
-

Author: ASF GitHub Bot
Created on: 21/May/21 18:11
Start Date: 21/May/21 18:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3042:
URL: https://github.com/apache/hadoop/pull/3042#issuecomment-846144487


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 33s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  20m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  17m 59s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  5s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3042/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 2 new + 75 
unchanged - 0 fixed = 77 total (was 75)  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  3s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 177m 49s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3042/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3042 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux ee328c833ac4 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a1ec6037d2edb0eb9a0695c56f2e091a1a91c4a8 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/P

[jira] [Work logged] (HDFS-16032) DFSClient#delete supports Trash

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16032?focusedWorklogId=600481&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600481
 ]

ASF GitHub Bot logged work on HDFS-16032:
-

Author: ASF GitHub Bot
Created on: 21/May/21 16:25
Start Date: 21/May/21 16:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3039:
URL: https://github.com/apache/hadoop/pull/3039#issuecomment-846082229


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  9s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   5m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   5m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  6s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   4m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   4m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m  9s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   5m 58s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 21s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 231m  0s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3039/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 354m 42s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.tools.TestHdfsConfigFields |
   |   | hadoop.cli.TestErasureCodingCLI |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3039/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3039 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 01a9e8ac9b7a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ffef0e3da9697106e162b71c46240a2ae1f2f27f |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jv

[jira] [Work logged] (HDFS-16033) Fix issue of the StatisticsDataReferenceCleaner cleanUp

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16033?focusedWorklogId=600432&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600432
 ]

ASF GitHub Bot logged work on HDFS-16033:
-

Author: ASF GitHub Bot
Created on: 21/May/21 15:23
Start Date: 21/May/21 15:23
Worklog Time Spent: 10m 
  Work Description: yikf commented on a change in pull request #3042:
URL: https://github.com/apache/hadoop/pull/3042#discussion_r637005610



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
##
@@ -4004,12 +4004,13 @@ public void cleanUp() {
   * Background action to act on references being removed.
   */
  private static class StatisticsDataReferenceCleaner implements Runnable {
+    private int REF_QUEUE_POLL_TIMEOUT = 100;
    @Override
    public void run() {
      while (!Thread.interrupted()) {
        try {
          StatisticsDataReference ref =
-              (StatisticsDataReference)STATS_DATA_REF_QUEUE.remove();

Review comment:
   For reviewers, quoting the JDK ReferenceQueue source:
   
   ```
   public Reference<? extends T> remove(long timeout)
       throws IllegalArgumentException, InterruptedException
   {
       if (timeout < 0) {
           throw new IllegalArgumentException("Negative timeout value");
       }
       synchronized (lock) {
           Reference<? extends T> r = reallyPoll();
           if (r != null) return r;
           long start = (timeout == 0) ? 0 : System.nanoTime();
           for (;;) {
               lock.wait(timeout);
               r = reallyPoll();
               if (r != null) return r;
               if (timeout != 0) {
                   long end = System.nanoTime();
                   timeout -= (end - start) / 1000_000;
                   if (timeout <= 0) return null;
                   start = end;
               }
           }
       }
   }

   /**
    * Removes the next reference object in this queue, blocking until one
    * becomes available.
    *
    * @return A reference object, blocking until one becomes available
    * @throws  InterruptedException  If the wait is interrupted
    */
   public Reference<? extends T> remove() throws InterruptedException {
       return remove(0);
   }
   ```
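   A minimal sketch of the poll-with-timeout pattern that the quoted remove(long) overload enables, assuming a hypothetical 100 ms constant named after REF_QUEUE_POLL_TIMEOUT from the diff; this is illustrative only, not the actual Hadoop patch:

   ```
   // Hedged sketch: a cleaner loop that polls with a timeout so the thread can
   // re-check interruption instead of parking forever inside remove().
   import java.lang.ref.Reference;
   import java.lang.ref.ReferenceQueue;
   import java.lang.ref.WeakReference;

   public class TimedCleanerSketch implements Runnable {
     private static final long REF_QUEUE_POLL_TIMEOUT = 100;  // ms, assumed value
     private final ReferenceQueue<Object> queue = new ReferenceQueue<>();

     @Override
     public void run() {
       while (!Thread.interrupted()) {
         try {
           // remove(timeout) returns null once the timeout elapses, unlike
           // remove(), which blocks until something is enqueued.
           Reference<?> ref = queue.remove(REF_QUEUE_POLL_TIMEOUT);
           if (ref == null) {
             continue;       // nothing enqueued yet; poll again
           }
           ref.clear();      // stand-in for the real cleanup work
         } catch (InterruptedException e) {
           Thread.currentThread().interrupt();
           return;
         }
       }
     }

     public static void main(String[] args) throws Exception {
       TimedCleanerSketch cleaner = new TimedCleanerSketch();
       Thread t = new Thread(cleaner, "cleaner-sketch");
       t.setDaemon(true);
       t.start();
       // The referent becomes unreachable immediately, so the reference may be
       // enqueued after a GC; the loop keeps polling either way.
       new WeakReference<>(new Object(), cleaner.queue);
       Thread.sleep(300);
       t.interrupt();
     }
   }
   ```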




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600432)
Time Spent: 1h  (was: 50m)

> Fix issue of the StatisticsDataReferenceCleaner cleanUp
> ---
>
> Key: HDFS-16033
> URL: https://issues.apache.org/jira/browse/HDFS-16033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: yikf
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The Cleaner thread will be blocked if we remove a reference from the 
> ReferenceQueue unless `queue.enqueue` is called.
> 
>     As shown below, we currently call ReferenceQueue.remove() during cleanUp. 
> The call chain is:
>                          *StatisticsDataReferenceCleaner#queue.remove()  ->  
> ReferenceQueue.remove(0)  -> lock.wait(0)*
>     But lock.notifyAll is called only from queue.enqueue, so the Cleaner 
> thread will be blocked.
>  
> ThreadDump:
> {code:java}
> "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
> nid=0x2119 in Object.wait() [0x7f7b0023]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at java.lang.Object.wait(Object.java:502)
> at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
> - locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at 
> java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}






[jira] [Work logged] (HDFS-16033) Fix issue of the StatisticsDataReferenceCleaner cleanUp

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16033?focusedWorklogId=600431&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600431
 ]

ASF GitHub Bot logged work on HDFS-16033:
-

Author: ASF GitHub Bot
Created on: 21/May/21 15:22
Start Date: 21/May/21 15:22
Worklog Time Spent: 10m 
  Work Description: yikf removed a comment on pull request #3042:
URL: https://github.com/apache/hadoop/pull/3042#issuecomment-846029637


   For reviewers, quoting the JDK ReferenceQueue source:
   
   ```
   public Reference<? extends T> remove(long timeout)
       throws IllegalArgumentException, InterruptedException
   {
       if (timeout < 0) {
           throw new IllegalArgumentException("Negative timeout value");
       }
       synchronized (lock) {
           Reference<? extends T> r = reallyPoll();
           if (r != null) return r;
           long start = (timeout == 0) ? 0 : System.nanoTime();
           for (;;) {
               lock.wait(timeout);
               r = reallyPoll();
               if (r != null) return r;
               if (timeout != 0) {
                   long end = System.nanoTime();
                   timeout -= (end - start) / 1000_000;
                   if (timeout <= 0) return null;
                   start = end;
               }
           }
       }
   }

   /**
    * Removes the next reference object in this queue, blocking until one
    * becomes available.
    *
    * @return A reference object, blocking until one becomes available
    * @throws  InterruptedException  If the wait is interrupted
    */
   public Reference<? extends T> remove() throws InterruptedException {
       return remove(0);
   }
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600431)
Time Spent: 50m  (was: 40m)

> Fix issue of the StatisticsDataReferenceCleaner cleanUp
> ---
>
> Key: HDFS-16033
> URL: https://issues.apache.org/jira/browse/HDFS-16033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: yikf
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The Cleaner thread will be blocked if we remove a reference from the 
> ReferenceQueue unless `queue.enqueue` is called.
> 
>     As shown below, we currently call ReferenceQueue.remove() during cleanUp. 
> The call chain is:
>                          *StatisticsDataReferenceCleaner#queue.remove()  ->  
> ReferenceQueue.remove(0)  -> lock.wait(0)*
>     But lock.notifyAll is called only from queue.enqueue, so the Cleaner 
> thread will be blocked.
>  
> ThreadDump:
> {code:java}
> "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
> nid=0x2119 in Object.wait() [0x7f7b0023]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at java.lang.Object.wait(Object.java:502)
> at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
> - locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at 
> java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}






[jira] [Work logged] (HDFS-16033) Fix issue of the StatisticsDataReferenceCleaner cleanUp

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16033?focusedWorklogId=600430&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600430
 ]

ASF GitHub Bot logged work on HDFS-16033:
-

Author: ASF GitHub Bot
Created on: 21/May/21 15:22
Start Date: 21/May/21 15:22
Worklog Time Spent: 10m 
  Work Description: yikf edited a comment on pull request #3042:
URL: https://github.com/apache/hadoop/pull/3042#issuecomment-846029637


   For reviewers, quoting the JDK ReferenceQueue source:
   
   ```
   public Reference<? extends T> remove(long timeout)
       throws IllegalArgumentException, InterruptedException
   {
       if (timeout < 0) {
           throw new IllegalArgumentException("Negative timeout value");
       }
       synchronized (lock) {
           Reference<? extends T> r = reallyPoll();
           if (r != null) return r;
           long start = (timeout == 0) ? 0 : System.nanoTime();
           for (;;) {
               lock.wait(timeout);
               r = reallyPoll();
               if (r != null) return r;
               if (timeout != 0) {
                   long end = System.nanoTime();
                   timeout -= (end - start) / 1000_000;
                   if (timeout <= 0) return null;
                   start = end;
               }
           }
       }
   }

   /**
    * Removes the next reference object in this queue, blocking until one
    * becomes available.
    *
    * @return A reference object, blocking until one becomes available
    * @throws  InterruptedException  If the wait is interrupted
    */
   public Reference<? extends T> remove() throws InterruptedException {
       return remove(0);
   }
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600430)
Time Spent: 40m  (was: 0.5h)

> Fix issue of the StatisticsDataReferenceCleaner cleanUp
> ---
>
> Key: HDFS-16033
> URL: https://issues.apache.org/jira/browse/HDFS-16033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: yikf
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The Cleaner thread will be blocked if we remove a reference from the 
> ReferenceQueue unless `queue.enqueue` is called.
> 
>     As shown below, we currently call ReferenceQueue.remove() during cleanUp. 
> The call chain is:
>                          *StatisticsDataReferenceCleaner#queue.remove()  ->  
> ReferenceQueue.remove(0)  -> lock.wait(0)*
>     But lock.notifyAll is called only from queue.enqueue, so the Cleaner 
> thread will be blocked.
>  
> ThreadDump:
> {code:java}
> "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
> nid=0x2119 in Object.wait() [0x7f7b0023]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at java.lang.Object.wait(Object.java:502)
> at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
> - locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at 
> java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}






[jira] [Work logged] (HDFS-16033) Fix issue of the StatisticsDataReferenceCleaner cleanUp

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16033?focusedWorklogId=600429&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600429
 ]

ASF GitHub Bot logged work on HDFS-16033:
-

Author: ASF GitHub Bot
Created on: 21/May/21 15:21
Start Date: 21/May/21 15:21
Worklog Time Spent: 10m 
  Work Description: yikf commented on pull request #3042:
URL: https://github.com/apache/hadoop/pull/3042#issuecomment-846029637


   For reviewers:
   
   ```
   public Reference<? extends T> remove(long timeout)
       throws IllegalArgumentException, InterruptedException
   {
       if (timeout < 0) {
           throw new IllegalArgumentException("Negative timeout value");
       }
       synchronized (lock) {
           Reference<? extends T> r = reallyPoll();
           if (r != null) return r;
           long start = (timeout == 0) ? 0 : System.nanoTime();
           for (;;) {
               lock.wait(timeout);
               r = reallyPoll();
               if (r != null) return r;
               if (timeout != 0) {
                   long end = System.nanoTime();
                   timeout -= (end - start) / 1000_000;
                   if (timeout <= 0) return null;
                   start = end;
               }
           }
       }
   }

   /**
    * Removes the next reference object in this queue, blocking until one
    * becomes available.
    *
    * @return A reference object, blocking until one becomes available
    * @throws  InterruptedException  If the wait is interrupted
    */
   public Reference<? extends T> remove() throws InterruptedException {
       return remove(0);
   }
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600429)
Time Spent: 0.5h  (was: 20m)

> Fix issue of the StatisticsDataReferenceCleaner cleanUp
> ---
>
> Key: HDFS-16033
> URL: https://issues.apache.org/jira/browse/HDFS-16033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: yikf
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The Cleaner thread will be blocked if we remove a reference from the 
> ReferenceQueue unless `queue.enqueue` is called.
> 
>     As shown below, we currently call ReferenceQueue.remove() during cleanUp. 
> The call chain is:
>                          *StatisticsDataReferenceCleaner#queue.remove()  ->  
> ReferenceQueue.remove(0)  -> lock.wait(0)*
>     But lock.notifyAll is called only from queue.enqueue, so the Cleaner 
> thread will be blocked.
>  
> ThreadDump:
> {code:java}
> "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
> nid=0x2119 in Object.wait() [0x7f7b0023]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at java.lang.Object.wait(Object.java:502)
> at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
> - locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at 
> java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}






[jira] [Work logged] (HDFS-16033) Fix issue of the StatisticsDataReferenceCleaner cleanUp

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16033?focusedWorklogId=600425&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600425
 ]

ASF GitHub Bot logged work on HDFS-16033:
-

Author: ASF GitHub Bot
Created on: 21/May/21 15:15
Start Date: 21/May/21 15:15
Worklog Time Spent: 10m 
  Work Description: yikf commented on pull request #3042:
URL: https://github.com/apache/hadoop/pull/3042#issuecomment-846025374


   Gentle cc @steveloughran. Could you take a look when you have time, please?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600425)
Time Spent: 20m  (was: 10m)

> Fix issue of the StatisticsDataReferenceCleaner cleanUp
> ---
>
> Key: HDFS-16033
> URL: https://issues.apache.org/jira/browse/HDFS-16033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: yikf
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Cleaner thread will be blocked if we remove a reference from the 
> ReferenceQueue unless `queue.enqueue` is called.
> 
>     As shown below, we currently call ReferenceQueue.remove() during cleanUp. 
> The call chain is:
>                          *StatisticsDataReferenceCleaner#queue.remove()  ->  
> ReferenceQueue.remove(0)  -> lock.wait(0)*
>     But lock.notifyAll is called only from queue.enqueue, so the Cleaner 
> thread will be blocked.
>  
> ThreadDump:
> {code:java}
> "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
> nid=0x2119 in Object.wait() [0x7f7b0023]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at java.lang.Object.wait(Object.java:502)
> at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
> - locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at 
> java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}






[jira] [Work logged] (HDFS-16033) Fix issue of the StatisticsDataReferenceCleaner cleanUp

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16033?focusedWorklogId=600421&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600421
 ]

ASF GitHub Bot logged work on HDFS-16033:
-

Author: ASF GitHub Bot
Created on: 21/May/21 15:12
Start Date: 21/May/21 15:12
Worklog Time Spent: 10m 
  Work Description: yikf opened a new pull request #3042:
URL: https://github.com/apache/hadoop/pull/3042


   ### What changes were proposed in this pull request?
   In `StatisticsDataReferenceCleaner`, the Cleaner thread will be blocked if we 
remove a reference from the ReferenceQueue unless `queue.enqueue` is called, 
which does not currently happen.
   
   As shown below, we currently call ReferenceQueue.remove() during cleanUp. The 
call chain is:
   `StatisticsDataReferenceCleaner#queue.remove()  ->  ReferenceQueue.remove(0) 
 -> lock.wait(0)`
   
   lock.wait(0) waits forever unless lock.notify/notifyAll is called, but 
lock.notifyAll is called only from queue.enqueue, so the Cleaner thread will be 
blocked.
   
   **ThreadDump:**
   ```
   "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
nid=0x2119 in Object.wait() [0x7f7b0023]
  java.lang.Thread.State: WAITING (on object monitor)
   at java.lang.Object.wait(Native Method)
   - waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
   at java.lang.Object.wait(Object.java:502)
   at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
   - locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
   at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153)
   ```
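   For illustration only (a self-contained sketch, not part of this PR): the blocking behaviour described above is easy to reproduce with a plain ReferenceQueue. A thread calling remove() on an empty queue sits in the same WAITING state as the thread dump until something is enqueued or it is interrupted.
   
   ```
   // Demonstrates that ReferenceQueue.remove() == remove(0) -> lock.wait(0)
   // blocks indefinitely when nothing is ever enqueued.
   import java.lang.ref.ReferenceQueue;

   public class BlockingRemoveDemo {
     public static void main(String[] args) throws Exception {
       ReferenceQueue<Object> queue = new ReferenceQueue<>();
       Thread cleaner = new Thread(() -> {
         try {
           queue.remove();   // blocks forever: only enqueue() calls lock.notifyAll()
         } catch (InterruptedException ignored) {
           // interrupted by main below
         }
       }, "cleaner-demo");
       cleaner.start();
       Thread.sleep(500);
       System.out.println(cleaner.getName() + " state = " + cleaner.getState());  // WAITING
       cleaner.interrupt();  // interruption (or an enqueue) is the only way out
     }
   }
   ```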


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600421)
Remaining Estimate: 0h
Time Spent: 10m

> Fix issue of the StatisticsDataReferenceCleaner cleanUp
> ---
>
> Key: HDFS-16033
> URL: https://issues.apache.org/jira/browse/HDFS-16033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: yikf
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Cleaner thread will be blocked if we remove a reference from the 
> ReferenceQueue unless `queue.enqueue` is called.
> 
>     As shown below, we currently call ReferenceQueue.remove() during cleanUp. 
> The call chain is:
>                          *StatisticsDataReferenceCleaner#queue.remove()  ->  
> ReferenceQueue.remove(0)  -> lock.wait(0)*
>     But lock.notifyAll is called only from queue.enqueue, so the Cleaner 
> thread will be blocked.
>  
> ThreadDump:
> {code:java}
> "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
> nid=0x2119 in Object.wait() [0x7f7b0023]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at java.lang.Object.wait(Object.java:502)
> at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
> - locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at 
> java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}






[jira] [Updated] (HDFS-16033) Fix issue of the StatisticsDataReferenceCleaner cleanUp

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16033:
--
Labels: pull-request-available  (was: )

> Fix issue of the StatisticsDataReferenceCleaner cleanUp
> ---
>
> Key: HDFS-16033
> URL: https://issues.apache.org/jira/browse/HDFS-16033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: yikf
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Cleaner thread will be blocked if we remove a reference from the 
> ReferenceQueue unless `queue.enqueue` is called.
> 
>     As shown below, we currently call ReferenceQueue.remove() during cleanUp. 
> The call chain is:
>                          *StatisticsDataReferenceCleaner#queue.remove()  ->  
> ReferenceQueue.remove(0)  -> lock.wait(0)*
>     But lock.notifyAll is called only from queue.enqueue, so the Cleaner 
> thread will be blocked.
>  
> ThreadDump:
> {code:java}
> "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
> nid=0x2119 in Object.wait() [0x7f7b0023]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at java.lang.Object.wait(Object.java:502)
> at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
> - locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at 
> java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}






[jira] [Work logged] (HDFS-13522) Support observer node from Router-Based Federation

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13522?focusedWorklogId=600415&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600415
 ]

ASF GitHub Bot logged work on HDFS-13522:
-

Author: ASF GitHub Bot
Created on: 21/May/21 15:09
Start Date: 21/May/21 15:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3005:
URL: https://github.com/apache/hadoop/pull/3005#issuecomment-846020614


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  0s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  20m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m  4s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   5m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   9m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  22m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 53s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/10/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 11 new + 439 unchanged - 1 fixed = 450 total (was 
440)  |
   | +1 :green_heart: |  mvnsite  |   5m  3s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   4m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   5m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |  10m 47s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 37s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 39s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 469m 36s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  28m 47s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  8s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 738m 59s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
   |   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   |   | hadoop.hdfs.TestViewDistributedFileSystemContract |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.cli.TestErasureCodingCLI |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | ha

[jira] [Work logged] (HDFS-13522) Support observer node from Router-Based Federation

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13522?focusedWorklogId=600411&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600411
 ]

ASF GitHub Bot logged work on HDFS-13522:
-

Author: ASF GitHub Bot
Created on: 21/May/21 15:06
Start Date: 21/May/21 15:06
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3005:
URL: https://github.com/apache/hadoop/pull/3005#issuecomment-846018323


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 59s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  20m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 52s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m  8s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   5m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   9m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  22m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 51s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/9/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 11 new + 439 unchanged - 1 fixed = 450 total (was 
440)  |
   | +1 :green_heart: |  mvnsite  |   4m 59s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   4m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   5m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |  10m 51s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 26s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 35s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 468m 18s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  29m 12s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  8s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 737m 40s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
   |   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   |   | hadoop.hdfs.TestViewDistributedFileSystemContract |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.cli.TestErasureCodingCLI |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hado

[jira] [Work logged] (HDFS-15998) Fix NullPointException In listOpenFiles

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15998?focusedWorklogId=600392&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600392
 ]

ASF GitHub Bot logged work on HDFS-15998:
-

Author: ASF GitHub Bot
Created on: 21/May/21 14:33
Start Date: 21/May/21 14:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3036:
URL: https://github.com/apache/hadoop/pull/3036#issuecomment-845992995


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  7s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 377m 47s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3036/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 476m 16s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.cli.TestErasureCodingCLI |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3036/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3036 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 1e1ac61cea83 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | g

[jira] [Updated] (HDFS-16033) Fix issue of the StatisticsDataReferenceCleaner cleanUp

2021-05-21 Thread yikf (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yikf updated HDFS-16033:

Description: 
The Cleaner thread will be blocked when it removes a reference from the ReferenceQueue unless `queue.enqueue` is called.

    As shown below, cleanUp currently calls ReferenceQueue.remove(); the call chain is as follows:

                         *StatisticsDataReferenceCleaner#queue.remove()  ->  ReferenceQueue.remove(0)  -> lock.wait(0)*

    But lock.notifyAll is only called by queue.enqueue, so the Cleaner thread will stay blocked.

 

ThreadDump:
{code:java}
"Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
nid=0x2119 in Object.wait() [0x7f7b0023]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
at java.lang.Object.wait(Object.java:502)
at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
- locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
at 
java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}

  was:
The Cleaner thread will be blocked when it removes a reference from the ReferenceQueue unless `queue.enqueue` is called.

    As shown below, cleanUp currently calls ReferenceQueue.remove(); the call chain is as follows:

                         StatisticsDataReferenceCleaner#queue.remove()  ->  ReferenceQueue.remove(0)  -> lock.wait(0)

    But lock.notifyAll is only called by queue.enqueue, so the Cleaner thread will stay blocked.

 

ThreadDump:
{code:java}
"Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
nid=0x2119 in Object.wait() [0x7f7b0023]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
at java.lang.Object.wait(Object.java:502)
at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
- locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
at 
java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}


> Fix issue of the StatisticsDataReferenceCleaner cleanUp
> ---
>
> Key: HDFS-16033
> URL: https://issues.apache.org/jira/browse/HDFS-16033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: yikf
>Priority: Minor
>
> The Cleaner thread will be blocked when it removes a reference from the ReferenceQueue unless `queue.enqueue` is called.
> 
>     As shown below, cleanUp currently calls ReferenceQueue.remove(); the call chain is as follows:
>                          *StatisticsDataReferenceCleaner#queue.remove()  ->  ReferenceQueue.remove(0)  -> lock.wait(0)*
>     But lock.notifyAll is only called by queue.enqueue, so the Cleaner thread will stay blocked.
>  
> ThreadDump:
> {code:java}
> "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
> nid=0x2119 in Object.wait() [0x7f7b0023]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at java.lang.Object.wait(Object.java:502)
> at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
> - locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at 
> java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}
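
To make the blocking behaviour described above concrete, here is a minimal, self-contained JDK sketch (not the Hadoop StatisticsDataReferenceCleaner itself; the class name ReferenceQueueDemo is made up) contrasting the blocking remove() path with a non-blocking poll():

{code:java}
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class ReferenceQueueDemo {
  public static void main(String[] args) throws InterruptedException {
    ReferenceQueue<Object> queue = new ReferenceQueue<>();
    Object referent = new Object();
    WeakReference<Object> ref = new WeakReference<>(referent, queue);

    // queue.remove() / queue.remove(0) park in lock.wait(0) and only return
    // once a cleared reference is enqueued, because lock.notifyAll() happens
    // inside ReferenceQueue.enqueue(). With nothing enqueued, the caller
    // blocks indefinitely -- the situation shown in the thread dump above.

    // A non-blocking alternative: poll() returns null immediately when the
    // queue is empty, so a cleanup loop can check a shutdown flag between
    // polls instead of parking inside Object.wait().
    referent = null;   // drop the strong reference
    System.gc();       // only a hint; enqueueing is still not guaranteed
    Thread.sleep(100);
    Reference<?> cleared = queue.poll();
    System.out.println("polled: " + cleared + " (same ref: " + (cleared == ref) + ")");
  }
}
{code}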



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16033) Fix issue of the StatisticsDataReferenceCleaner cleanUp

2021-05-21 Thread yikf (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yikf updated HDFS-16033:

Description: 
The Cleaner thread will be blocked when it removes a reference from the ReferenceQueue unless `queue.enqueue` is called.

    As shown below, cleanUp currently calls ReferenceQueue.remove(); the call chain is as follows:

                         StatisticsDataReferenceCleaner#queue.remove()  ->  ReferenceQueue.remove(0)  -> lock.wait(0)

    But lock.notifyAll is only called by queue.enqueue, so the Cleaner thread will stay blocked.

 

ThreadDump:
{code:java}
"Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
nid=0x2119 in Object.wait() [0x7f7b0023]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
at java.lang.Object.wait(Object.java:502)
at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
- locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
at 
java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}

  was:
The Cleaner thread will be blocked when it removes a reference from the ReferenceQueue unless `queue.enqueue` is called.

As shown below, cleanUp currently calls ReferenceQueue.remove(); the call chain is as follows:

StatisticsDataReferenceCleaner#queue.remove()  ->  ReferenceQueue.remove(0)  -> lock.wait(0)

But lock.notifyAll is only called by queue.enqueue, so the Cleaner thread will stay blocked.

ThreadDump:
{code:java}
"Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
nid=0x2119 in Object.wait() [0x7f7b0023]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
at java.lang.Object.wait(Object.java:502)
at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
- locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
at 
java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}


> Fix issue of the StatisticsDataReferenceCleaner cleanUp
> ---
>
> Key: HDFS-16033
> URL: https://issues.apache.org/jira/browse/HDFS-16033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: yikf
>Priority: Minor
>
> The Cleaner thread will be blocked when it removes a reference from the ReferenceQueue unless `queue.enqueue` is called.
> 
>     As shown below, cleanUp currently calls ReferenceQueue.remove(); the call chain is as follows:
>                          StatisticsDataReferenceCleaner#queue.remove()  ->  ReferenceQueue.remove(0)  -> lock.wait(0)
>     But lock.notifyAll is only called by queue.enqueue, so the Cleaner thread will stay blocked.
>  
> ThreadDump:
> {code:java}
> "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
> nid=0x2119 in Object.wait() [0x7f7b0023]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at java.lang.Object.wait(Object.java:502)
> at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
> - locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at 
> java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16033) Fix issue of the StatisticsDataReferenceCleaner cleanUp

2021-05-21 Thread yikf (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yikf updated HDFS-16033:

Description: 
The Cleaner thread will be blocked when it removes a reference from the ReferenceQueue unless `queue.enqueue` is called.

As shown below, cleanUp currently calls ReferenceQueue.remove(); the call chain is as follows:

StatisticsDataReferenceCleaner#queue.remove()  ->  ReferenceQueue.remove(0)  -> lock.wait(0)

But lock.notifyAll is only called by queue.enqueue, so the Cleaner thread will stay blocked.

ThreadDump:
{code:java}
"Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
nid=0x2119 in Object.wait() [0x7f7b0023]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
at java.lang.Object.wait(Object.java:502)
at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
- locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
at 
java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}

  was:
The Cleaner thread will be blocked when it removes a reference from the ReferenceQueue unless `queue.enqueue` is called.

ThreadDump:
{code:java}
"Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
nid=0x2119 in Object.wait() [0x7f7b0023]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
at java.lang.Object.wait(Object.java:502)
at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
- locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
at 
java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}


> Fix issue of the StatisticsDataReferenceCleaner cleanUp
> ---
>
> Key: HDFS-16033
> URL: https://issues.apache.org/jira/browse/HDFS-16033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: yikf
>Priority: Minor
>
> The Cleaner thread will be blocked when it removes a reference from the ReferenceQueue unless `queue.enqueue` is called.
> As shown below, cleanUp currently calls ReferenceQueue.remove(); the call chain is as follows:
> StatisticsDataReferenceCleaner#queue.remove()  ->  ReferenceQueue.remove(0)  -> lock.wait(0)
> But lock.notifyAll is only called by queue.enqueue, so the Cleaner thread will stay blocked.
> ThreadDump:
> {code:java}
> "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
> nid=0x2119 in Object.wait() [0x7f7b0023]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at java.lang.Object.wait(Object.java:502)
> at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
> - locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at 
> java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16033) Fix issue of the StatisticsDataReferenceCleaner cleanUp

2021-05-21 Thread yikf (Jira)
yikf created HDFS-16033:
---

 Summary: Fix issue of the StatisticsDataReferenceCleaner cleanUp
 Key: HDFS-16033
 URL: https://issues.apache.org/jira/browse/HDFS-16033
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.2.1
Reporter: yikf


The Cleaner thread will be blocked when it removes a reference from the ReferenceQueue unless `queue.enqueue` is called.

ThreadDump:
{code:java}
"Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
nid=0x2119 in Object.wait() [0x7f7b0023]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
at java.lang.Object.wait(Object.java:502)
at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
- locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
at 
java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15998) Fix NullPointException In listOpenFiles

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15998?focusedWorklogId=600298&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600298
 ]

ASF GitHub Bot logged work on HDFS-15998:
-

Author: ASF GitHub Bot
Created on: 21/May/21 11:33
Start Date: 21/May/21 11:33
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3036:
URL: https://github.com/apache/hadoop/pull/3036#discussion_r636847332



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
##
@@ -300,8 +300,13 @@ synchronized long getNumUnderConstructionBlocks() {
 Iterator inodeIdIterator = inodeIds.iterator();
 while (inodeIdIterator.hasNext()) {
   Long inodeId = inodeIdIterator.next();
-  final INodeFile inodeFile =
-  fsnamesystem.getFSDirectory().getInode(inodeId).asFile();
+  INode ucFile = fsnamesystem.getFSDirectory().getInode(inodeId);
+  if (ucFile == null) {
+//probably got deleted

Review comment:
   this is possible too.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600298)
Time Spent: 0.5h  (was: 20m)

> Fix NullPointException In listOpenFiles
> ---
>
> Key: HDFS-15998
> URL: https://issues.apache.org/jira/browse/HDFS-15998
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Using the Hadoop 3.2.0 client to execute the following command occasionally throws an NPE:
> hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission 
> -path /xxx
>  
> {quote}
>  org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesBlockingDecom(FSNamesystem.java:1917)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listOpenFiles(FSNamesystem.java:1876)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.listOpenFiles(NameNodeRpcServer.java:1453)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listOpenFiles(ClientNamenodeProtocolServerSideTranslatorPB.java:1894)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   ...
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listOpenFiles(ClientNamenodeProtocolTranslatorPB.java:1952)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>   at com.sun.proxy.$Proxy10.listOpenFiles(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:89)
>   at 
> org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:35)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
>   at 
> org.apache.hadoop.hdfs.tools.DFSAdmin.printOpenFiles(DFSAdmin.java:1006)
>   at 
> org.apache.hadoop.hdfs.tools.DFSAdmin.listOpenFiles(DFSAdmin.java:994)
>   at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2431)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.

[jira] [Work logged] (HDFS-15998) Fix NullPointException In listOpenFiles

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15998?focusedWorklogId=600297&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600297
 ]

ASF GitHub Bot logged work on HDFS-15998:
-

Author: ASF GitHub Bot
Created on: 21/May/21 11:27
Start Date: 21/May/21 11:27
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3036:
URL: https://github.com/apache/hadoop/pull/3036#discussion_r636843883



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##
@@ -2006,8 +2006,14 @@ private void metaSave(PrintWriter out) {
   continue;
 }
 Preconditions.checkState(ucFile instanceof INodeFile);
-openFileIds.add(ucFileId);
+
 INodeFile inodeFile = ucFile.asFile();

Review comment:
   the set `dataNode.getLeavingServiceStatus().getOpenFiles()` is 
maintained by the decommissioning admin thread, so a race condition is possible.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##
@@ -2006,8 +2006,14 @@ private void metaSave(PrintWriter out) {
   continue;
 }
 Preconditions.checkState(ucFile instanceof INodeFile);
-openFileIds.add(ucFileId);
+
 INodeFile inodeFile = ucFile.asFile();

Review comment:
   so yeah, it's possible.
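
To illustrate the race these review comments describe, here is a minimal, self-contained sketch of the pattern being fixed: one thread iterates a snapshot of ids while another thread may delete the underlying entries, so every lookup must tolerate a null result. All names below (StaleIdLookupDemo, resolve, the registry map) are hypothetical, not Hadoop's actual classes.

{code:java}
import java.util.Iterator;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class StaleIdLookupDemo {
  // Ids collected by one thread (e.g. a monitor) ...
  static final Set<Long> openIds = ConcurrentHashMap.newKeySet();
  // ... resolved against a registry that another thread mutates concurrently.
  static final ConcurrentHashMap<Long, String> registry = new ConcurrentHashMap<>();

  static String resolve(long id) {
    return registry.get(id);   // may be null if the entry was deleted meanwhile
  }

  public static void main(String[] args) {
    openIds.add(1L);
    openIds.add(2L);
    registry.put(1L, "file-1");
    registry.put(2L, "file-2");
    registry.remove(1L);       // simulate a concurrent delete of file-1

    for (Iterator<Long> it = openIds.iterator(); it.hasNext(); ) {
      long id = it.next();
      String file = resolve(id);
      if (file == null) {      // probably got deleted; skip it instead of NPE-ing
        continue;
      }
      System.out.println("open file: " + file);
    }
  }
}
{code}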




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600297)
Time Spent: 20m  (was: 10m)

> Fix NullPointException In listOpenFiles
> ---
>
> Key: HDFS-15998
> URL: https://issues.apache.org/jira/browse/HDFS-15998
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Using the Hadoop 3.2.0 client to execute the following command occasionally throws an NPE:
> hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission 
> -path /xxx
>  
> {quote}
>  org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesBlockingDecom(FSNamesystem.java:1917)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listOpenFiles(FSNamesystem.java:1876)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.listOpenFiles(NameNodeRpcServer.java:1453)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listOpenFiles(ClientNamenodeProtocolServerSideTranslatorPB.java:1894)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   ...
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listOpenFiles(ClientNamenodeProtocolTranslatorPB.java:1952)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>   at com.sun.proxy.$Proxy10.listOpenFiles(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:89)
>   at 
> org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:35)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
>   at 
> org.apache.hadoop.hdfs.tools.DFSAdmi

[jira] [Updated] (HDFS-16032) DFSClient#delete supports Trash

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16032:
--
Labels: pull-request-available  (was: )

>  DFSClient#delete supports Trash
> 
>
> Key: HDFS-16032
> URL: https://issues.apache.org/jira/browse/HDFS-16032
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hadoop-client, hdfs
>Affects Versions: 3.4.0
>Reporter: zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, HDFS can only move deleted data to the Trash through shell commands. In practice, most data is deleted through the DFSClient API, so I think the API should support the Trash as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16032) DFSClient#delete supports Trash

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16032?focusedWorklogId=600275&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600275
 ]

ASF GitHub Bot logged work on HDFS-16032:
-

Author: ASF GitHub Bot
Created on: 21/May/21 10:29
Start Date: 21/May/21 10:29
Worklog Time Spent: 10m 
  Work Description: zhuxiangyi opened a new pull request #3039:
URL: https://github.com/apache/hadoop/pull/3039


   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600275)
Remaining Estimate: 0h
Time Spent: 10m

>  DFSClient#delete supports Trash
> 
>
> Key: HDFS-16032
> URL: https://issues.apache.org/jira/browse/HDFS-16032
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hadoop-client, hdfs
>Affects Versions: 3.4.0
>Reporter: zhu
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, HDFS can only move deleted data to the Trash through shell commands. In practice, most data is deleted through the DFSClient API, so I think the API should support the Trash as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16032) DFSClient#delete supports Trash

2021-05-21 Thread zhu (Jira)
zhu created HDFS-16032:
--

 Summary:  DFSClient#delete supports Trash
 Key: HDFS-16032
 URL: https://issues.apache.org/jira/browse/HDFS-16032
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hadoop-client, hdfs
Affects Versions: 3.4.0
Reporter: zhu


Currently, HDFS can only move deleted data to the Trash through shell commands. In practice, most data is deleted through the DFSClient API, so I think the API should support the Trash as well.
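
As a rough illustration of the gap being described (this is not the proposed patch), a client that wants shell-like behaviour today has to wrap its own deletes with the public org.apache.hadoop.fs.Trash helper; the wrapper name deleteWithTrash below is made up:

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class TrashAwareDelete {
  /** Move the path to the current user's trash if possible, else delete it. */
  public static boolean deleteWithTrash(FileSystem fs, Path path, Configuration conf)
      throws IOException {
    // moveToAppropriateTrash returns false when the move did not happen
    // (for example when trash is disabled via fs.trash.interval=0);
    // fall back to a plain delete in that case.
    if (Trash.moveToAppropriateTrash(fs, path, conf)) {
      return true;
    }
    return fs.delete(path, true /* recursive */);
  }
}
{code}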



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16026) Restore cross platform mkstemp

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16026?focusedWorklogId=600227&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600227
 ]

ASF GitHub Bot logged work on HDFS-16026:
-

Author: ASF GitHub Bot
Created on: 21/May/21 08:21
Start Date: 21/May/21 08:21
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3014:
URL: https://github.com/apache/hadoop/pull/3014#issuecomment-845762260


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3014/22/console in 
case of problems.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600227)
Time Spent: 4h 10m  (was: 4h)

> Restore cross platform mkstemp
> --
>
> Key: HDFS-16026
> URL: https://issues.apache.org/jira/browse/HDFS-16026
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16026) Restore cross platform mkstemp

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16026?focusedWorklogId=600224&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600224
 ]

ASF GitHub Bot logged work on HDFS-16026:
-

Author: ASF GitHub Bot
Created on: 21/May/21 08:06
Start Date: 21/May/21 08:06
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3014:
URL: https://github.com/apache/hadoop/pull/3014#issuecomment-845748116


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 27s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   2m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  cc  |   2m 58s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 58s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 58s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  cc  |   2m 54s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 54s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 54s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -1 :x: |  hadolint  |   0m  4s | 
[/results-hadolint.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3014/20/artifact/out/results-hadolint.txt)
 |  The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 21s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  32m 26s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 110m  5s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3014/20/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3014 |
   | Optional Tests | dupname asflicense codespell shellcheck shelldocs 
hadolint compile cc mvnsite javac unit golang |
   | uname | Linux 334ded7dc174 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 769411e0935ad6547b0e4573446fecf7034fb7e9 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3014/20/testReport/ |
   | Max. process+thread count | 544 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3014/20/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 
hadolint=1.11.1-0-g0e692dd |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an a

[jira] [Work logged] (HDFS-16024) RBF: Rename data to the Trash should be based on src locations

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16024?focusedWorklogId=600223&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600223
 ]

ASF GitHub Bot logged work on HDFS-16024:
-

Author: ASF GitHub Bot
Created on: 21/May/21 08:04
Start Date: 21/May/21 08:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3009:
URL: https://github.com/apache/hadoop/pull/3009#issuecomment-845747064


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 13s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3009/12/artifact/out/blanks-eol.txt)
 |  The patch has 5 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 37s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  24m 47s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 111m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3009/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3009 |
   | JIRA Issue | HDFS-16024 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux d7e45bea4a4e 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b0f5d74bc4c9aa36988b2bc5baeec8ec6834c079 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3009/12/testReport/ |
   | Max. process+thread count | 2274 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Con

[jira] [Commented] (HDFS-16024) RBF: Rename data to the Trash should be based on src locations

2021-05-21 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17349063#comment-17349063
 ] 

Hadoop QA commented on HDFS-16024:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
6s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} codespell {color} | {color:blue}  0m  
1s{color} |  | {color:blue} codespell was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch does not contain any @author tags. 
{color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch appears to include 2 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 36m 
13s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  1m 
34s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 39s{color} |  | {color:green} branch has no errors when building and 
testing our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} |  | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} blanks {color} | {color:red}  0m  
0s{color} | 
[/blanks-eol.txt|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3009/12/artifact/out/blanks-eol.txt]
 | {color:red} The patch has 5 line(s) that end in blanks. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  1m 
37s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 45s{color} |  | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:green}+1{color} | {color:green} unit {color} | {color:gre

[jira] [Work logged] (HDFS-16024) RBF: Rename data to the Trash should be based on src locations

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16024?focusedWorklogId=600220&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600220
 ]

ASF GitHub Bot logged work on HDFS-16024:
-

Author: ASF GitHub Bot
Created on: 21/May/21 07:53
Start Date: 21/May/21 07:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3009:
URL: https://github.com/apache/hadoop/pull/3009#issuecomment-845740417






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600220)
Time Spent: 5.5h  (was: 5h 20m)

> RBF: Rename data to the Trash should be based on src locations
> --
>
> Key: HDFS-16024
> URL: https://issues.apache.org/jira/browse/HDFS-16024
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: zhu
>Assignee: zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> 1. When data is deleted to the Trash and no mount point is configured for the Trash, the Router should recognize this and still move the data to the Trash.
> 2. When the user's trash directory is configured with a mount point whose namespace differs from that of the deleted directory, the Router should detect this and move the data to the current user's trash in the namespace of the src path.
> The same is true when using ViewFs mount points, and I think we should stay consistent with that.
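
A small sketch of the point above (not the RBF patch itself): the trash destination is derived from the source path, so the rename should stay in the namespace that actually holds the data. The helper name and the hard-coded default trash layout (/user/<user>/.Trash/Current/<original path>) are illustrative assumptions.

{code:java}
import org.apache.hadoop.fs.Path;

public class TrashPathDemo {
  // Default HDFS trash layout: /user/<user>/.Trash/Current/<original path>
  static Path trashDestinationFor(Path src, String user) {
    return new Path("/user/" + user + "/.Trash/Current" + src.toUri().getPath());
  }

  public static void main(String[] args) {
    Path src = new Path("/projects/etl/part-0000");
    System.out.println(trashDestinationFor(src, "alice"));
    // -> /user/alice/.Trash/Current/projects/etl/part-0000
    // The Router should resolve this destination against the same subcluster
    // (namespace) that src resolves to, which is what this JIRA proposes.
  }
}
{code}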



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16024) RBF: Rename data to the Trash should be based on src locations

2021-05-21 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17349055#comment-17349055
 ] 

Hadoop QA commented on HDFS-16024:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} codespell {color} | {color:blue}  0m  
1s{color} |  | {color:blue} codespell was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch does not contain any @author tags. 
{color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch appears to include 2 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 35m 
 6s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  1m 
18s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 22s{color} |  | {color:green} branch has no errors when building and 
testing our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} blanks {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch has no blanks issues. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  1m 
15s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 59s{color} |  | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
22s{color} |  | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} |  | {color:gree

[jira] [Commented] (HDFS-16024) RBF: Rename data to the Trash should be based on src locations

2021-05-21 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17349056#comment-17349056
 ] 

Hadoop QA commented on HDFS-16024:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} codespell {color} | {color:blue}  0m  
0s{color} |  | {color:blue} codespell was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch does not contain any @author tags. 
{color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch appears to include 2 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 36m 
 4s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  1m 
19s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  8s{color} |  | {color:green} branch has no errors when building and 
testing our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} blanks {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch has no blanks issues. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  1m 
21s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 17s{color} |  | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
49s{color} |  | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} |  | {color:gree