[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782585&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782585
 ]

ASF GitHub Bot logged work on HDFS-16634:
-

Author: ASF GitHub Bot
Created on: 18/Jun/22 05:16
Start Date: 18/Jun/22 05:16
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on code in PR #4448:
URL: https://github.com/apache/hadoop/pull/4448#discussion_r900699519


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SlowPeerTracker.java:
##
@@ -80,7 +80,7 @@ public class SlowPeerTracker {
* Number of nodes to include in JSON report. We will return nodes with
* the highest number of votes from peers.
*/
-  private final int maxNodesToReport;
+  private int maxNodesToReport;

Review Comment:
   Yeah, I think it's fine; this is not a big concern either way. So let me 
make this change.
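
   A minimal sketch of the change being agreed on here, assuming the intent is 
safe publication of the new value from the reconfiguration thread to JMX 
readers; only the field name comes from the PR, the class and accessors are 
hypothetical:
   ```java
   // Hypothetical holder class; only maxNodesToReport is from the PR.
   public class SlowPeerReportHolder {
     // volatile: a write by the admin reconfiguration thread becomes visible
     // to threads serving the JMX report without locking the hot read path.
     private volatile int maxNodesToReport;

     public SlowPeerReportHolder(int initialMax) {
       this.maxNodesToReport = initialMax;
     }

     /** Called from the reconfiguration handler. */
     public void setMaxNodesToReport(int newMax) {
       this.maxNodesToReport = newMax;
     }

     /** Called on every JMX report; a plain volatile read. */
     public int getMaxNodesToReport() {
       return maxNodesToReport;
     }
   }
   ```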





Issue Time Tracking
---

Worklog Id: (was: 782585)
Time Spent: 1h  (was: 50m)

> Dynamically adjust slow peer report size on JMX metrics
> ---
>
> Key: HDFS-16634
> URL: https://issues.apache.org/jira/browse/HDFS-16634
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> On a busy cluster, it can sometimes take a while for the "slow node report" 
> of a node that has been removed from the cluster to disappear from the slow 
> peer JSON report in the Namenode JMX metrics. In the meantime, the user 
> should be able to browse more entries in the report by reconfiguring 
> "dfs.datanode.max.nodes.to.report", so that the list size can be adjusted 
> without having to bounce the active Namenode just for this purpose.
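
A self-contained sketch of the semantics behind this knob, assuming (per the 
SlowPeerTracker javadoc quoted in the PR) that the report keeps the nodes with 
the most peer votes; SlowNode and topSlowNodes are hypothetical names, not 
Hadoop API. Once the key is reconfigurable, the new value would typically be 
applied with "hdfs dfsadmin -reconfig namenode <host:ipc_port> start" rather 
than a restart.
{code:java}
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical illustration (Java 16+): cap the slow-peer report at the
// N nodes with the highest peer vote counts.
public final class SlowPeerReportSketch {
  record SlowNode(String name, int votes) {}

  static List<SlowNode> topSlowNodes(Map<String, Integer> voteCounts,
                                     int maxNodesToReport) {
    return voteCounts.entrySet().stream()
        .sorted(Map.Entry.<String, Integer>comparingByValue(
            Comparator.reverseOrder()))
        .limit(maxNodesToReport)  // the reconfigurable cap
        .map(e -> new SlowNode(e.getKey(), e.getValue()))
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    // With the cap at 2, only dn2 (9 votes) and dn1 (5 votes) are reported.
    System.out.println(topSlowNodes(Map.of("dn1", 5, "dn2", 9, "dn3", 2), 2));
  }
}
{code}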



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782584&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782584
 ]

ASF GitHub Bot logged work on HDFS-16634:
-

Author: ASF GitHub Bot
Created on: 18/Jun/22 05:05
Start Date: 18/Jun/22 05:05
Worklog Time Spent: 10m 
  Work Description: tomscut commented on code in PR #4448:
URL: https://github.com/apache/hadoop/pull/4448#discussion_r900698693


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SlowPeerTracker.java:
##
@@ -80,7 +80,7 @@ public class SlowPeerTracker {
* Number of nodes to include in JSON report. We will return nodes with
* the highest number of votes from peers.
*/
-  private final int maxNodesToReport;
+  private int maxNodesToReport;

Review Comment:
   This field is almost always only read and rarely changed, and for read-only 
operations the change doesn't have much impact. WDYT?





Issue Time Tracking
---

Worklog Id: (was: 782584)
Time Spent: 50m  (was: 40m)

> Dynamically adjust slow peer report size on JMX metrics
> ---
>
> Key: HDFS-16634
> URL: https://issues.apache.org/jira/browse/HDFS-16634
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> On a busy cluster, it can sometimes take a while for the "slow node report" 
> of a node that has been removed from the cluster to disappear from the slow 
> peer JSON report in the Namenode JMX metrics. In the meantime, the user 
> should be able to browse more entries in the report by reconfiguring 
> "dfs.datanode.max.nodes.to.report", so that the list size can be adjusted 
> without having to bounce the active Namenode just for this purpose.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16633) Reserved Space For Replicas is not released on some cases

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16633?focusedWorklogId=782580&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782580
 ]

ASF GitHub Bot logged work on HDFS-16633:
-

Author: ASF GitHub Bot
Created on: 18/Jun/22 04:25
Start Date: 18/Jun/22 04:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4452:
URL: https://github.com/apache/hadoop/pull/4452#issuecomment-1159358712

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 11s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 59s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  2s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 35s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4452/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  29m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 43s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  5s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4452/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 76 unchanged - 
0 fixed = 77 total (was 76)  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m  4s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4452/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  spotbugs  |   3m 55s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4452/2/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html)
 |  hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  29m 12s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 353m 25s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4452/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 483m 41s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | 

[jira] [Work logged] (HDFS-16631) Enable dfs.datanode.lockmanager.trace In Test

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16631?focusedWorklogId=782563&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782563
 ]

ASF GitHub Bot logged work on HDFS-16631:
-

Author: ASF GitHub Bot
Created on: 18/Jun/22 02:40
Start Date: 18/Jun/22 02:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4438:
URL: https://github.com/apache/hadoop/pull/4438#issuecomment-1159345763

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  69m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  shadedclient  |  22m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 26s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  99m 33s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4438/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4438 |
   | Optional Tests | dupname asflicense unit codespell detsecrets xmllint |
   | uname | Linux f79c1a23757e 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5dac9ee7e1a8bd62849eba4ec2813f5f8921bb87 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4438/4/testReport/ |
   | Max. process+thread count | 524 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4438/4/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 782563)
Time Spent: 2.5h  (was: 2h 20m)

> Enable dfs.datanode.lockmanager.trace In Test
> -
>
> Key: HDFS-16631
> URL: https://issues.apache.org/jira/browse/HDFS-16631
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2022-06-18-09-49-28-725.png
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> In HDFS-16600 ("Fix deadlock on DataNode side") we discussed a deadlock 
> issue. It was a very meaningful discussion, and while reading the log I 
> found the following:
> {code:java}
> 2022-05-27 07:39:47,890 [Listener at localhost/36941] WARN 
> datanode.DataSetLockManager (DataSetLockManager.java:lockLeakCheck(261)) -
>  not open lock leak check func.{code}
> Looking at the code, I found the following parameter:
> {code:java}
> <property>
>   <name>dfs.datanode.lockmanager.trace</name>
>   <value>false</value>
>   <description>
>     If this is true, after shut down datanode lock Manager will print all leak
>     thread that not release by lock Manager. Only used for test or trace dead lock
>     problem. In produce default set false, because it's have little performance loss.
>   </description>
> </property>{code}
> I think this parameter should be enabled in the test environment, so that if 
> there is a DN deadlock, the cause can be located quickly.
> Based on the suggestions, the following modifications are made:
> 1. On the read- and write-lock related methods of DataSetLockManager, add 
> the operation name to clearly indicate the source of each lock, which is 
> convenient for common use.
> 2. Increase the granularity of metric monitoring, including the number of 
> locks, lock hold times, and early warnings for locks.

[jira] [Work logged] (HDFS-16631) Enable dfs.datanode.lockmanager.trace In Test

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16631?focusedWorklogId=782562&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782562
 ]

ASF GitHub Bot logged work on HDFS-16631:
-

Author: ASF GitHub Bot
Created on: 18/Jun/22 02:00
Start Date: 18/Jun/22 02:00
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4438:
URL: https://github.com/apache/hadoop/pull/4438#issuecomment-1159337745

   
   readLock
   ```
   getVolume(final ExtendedBlock b)
   getStoredBlock(String bpid, long blkid)
   Set<? extends Replica> deepCopyReplica(String bpid)
   getBlockInputStream(ExtendedBlock b, long seekOffset)
   moveBlockAcrossStorage(ExtendedBlock block, StorageType targetStorageType, 
String targetStorageId)
   moveBlockAcrossVolumes(ExtendedBlock block, FsVolumeSpi destination)
   ReplicaHandler createRbw(StorageType storageType, String storageId, 
ExtendedBlock b, boolean allowLazyPersist)
   Map<DatanodeStorage, BlockListAsLongs> getBlockReports(String bpid)
   public List<ReplicaInfo> getFinalizedBlocks(String bpid)
   public boolean contains(final ExtendedBlock block)
   public String getReplicaString(String bpid, long blockId)
   public long getReplicaVisibleLength(final ExtendedBlock block)
   public BlockLocalPathInfo getBlockLocalPathInfo(ExtendedBlock block)
   ```
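
   A hedged illustration of what tagging these read-lock call sites with an 
operation name could look like: an AutoCloseable handle that releases the lock 
and reports the hold time. NamedLocks and its methods are hypothetical, not 
the DataSetLockManager API.
   ```java
   import java.util.concurrent.locks.ReentrantReadWriteLock;

   public final class NamedLocks {
     private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

     /** Acquires the read lock; the handle releases it and logs the hold time. */
     public AutoCloseable readLock(String opName) {
       rw.readLock().lock();
       long acquiredAt = System.nanoTime();
       return () -> {
         rw.readLock().unlock();
         long heldMs = (System.nanoTime() - acquiredAt) / 1_000_000;
         System.out.printf("op=%s heldMs=%d%n", opName, heldMs); // metrics hook
       };
     }

     public static void main(String[] args) throws Exception {
       NamedLocks locks = new NamedLocks();
       try (AutoCloseable ignored = locks.readLock("getStoredBlock")) {
         // guarded read, e.g. looking up a replica
       }
     }
   }
   ```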




Issue Time Tracking
---

Worklog Id: (was: 782562)
Time Spent: 2h 20m  (was: 2h 10m)

> Enable dfs.datanode.lockmanager.trace In Test
> -
>
> Key: HDFS-16631
> URL: https://issues.apache.org/jira/browse/HDFS-16631
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2022-06-18-09-49-28-725.png
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> In HDFS-16600 ("Fix deadlock on DataNode side") we discussed a deadlock 
> issue. It was a very meaningful discussion, and while reading the log I 
> found the following:
> {code:java}
> 2022-05-27 07:39:47,890 [Listener at localhost/36941] WARN 
> datanode.DataSetLockManager (DataSetLockManager.java:lockLeakCheck(261)) -
>  not open lock leak check func.{code}
> Looking at the code, I found the following parameter:
> {code:java}
> <property>
>   <name>dfs.datanode.lockmanager.trace</name>
>   <value>false</value>
>   <description>
>     If this is true, after shut down datanode lock Manager will print all leak
>     thread that not release by lock Manager. Only used for test or trace dead lock
>     problem. In produce default set false, because it's have little performance loss.
>   </description>
> </property>{code}
> I think this parameter should be enabled in the test environment, so that if 
> there is a DN deadlock, the cause can be located quickly.
> Based on the suggestions, the following modifications are made:
> 1. On the read- and write-lock related methods of DataSetLockManager, add 
> the operation name to clearly indicate the source of each lock, which is 
> convenient for common use.
> 2. Increase the granularity of metric monitoring, including the number of 
> locks, lock hold times, and early warnings for locks.
>  
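
To make the trace option concrete, a minimal hypothetical sketch of 
shutdown-time lock-leak checking: record each acquire, erase it on release, 
and print whatever is left when the manager shuts down. None of these names 
are the real DataSetLockManager API.
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public final class LockLeakChecker {
  private final boolean traceEnabled;
  private final AtomicLong nextId = new AtomicLong();
  // acquisition id -> "threadName/opName", recorded at lock time
  private final Map<Long, String> outstanding = new ConcurrentHashMap<>();

  public LockLeakChecker(boolean traceEnabled) {
    this.traceEnabled = traceEnabled;
  }

  public long recordAcquire(String opName) {
    if (!traceEnabled) {
      return -1;
    }
    long id = nextId.getAndIncrement();
    outstanding.put(id, Thread.currentThread().getName() + "/" + opName);
    return id;
  }

  public void recordRelease(long id) {
    if (traceEnabled) {
      outstanding.remove(id);
    }
  }

  /** Called once at shutdown; prints every acquire without a matching release. */
  public void lockLeakCheck() {
    if (!traceEnabled) {
      System.out.println("not open lock leak check func."); // mirrors the WARN above
      return;
    }
    outstanding.values().forEach(holder -> System.out.println("LEAK: " + holder));
  }
}
{code}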



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16631) Enable dfs.datanode.lockmanager.trace In Test

2022-06-17 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16631:
-
Attachment: image-2022-06-18-09-49-28-725.png

> Enable dfs.datanode.lockmanager.trace In Test
> -
>
> Key: HDFS-16631
> URL: https://issues.apache.org/jira/browse/HDFS-16631
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2022-06-18-09-49-28-725.png
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> In HDFS-16600 ("Fix deadlock on DataNode side") we discussed a deadlock 
> issue. It was a very meaningful discussion, and while reading the log I 
> found the following:
> {code:java}
> 2022-05-27 07:39:47,890 [Listener at localhost/36941] WARN 
> datanode.DataSetLockManager (DataSetLockManager.java:lockLeakCheck(261)) -
>  not open lock leak check func.{code}
> Looking at the code, I found the following parameter:
> {code:java}
> <property>
>   <name>dfs.datanode.lockmanager.trace</name>
>   <value>false</value>
>   <description>
>     If this is true, after shut down datanode lock Manager will print all leak
>     thread that not release by lock Manager. Only used for test or trace dead lock
>     problem. In produce default set false, because it's have little performance loss.
>   </description>
> </property>{code}
> I think this parameter should be enabled in the test environment, so that if 
> there is a DN deadlock, the cause can be located quickly.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16631) Enable dfs.datanode.lockmanager.trace In Test

2022-06-17 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun updated HDFS-16631:
-
Description: 
In HDFS-16600 ("Fix deadlock on DataNode side") we discussed a deadlock issue. 
It was a very meaningful discussion, and while reading the log I found the 
following:
{code:java}
2022-05-27 07:39:47,890 [Listener at localhost/36941] WARN 
datanode.DataSetLockManager (DataSetLockManager.java:lockLeakCheck(261)) -
 not open lock leak check func.{code}
Looking at the code, I found the following parameter:
{code:java}
<property>
  <name>dfs.datanode.lockmanager.trace</name>
  <value>false</value>
  <description>
    If this is true, after shut down datanode lock Manager will print all leak
    thread that not release by lock Manager. Only used for test or trace dead lock
    problem. In produce default set false, because it's have little performance loss.
  </description>
</property>{code}
I think this parameter should be enabled in the test environment, so that if 
there is a DN deadlock, the cause can be located quickly.

Based on the suggestions, the following modifications are made:

1. On the read- and write-lock related methods of DataSetLockManager, add the 
operation name to clearly indicate the source of each lock, which is 
convenient for common use.
2. Increase the granularity of metric monitoring, including the number of 
locks, lock hold times, and early warnings for locks.

 

  was:
In HDFS-16600 ("Fix deadlock on DataNode side") we discussed a deadlock issue. 
It was a very meaningful discussion, and while reading the log I found the 
following:
{code:java}
2022-05-27 07:39:47,890 [Listener at localhost/36941] WARN 
datanode.DataSetLockManager (DataSetLockManager.java:lockLeakCheck(261)) -
 not open lock leak check func.{code}
Looking at the code, I found the following parameter:
{code:java}
<property>
  <name>dfs.datanode.lockmanager.trace</name>
  <value>false</value>
  <description>
    If this is true, after shut down datanode lock Manager will print all leak
    thread that not release by lock Manager. Only used for test or trace dead lock
    problem. In produce default set false, because it's have little performance loss.
  </description>
</property>{code}
I think this parameter should be enabled in the test environment, so that if 
there is a DN deadlock, the cause can be located quickly.

 


> Enable dfs.datanode.lockmanager.trace In Test
> -
>
> Key: HDFS-16631
> URL: https://issues.apache.org/jira/browse/HDFS-16631
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2022-06-18-09-49-28-725.png
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> In HDFS-16600 ("Fix deadlock on DataNode side") we discussed a deadlock 
> issue. It was a very meaningful discussion, and while reading the log I 
> found the following:
> {code:java}
> 2022-05-27 07:39:47,890 [Listener at localhost/36941] WARN 
> datanode.DataSetLockManager (DataSetLockManager.java:lockLeakCheck(261)) -
>  not open lock leak check func.{code}
> Looking at the code, I found the following parameter:
> {code:java}
> <property>
>   <name>dfs.datanode.lockmanager.trace</name>
>   <value>false</value>
>   <description>
>     If this is true, after shut down datanode lock Manager will print all leak
>     thread that not release by lock Manager. Only used for test or trace dead lock
>     problem. In produce default set false, because it's have little performance loss.
>   </description>
> </property>{code}
> I think this parameter should be enabled in the test environment, so that if 
> there is a DN deadlock, the cause can be located quickly.
> Based on the suggestions, the following modifications are made:
> 1. On the read- and write-lock related methods of DataSetLockManager, add 
> the operation name to clearly indicate the source of each lock, which is 
> convenient for common use.
> 2. Increase the granularity of metric monitoring, including the number of 
> locks, lock hold times, and early warnings for locks.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782561&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782561
 ]

ASF GitHub Bot logged work on HDFS-16634:
-

Author: ASF GitHub Bot
Created on: 18/Jun/22 01:47
Start Date: 18/Jun/22 01:47
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on code in PR #4448:
URL: https://github.com/apache/hadoop/pull/4448#discussion_r900656012


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SlowPeerTracker.java:
##
@@ -80,7 +80,7 @@ public class SlowPeerTracker {
* Number of nodes to include in JSON report. We will return nodes with
* the highest number of votes from peers.
*/
-  private final int maxNodesToReport;
+  private int maxNodesToReport;

Review Comment:
   Yes, you have made a good point @tomscut. This can be done to stay in line 
with other reconfig changes; however, it might cause a bit of a performance 
issue for the JMX metrics API overall, hence I was a bit reluctant to make the 
change. But if you have a strong preference, I can make it. WDYT?





Issue Time Tracking
---

Worklog Id: (was: 782561)
Time Spent: 40m  (was: 0.5h)

> Dynamically adjust slow peer report size on JMX metrics
> ---
>
> Key: HDFS-16634
> URL: https://issues.apache.org/jira/browse/HDFS-16634
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> On a busy cluster, it can sometimes take a while for the "slow node report" 
> of a node that has been removed from the cluster to disappear from the slow 
> peer JSON report in the Namenode JMX metrics. In the meantime, the user 
> should be able to browse more entries in the report by reconfiguring 
> "dfs.datanode.max.nodes.to.report", so that the list size can be adjusted 
> without having to bounce the active Namenode just for this purpose.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16633) Reserved Space For Replicas is not released on some cases

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16633?focusedWorklogId=782553&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782553
 ]

ASF GitHub Bot logged work on HDFS-16633:
-

Author: ASF GitHub Bot
Created on: 18/Jun/22 00:24
Start Date: 18/Jun/22 00:24
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4452:
URL: https://github.com/apache/hadoop/pull/4452#issuecomment-1159316943

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 42s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 20s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4452/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 58s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4452/1/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  spotbugs  |   3m 35s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4452/1/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html)
 |  hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  25m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 353m 14s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4452/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 18s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 470m 45s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  instanceof will always return true for all non-null values in 

[jira] [Work logged] (HDFS-16591) StateStoreZooKeeper fails to initialize

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16591?focusedWorklogId=782542&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782542
 ]

ASF GitHub Bot logged work on HDFS-16591:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 22:52
Start Date: 17/Jun/22 22:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4447:
URL: https://github.com/apache/hadoop/pull/4447#issuecomment-1159289820

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 28s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  7s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 21s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  6s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  8s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 47s |  |  hadoop-common-project: 
The patch generated 0 new + 203 unchanged - 2 fixed = 203 total (was 205)  |
   | +1 :green_heart: |  mvnsite  |   4m 43s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   4m 11s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   4m  2s |  |  hadoop-auth in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  18m 33s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 58s |  |  hadoop-registry in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 247m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4447/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4447 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 005c31bc321f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1336ceeb0c079aeaaf119433d57afe442f677b02 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 

[jira] [Work logged] (HDFS-16619) Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16619?focusedWorklogId=782517&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782517
 ]

ASF GitHub Bot logged work on HDFS-16619:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 19:00
Start Date: 17/Jun/22 19:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4406:
URL: https://github.com/apache/hadoop/pull/4406#issuecomment-1159153518

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 43s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 24s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4406/7/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 27s |  |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 
911 unchanged - 26 fixed = 911 total (was 937)  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 21s |  |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 0 new 
+ 890 unchanged - 26 fixed = 890 total (was 916)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 57s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4406/7/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 24s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 253m 14s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4406/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  2s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 365m  4s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.cli.TestHDFSCLI |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | 

[jira] [Work logged] (HDFS-16591) StateStoreZooKeeper fails to initialize

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16591?focusedWorklogId=782516&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782516
 ]

ASF GitHub Bot logged work on HDFS-16591:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 18:56
Start Date: 17/Jun/22 18:56
Worklog Time Spent: 10m 
  Work Description: hchaverri commented on code in PR #4447:
URL: https://github.com/apache/hadoop/pull/4447#discussion_r900432976


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java:
##
@@ -52,6 +52,7 @@
 import org.apache.hadoop.classification.InterfaceStability.Unstable;

Review Comment:
   Thanks. I've removed these and other unused imports in RegistrySecurity



##
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/JaasConfiguration.java:
##
@@ -0,0 +1,77 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.util;
+
+import java.util.HashMap;
+import java.util.Map;
+import javax.security.auth.login.AppConfigurationEntry;
+import javax.security.auth.login.Configuration;
+
+
+/**
+ * Creates a programmatic version of a jaas.conf file. This can be used
+ * instead of writing a jaas.conf file and setting the system property,
+ * "java.security.auth.login.config", to point to that file. It is meant to be
+ * used for connecting to ZooKeeper.
+ */
+public class JaasConfiguration extends Configuration {
+
+  private final javax.security.auth.login.Configuration baseConfig =
+  javax.security.auth.login.Configuration.getConfiguration();
+  private static AppConfigurationEntry[] entry;

Review Comment:
   Not sure why it's been declared static in all the duplicate classes, but I 
agree it should be non-static



##
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/JaasConfiguration.java:
##
@@ -0,0 +1,77 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.util;
+
+import java.util.HashMap;
+import java.util.Map;
+import javax.security.auth.login.AppConfigurationEntry;
+import javax.security.auth.login.Configuration;
+
+
+/**
+ * Creates a programmatic version of a jaas.conf file. This can be used
+ * instead of writing a jaas.conf file and setting the system property,
+ * "java.security.auth.login.config", to point to that file. It is meant to be
+ * used for connecting to ZooKeeper.
+ */
+public class JaasConfiguration extends Configuration {
+
+  private final javax.security.auth.login.Configuration baseConfig =
+  javax.security.auth.login.Configuration.getConfiguration();
+  private static AppConfigurationEntry[] entry;

Review Comment:
   This method is being overridden from javax.security.auth.login.Configuration, 
so its naming/signature can't be changed. The method is expected to return a 
single element if a match with the same name is found; it can still return 
multiple entries if there is a match in the baseConfig. I'll add a comment to 
clarify.
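
   A hedged, self-contained sketch of the pattern under discussion: a 
programmatic JAAS Configuration that returns a one-element array for its own 
entry name and otherwise delegates to the previously installed configuration. 
It mirrors the class under review but is illustrative, not the patch itself; 
the Krb5LoginModule options shown are typical, not quoted from it.
   ```java
   import java.util.HashMap;
   import java.util.Map;
   import javax.security.auth.login.AppConfigurationEntry;
   import javax.security.auth.login.Configuration;

   public class ZkJaasConfigurationSketch extends Configuration {
     private final Configuration baseConfig = Configuration.getConfiguration();
     private final String entryName;
     private final AppConfigurationEntry[] entry; // always one element

     public ZkJaasConfigurationSketch(String entryName, String principal,
                                      String keytab) {
       this.entryName = entryName;
       Map<String, String> options = new HashMap<>();
       options.put("keyTab", keytab);
       options.put("principal", principal);
       options.put("useKeyTab", "true");
       options.put("storeKey", "true");
       this.entry = new AppConfigurationEntry[] {
           new AppConfigurationEntry(
               "com.sun.security.auth.module.Krb5LoginModule",
               AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
               options)};
     }

     @Override
     public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
       if (entryName.equals(name)) {
         return entry; // single-element answer for our entry
       }
       // Fall back to the base configuration, which may return null or
       // multiple entries.
       return (baseConfig != null)
           ? baseConfig.getAppConfigurationEntry(name) : null;
     }
   }
   ```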





Issue Time Tracking
---

Worklog Id: (was: 782516)
Time Spent: 1h  (was: 50m)

> StateStoreZooKeeper fails to initialize
> ---
>
> Key: HDFS-16591
> URL: https://issues.apache.org/jira/browse/HDFS-16591
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> MembershipStore and MountTableStore are failing to initialize, logging the 
> 

[jira] [Work logged] (HDFS-16619) Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16619?focusedWorklogId=782515&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782515
 ]

ASF GitHub Bot logged work on HDFS-16619:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 18:52
Start Date: 17/Jun/22 18:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4406:
URL: https://github.com/apache/hadoop/pull/4406#issuecomment-1159149251

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 41s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 47s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 21s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4406/6/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 27s |  |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 
911 unchanged - 26 fixed = 911 total (was 937)  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 20s |  |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 0 new 
+ 890 unchanged - 26 fixed = 890 total (was 916)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 57s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4406/6/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 249m 27s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4406/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 359m 49s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.cli.TestHDFSCLI |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | 

[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782508&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782508
 ]

ASF GitHub Bot logged work on HDFS-16634:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 18:42
Start Date: 17/Jun/22 18:42
Worklog Time Spent: 10m 
  Work Description: tomscut commented on code in PR #4448:
URL: https://github.com/apache/hadoop/pull/4448#discussion_r900424477


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SlowPeerTracker.java:
##
@@ -80,7 +80,7 @@ public class SlowPeerTracker {
* Number of nodes to include in JSON report. We will return nodes with
* the highest number of votes from peers.
*/
-  private final int maxNodesToReport;
+  private int maxNodesToReport;

Review Comment:
   Please set this to `volatile`. Although it doesn't make a big difference 
here, I think it's better to be consistent with other reconfig changes. What do 
you think of this?





Issue Time Tracking
---

Worklog Id: (was: 782508)
Time Spent: 0.5h  (was: 20m)

> Dynamically adjust slow peer report size on JMX metrics
> ---
>
> Key: HDFS-16634
> URL: https://issues.apache.org/jira/browse/HDFS-16634
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> On a busy cluster, it can sometimes take a while for the "slow node report" 
> of a node that has been removed from the cluster to disappear from the slow 
> peer JSON report in the Namenode JMX metrics. In the meantime, the user 
> should be able to browse more entries in the report by reconfiguring 
> "dfs.datanode.max.nodes.to.report", so that the list size can be adjusted 
> without having to bounce the active Namenode just for this purpose.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16591) StateStoreZooKeeper fails to initialize

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16591?focusedWorklogId=782492&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782492
 ]

ASF GitHub Bot logged work on HDFS-16591:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 17:57
Start Date: 17/Jun/22 17:57
Worklog Time Spent: 10m 
  Work Description: simbadzina commented on code in PR #4447:
URL: https://github.com/apache/hadoop/pull/4447#discussion_r900396075


##
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/JaasConfiguration.java:
##
@@ -0,0 +1,77 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.util;
+
+import java.util.HashMap;
+import java.util.Map;
+import javax.security.auth.login.AppConfigurationEntry;
+import javax.security.auth.login.Configuration;
+
+
+/**
+ * Creates a programmatic version of a jaas.conf file. This can be used
+ * instead of writing a jaas.conf file and setting the system property,
+ * "java.security.auth.login.config", to point to that file. It is meant to be
+ * used for connecting to ZooKeeper.
+ */
+public class JaasConfiguration extends Configuration {
+
+  private final javax.security.auth.login.Configuration baseConfig =
+  javax.security.auth.login.Configuration.getConfiguration();
+  private static AppConfigurationEntry[] entry;

Review Comment:
   Can you add a comment that this array will only ever contain one element? 
   
   The naming and signature of **_AppConfigurationEntry[] 
getAppConfigurationEntry(String name)_** is confusing, but that's used in too 
many places to change as part of this RB.





Issue Time Tracking
---

Worklog Id: (was: 782492)
Time Spent: 50m  (was: 40m)

> StateStoreZooKeeper fails to initialize
> ---
>
> Key: HDFS-16591
> URL: https://issues.apache.org/jira/browse/HDFS-16591
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> MembershipStore and MountTableStore are failing to initialize, logging the 
> following errors on the Router logs:
> {noformat}
> 2022-05-23 16:43:01,156 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService: 
> Cannot get version for class 
> org.apache.hadoop.hdfs.server.federation.store.MembershipStore
> org.apache.hadoop.hdfs.server.federation.store.StateStoreUnavailableException:
>  Cached State Store not initialized, MembershipState records not valid
>   at 
> org.apache.hadoop.hdfs.server.federation.store.CachedRecordStore.checkCacheAvailable(CachedRecordStore.java:106)
>   at 
> org.apache.hadoop.hdfs.server.federation.store.CachedRecordStore.getCachedRecords(CachedRecordStore.java:227)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.getStateStoreVersion(RouterHeartbeatService.java:131)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.updateStateStore(RouterHeartbeatService.java:92)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.periodicInvoke(RouterHeartbeatService.java:159)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748){noformat}
> After investigating, we noticed that ZKDelegationTokenSecretManager normally 
> initializes properties for ZooKeeper clients to connect using SASL/Kerberos. 
> If 

[jira] [Work logged] (HDFS-16591) StateStoreZooKeeper fails to initialize

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16591?focusedWorklogId=782489&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782489
 ]

ASF GitHub Bot logged work on HDFS-16591:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 17:52
Start Date: 17/Jun/22 17:52
Worklog Time Spent: 10m 
  Work Description: simbadzina commented on code in PR #4447:
URL: https://github.com/apache/hadoop/pull/4447#discussion_r900392911


##
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/JaasConfiguration.java:
##
@@ -0,0 +1,77 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License. See accompanying LICENSE file.
+ */
+package org.apache.hadoop.security.authentication.util;
+
+import java.util.HashMap;
+import java.util.Map;
+import javax.security.auth.login.AppConfigurationEntry;
+import javax.security.auth.login.Configuration;
+
+
+/**
+ * Creates a programmatic version of a jaas.conf file. This can be used
+ * instead of writing a jaas.conf file and setting the system property,
+ * "java.security.auth.login.config", to point to that file. It is meant to be
+ * used for connecting to ZooKeeper.
+ */
+public class JaasConfiguration extends Configuration {
+
+  private final javax.security.auth.login.Configuration baseConfig =
+  javax.security.auth.login.Configuration.getConfiguration();
+  private static AppConfigurationEntry[] entry;

Review Comment:
   Can we make this non-static?
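
   A sketch of the non-static variant, with the array built per instance in
   the constructor (the constructor shape here is an assumption based on the
   snippet above):

{code:java}
public class JaasConfiguration extends Configuration {

  private final javax.security.auth.login.Configuration baseConfig =
      javax.security.auth.login.Configuration.getConfiguration();
  // Instance field instead of static: each JaasConfiguration now owns its own
  // single-entry array, so two instances cannot clobber each other.
  private final AppConfigurationEntry[] entry;
  private final String entryName;

  public JaasConfiguration(String entryName, String principal, String keytab) {
    this.entryName = entryName;
    Map<String, String> options = new HashMap<>();
    options.put("keyTab", keytab);
    options.put("principal", principal);
    options.put("useKeyTab", "true");
    options.put("storeKey", "true");
    this.entry = new AppConfigurationEntry[] {
        new AppConfigurationEntry(
            "com.sun.security.auth.module.Krb5LoginModule",
            AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
            options)
    };
  }

  @Override
  public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
    if (entryName.equals(name)) {
      return entry;
    }
    return baseConfig != null ? baseConfig.getAppConfigurationEntry(name) : null;
  }
}
{code}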





Issue Time Tracking
---

Worklog Id: (was: 782489)
Time Spent: 40m  (was: 0.5h)

> StateStoreZooKeeper fails to initialize
> ---
>
> Key: HDFS-16591
> URL: https://issues.apache.org/jira/browse/HDFS-16591
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> MembershipStore and MountTableStore are failing to initialize, logging the 
> following errors on the Router logs:
> {noformat}
> 2022-05-23 16:43:01,156 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService: 
> Cannot get version for class 
> org.apache.hadoop.hdfs.server.federation.store.MembershipStore
> org.apache.hadoop.hdfs.server.federation.store.StateStoreUnavailableException:
>  Cached State Store not initialized, MembershipState records not valid
>   at 
> org.apache.hadoop.hdfs.server.federation.store.CachedRecordStore.checkCacheAvailable(CachedRecordStore.java:106)
>   at 
> org.apache.hadoop.hdfs.server.federation.store.CachedRecordStore.getCachedRecords(CachedRecordStore.java:227)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.getStateStoreVersion(RouterHeartbeatService.java:131)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.updateStateStore(RouterHeartbeatService.java:92)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.periodicInvoke(RouterHeartbeatService.java:159)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748){noformat}
> After investigating, we noticed that ZKDelegationTokenSecretManager normally 
> initializes properties for ZooKeeper clients to connect using SASL/Kerberos. 
> If ZKDelegationTokenSecretManager is replaced with a new SecretManager, the 
> SASL properties don't get configured and any StateStores that connect to 
> ZooKeeper fail with the above error. 
>  A potential way to fix this is by setting the 

[jira] [Work logged] (HDFS-16635) Fix javadoc error in Java 11

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16635?focusedWorklogId=782485&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782485
 ]

ASF GitHub Bot logged work on HDFS-16635:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 17:39
Start Date: 17/Jun/22 17:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4451:
URL: https://github.com/apache/hadoop/pull/4451#issuecomment-1159100604

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 47s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 45s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 21s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4451/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 59s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 0 new + 99 
unchanged - 1 fixed = 99 total (was 100)  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 48s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 371m  8s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  2s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 490m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4451/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4451 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 239d412500bf 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4948f0eacec61d51a3110d2218ab5860c2777137 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK 

[jira] [Work logged] (HDFS-16591) StateStoreZooKeeper fails to initialize

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16591?focusedWorklogId=782480&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782480
 ]

ASF GitHub Bot logged work on HDFS-16591:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 17:29
Start Date: 17/Jun/22 17:29
Worklog Time Spent: 10m 
  Work Description: simbadzina commented on code in PR #4447:
URL: https://github.com/apache/hadoop/pull/4447#discussion_r900377440


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java:
##
@@ -52,6 +52,7 @@
 import org.apache.hadoop.classification.InterfaceStability.Unstable;

Review Comment:
   There are unused imports between line 28 and 34.





Issue Time Tracking
---

Worklog Id: (was: 782480)
Time Spent: 0.5h  (was: 20m)

> StateStoreZooKeeper fails to initialize
> ---
>
> Key: HDFS-16591
> URL: https://issues.apache.org/jira/browse/HDFS-16591
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> MembershipStore and MountTableStore are failing to initialize, logging the 
> following errors on the Router logs:
> {noformat}
> 2022-05-23 16:43:01,156 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService: 
> Cannot get version for class 
> org.apache.hadoop.hdfs.server.federation.store.MembershipStore
> org.apache.hadoop.hdfs.server.federation.store.StateStoreUnavailableException:
>  Cached State Store not initialized, MembershipState records not valid
>   at 
> org.apache.hadoop.hdfs.server.federation.store.CachedRecordStore.checkCacheAvailable(CachedRecordStore.java:106)
>   at 
> org.apache.hadoop.hdfs.server.federation.store.CachedRecordStore.getCachedRecords(CachedRecordStore.java:227)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.getStateStoreVersion(RouterHeartbeatService.java:131)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.updateStateStore(RouterHeartbeatService.java:92)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.periodicInvoke(RouterHeartbeatService.java:159)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748){noformat}
> After investigating, we noticed that ZKDelegationTokenSecretManager normally 
> initializes properties for ZooKeeper clients to connect using SASL/Kerberos. 
> If ZKDelegationTokenSecretManager is replaced with a new SecretManager, the 
> SASL properties don't get configured and any StateStores that connect to 
> ZooKeeper fail with the above error. 
>  A potential way to fix this is by setting the JaasConfiguration (currently 
> done in ZKDelegationTokenSecretManager) as part of the 
> StateStoreZooKeeperImpl initialization method.
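
A minimal sketch of that idea, assuming StateStoreZooKeeperImpl gains a small
init hook mirroring what ZKDelegationTokenSecretManager already does (the
config keys and method name below are illustrative, not the final patch):

{code:java}
// Hypothetical: install a programmatic JAAS config before the ZooKeeper
// client connects, so SASL/Kerberos works even when
// ZKDelegationTokenSecretManager is not the one doing it.
private void setJaasConfiguration(org.apache.hadoop.conf.Configuration conf) {
  String keytab = conf.get("hadoop.zk.kerberos.keytab");       // assumed key
  String principal = conf.get("hadoop.zk.kerberos.principal"); // assumed key
  if (keytab != null && principal != null) {
    javax.security.auth.login.Configuration.setConfiguration(
        new JaasConfiguration("Client", principal, keytab));
    // Tell the ZooKeeper client which login context to use.
    System.setProperty("zookeeper.sasl.clientconfig", "Client");
  }
}
{code}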



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-16633) Reserved Space For Replicas is not released on some cases

2022-06-17 Thread Ashutosh Gupta (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16633 started by Ashutosh Gupta.
-
> Reserved Space For Replicas is not released on some cases
> -
>
> Key: HDFS-16633
> URL: https://issues.apache.org/jira/browse/HDFS-16633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Prabhu Joseph
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Found that the Reserved Space For Replicas is not released in some cases on a 
> Cx Prod cluster. There are a few fixes like HDFS-9530 and HDFS-8072, but the 
> issue is still not completely fixed. Tried to debug the root cause, but that 
> would take a lot of time as it is a Cx Prod cluster. 
> There is an easier way to fix the issue completely, though: release any 
> remaining reserved space from BlockReceiver#close, which is invoked from the 
> finally block of DataXceiver#writeBlock. 
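
A sketch of the proposed safety net. ReplicaInPipeline#releaseAllBytesReserved()
exists today; placing a final call to it in BlockReceiver#close() is the idea
described above, though the exact field name and placement here are assumptions,
not the actual diff:

{code:java}
// In BlockReceiver#close(), after the data and checksum streams are closed:
// drop any reservation still outstanding, so a failed or aborted writeBlock
// can no longer leak reserved-for-replica space.
if (replicaInfo != null) {
  replicaInfo.releaseAllBytesReserved();
}
{code}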



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16633) Reserved Space For Replicas is not released on some cases

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16633?focusedWorklogId=782466&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782466
 ]

ASF GitHub Bot logged work on HDFS-16633:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 16:32
Start Date: 17/Jun/22 16:32
Worklog Time Spent: 10m 
  Work Description: ashutoshcipher opened a new pull request, #4452:
URL: https://github.com/apache/hadoop/pull/4452

   ### Description of PR
   Reserved Space For Replicas is not released on some cases
   
   Found that the Reserved Space For Replicas is not released in some cases on 
a Cx Prod cluster. We can fix the issue completely by releasing any remaining 
reserved space from BlockReceiver#close, which is invoked from the finally 
block of DataXceiver#writeBlock.
   
   * JIRA: HDFS-16633
   
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   




Issue Time Tracking
---

Worklog Id: (was: 782466)
Remaining Estimate: 0h
Time Spent: 10m

> Reserved Space For Replicas is not released on some cases
> -
>
> Key: HDFS-16633
> URL: https://issues.apache.org/jira/browse/HDFS-16633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Prabhu Joseph
>Assignee: Ashutosh Gupta
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Found that the Reserved Space For Replicas is not released in some cases on a 
> Cx Prod cluster. There are a few fixes like HDFS-9530 and HDFS-8072, but the 
> issue is still not completely fixed. Tried to debug the root cause, but that 
> would take a lot of time as it is a Cx Prod cluster. 
> There is an easier way to fix the issue completely, though: release any 
> remaining reserved space from BlockReceiver#close, which is invoked from the 
> finally block of DataXceiver#writeBlock. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16633) Reserved Space For Replicas is not released on some cases

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16633:
--
Labels: pull-request-available  (was: )

> Reserved Space For Replicas is not released on some cases
> -
>
> Key: HDFS-16633
> URL: https://issues.apache.org/jira/browse/HDFS-16633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Prabhu Joseph
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Found that the Reserved Space For Replicas is not released in some cases on a 
> Cx Prod cluster. There are a few fixes like HDFS-9530 and HDFS-8072, but the 
> issue is still not completely fixed. Tried to debug the root cause, but that 
> would take a lot of time as it is a Cx Prod cluster. 
> There is an easier way to fix the issue completely, though: release any 
> remaining reserved space from BlockReceiver#close, which is invoked from the 
> finally block of DataXceiver#writeBlock. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16633) Reserved Space For Replicas is not released on some cases

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16633?focusedWorklogId=782467&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782467
 ]

ASF GitHub Bot logged work on HDFS-16633:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 16:32
Start Date: 17/Jun/22 16:32
Worklog Time Spent: 10m 
  Work Description: ashutoshcipher commented on PR #4452:
URL: https://github.com/apache/hadoop/pull/4452#issuecomment-1159044707

   @PrabhuJoseph @aajisaka - Please review. Thanks. 




Issue Time Tracking
---

Worklog Id: (was: 782467)
Time Spent: 20m  (was: 10m)

> Reserved Space For Replicas is not released on some cases
> -
>
> Key: HDFS-16633
> URL: https://issues.apache.org/jira/browse/HDFS-16633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Prabhu Joseph
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Found that the Reserved Space For Replicas is not released in some cases on a 
> Cx Prod cluster. There are a few fixes like HDFS-9530 and HDFS-8072, but the 
> issue is still not completely fixed. Tried to debug the root cause, but that 
> would take a lot of time as it is a Cx Prod cluster. 
> There is an easier way to fix the issue completely, though: release any 
> remaining reserved space from BlockReceiver#close, which is invoked from the 
> finally block of DataXceiver#writeBlock. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16631) Enable dfs.datanode.lockmanager.trace In Test

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16631?focusedWorklogId=782449&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782449
 ]

ASF GitHub Bot logged work on HDFS-16631:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 15:55
Start Date: 17/Jun/22 15:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4438:
URL: https://github.com/apache/hadoop/pull/4438#issuecomment-1159014765

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 28s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  61m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  shadedclient  |  21m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 27s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  91m 14s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4438/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4438 |
   | Optional Tests | dupname asflicense unit codespell detsecrets xmllint |
   | uname | Linux 2672a54c0019 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 30bf6edc6e886f5ec0c6bf24e62a0a5bce4e838a |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4438/3/testReport/ |
   | Max. process+thread count | 550 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4438/3/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 782449)
Time Spent: 2h 10m  (was: 2h)

> Enable dfs.datanode.lockmanager.trace In Test
> -
>
> Key: HDFS-16631
> URL: https://issues.apache.org/jira/browse/HDFS-16631
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> In HDFS-16600 (Fix deadlock on DataNode side) we discussed the deadlock 
> issue, which was a very meaningful discussion. While reading the log, I 
> found the following:
> {code:java}
> 2022-05-27 07:39:47,890 [Listener at localhost/36941] WARN 
> datanode.DataSetLockManager (DataSetLockManager.java:lockLeakCheck(261)) -
>  not open lock leak check func.{code}
> Looking at the code, I found that there is such a parameter:
> {code:java}
> <property>
>   <name>dfs.datanode.lockmanager.trace</name>
>   <value>false</value>
>   <description>
>     If this is true, after shut down datanode lock Manager will print all leak
>     thread that not release by lock Manager. Only used for test or trace dead lock
>     problem. In produce default set false, because it's have little performance loss.
>   </description>
> </property>
> {code}
> I think this parameter should be enabled in the test environment, so that if 
> there is a DN deadlock, the cause can be quickly located.
>  
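
For tests, enabling it is a one-line configuration change; a sketch assuming a
MiniDFSCluster-based test (the property name is the one quoted above):

{code:java}
Configuration conf = new HdfsConfiguration();
// Turn on the lock-leak check so a leaked DataSetLockManager lock is
// reported when the DataNode shuts down.
conf.setBoolean("dfs.datanode.lockmanager.trace", true);
try (MiniDFSCluster cluster =
         new MiniDFSCluster.Builder(conf).numDataNodes(1).build()) {
  cluster.waitActive();
  // ... exercise the DataNode code paths under test ...
}
{code}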



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: 

[jira] [Work logged] (HDFS-16631) Enable dfs.datanode.lockmanager.trace In Test

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16631?focusedWorklogId=782434&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782434
 ]

ASF GitHub Bot logged work on HDFS-16631:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 15:29
Start Date: 17/Jun/22 15:29
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4438:
URL: https://github.com/apache/hadoop/pull/4438#issuecomment-1158989376

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 27s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  62m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  shadedclient  |  21m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 26s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  92m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4438/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4438 |
   | Optional Tests | dupname asflicense unit codespell detsecrets xmllint |
   | uname | Linux e09f249c6203 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 438879f0d576fdb1e7f823b592daf3cfa0215d2a |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4438/2/testReport/ |
   | Max. process+thread count | 522 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4438/2/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 782434)
Time Spent: 2h  (was: 1h 50m)

> Enable dfs.datanode.lockmanager.trace In Test
> -
>
> Key: HDFS-16631
> URL: https://issues.apache.org/jira/browse/HDFS-16631
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In HDFS-16600 (Fix deadlock on DataNode side) we discussed the deadlock 
> issue, which was a very meaningful discussion. While reading the log, I 
> found the following:
> {code:java}
> 2022-05-27 07:39:47,890 [Listener at localhost/36941] WARN 
> datanode.DataSetLockManager (DataSetLockManager.java:lockLeakCheck(261)) -
>  not open lock leak check func.{code}
> Looking at the code, I found that there is such a parameter:
> {code:java}
> <property>
>   <name>dfs.datanode.lockmanager.trace</name>
>   <value>false</value>
>   <description>
>     If this is true, after shut down datanode lock Manager will print all leak
>     thread that not release by lock Manager. Only used for test or trace dead lock
>     problem. In produce default set false, because it's have little performance loss.
>   </description>
> </property>
> {code}
> I think this parameter should be enabled in the test environment, so that if 
> there is a DN deadlock, the cause can be quickly located.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: 

[jira] [Commented] (HDFS-16600) Fix deadlock of fine-grain lock for FsDatastImpl of DataNode.

2022-06-17 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17555638#comment-17555638
 ] 

Xiaoqiao He commented on HDFS-16600:


[~xuzq_zander] BTW, do you deploy this feature on your prod cluster? If so, 
would you mind offering some performance results versus running without this 
feature? Although it has been deployed on my internal cluster for over a year 
and works well, I believe the performance results could differ more or less 
across versions (my internal version is based on branch-2.7). Thanks again.

> Fix deadlock of fine-grain lock for FsDatastImpl of DataNode.
> -
>
> Key: HDFS-16600
> URL: https://issues.apache.org/jira/browse/HDFS-16600
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> The UT 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.testSynchronousEviction 
> failed because of a deadlock, which was introduced by 
> [HDFS-16534|https://issues.apache.org/jira/browse/HDFS-16534]. 
> DeadLock:
> {code:java}
> // org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.createRbw line 1588 
> need a read lock
> try (AutoCloseableLock lock = lockManager.readLock(LockLevel.BLOCK_POOl,
> b.getBlockPoolId()))
> // org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.evictBlocks line 
> 3526 need a write lock
> try (AutoCloseableLock lock = lockManager.writeLock(LockLevel.BLOCK_POOl, 
> bpid))
> {code}
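
If both acquisitions happen on the same thread, this is the classic
read-to-write upgrade that java.util.concurrent's ReentrantReadWriteLock does
not permit. A standalone sketch (not HDFS code) that shows the effect safely
with a timed tryLock instead of hanging:

{code:java}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class UpgradeDeadlockDemo {
  public static void main(String[] args) throws InterruptedException {
    ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    rw.readLock().lock();  // like createRbw taking the block-pool read lock
    try {
      // Like evictBlocks then asking for the write lock on the same thread;
      // a plain writeLock().lock() here would block forever.
      boolean upgraded = rw.writeLock().tryLock(1, TimeUnit.SECONDS);
      System.out.println("upgrade succeeded? " + upgraded);  // prints false
    } finally {
      rw.readLock().unlock();
    }
  }
}
{code}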



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16600) Fix deadlock of fine-grain lock for FsDatastImpl of DataNode.

2022-06-17 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He resolved HDFS-16600.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk. Thanks [~xuzq_zander] for your works!

> Fix deadlock of fine-grain lock for FsDatastImpl of DataNode.
> -
>
> Key: HDFS-16600
> URL: https://issues.apache.org/jira/browse/HDFS-16600
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> The UT 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.testSynchronousEviction 
> failed because of a deadlock, which was introduced by 
> [HDFS-16534|https://issues.apache.org/jira/browse/HDFS-16534]. 
> DeadLock:
> {code:java}
> // org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.createRbw line 1588 
> need a read lock
> try (AutoCloseableLock lock = lockManager.readLock(LockLevel.BLOCK_POOl,
> b.getBlockPoolId()))
> // org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.evictBlocks line 
> 3526 need a write lock
> try (AutoCloseableLock lock = lockManager.writeLock(LockLevel.BLOCK_POOl, 
> bpid))
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16600) Fix deadlock of fine-grain lock for FsDatastImpl of DataNode.

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16600?focusedWorklogId=782414&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782414
 ]

ASF GitHub Bot logged work on HDFS-16600:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 14:07
Start Date: 17/Jun/22 14:07
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao commented on PR #4367:
URL: https://github.com/apache/hadoop/pull/4367#issuecomment-1158908156

   The latest build looks good to me. Committed to trunk.
   Thanks @ZanderXu for your report and contributions! Thanks @ayushtkn / 
@MingXiangLi / @slfan1989 for your warm discussions and helpful suggestions!




Issue Time Tracking
---

Worklog Id: (was: 782414)
Time Spent: 5h  (was: 4h 50m)

> Fix deadlock of fine-grain lock for FsDatastImpl of DataNode.
> -
>
> Key: HDFS-16600
> URL: https://issues.apache.org/jira/browse/HDFS-16600
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> The UT 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.testSynchronousEviction 
> failed because of a deadlock, which was introduced by 
> [HDFS-16534|https://issues.apache.org/jira/browse/HDFS-16534]. 
> DeadLock:
> {code:java}
> // org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.createRbw line 1588 
> need a read lock
> try (AutoCloseableLock lock = lockManager.readLock(LockLevel.BLOCK_POOl,
> b.getBlockPoolId()))
> // org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.evictBlocks line 
> 3526 need a write lock
> try (AutoCloseableLock lock = lockManager.writeLock(LockLevel.BLOCK_POOl, 
> bpid))
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16600) Fix deadlock of fine-grain lock for FsDatastImpl of DataNode.

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16600?focusedWorklogId=782412&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782412
 ]

ASF GitHub Bot logged work on HDFS-16600:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 14:05
Start Date: 17/Jun/22 14:05
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao merged PR #4367:
URL: https://github.com/apache/hadoop/pull/4367




Issue Time Tracking
---

Worklog Id: (was: 782412)
Time Spent: 4h 50m  (was: 4h 40m)

> Fix deadlock of fine-grain lock for FsDatastImpl of DataNode.
> -
>
> Key: HDFS-16600
> URL: https://issues.apache.org/jira/browse/HDFS-16600
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> The UT 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.testSynchronousEviction 
> failed because of a deadlock, which was introduced by 
> [HDFS-16534|https://issues.apache.org/jira/browse/HDFS-16534]. 
> DeadLock:
> {code:java}
> // org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.createRbw line 1588 
> need a read lock
> try (AutoCloseableLock lock = lockManager.readLock(LockLevel.BLOCK_POOl,
> b.getBlockPoolId()))
> // org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.evictBlocks line 
> 3526 need a write lock
> try (AutoCloseableLock lock = lockManager.writeLock(LockLevel.BLOCK_POOl, 
> bpid))
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16600) Fix deadlock of fine-grain lock for FsDatastImpl of DataNode.

2022-06-17 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-16600:
---
Summary: Fix deadlock of fine-grain lock for FsDatastImpl of DataNode.  
(was: Deadlock on DataNode)

> Fix deadlock of fine-grain lock for FsDatastImpl of DataNode.
> -
>
> Key: HDFS-16600
> URL: https://issues.apache.org/jira/browse/HDFS-16600
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> The UT 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.testSynchronousEviction 
> failed because of a deadlock, which was introduced by 
> [HDFS-16534|https://issues.apache.org/jira/browse/HDFS-16534]. 
> DeadLock:
> {code:java}
> // org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.createRbw line 1588 
> need a read lock
> try (AutoCloseableLock lock = lockManager.readLock(LockLevel.BLOCK_POOl,
> b.getBlockPoolId()))
> // org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.evictBlocks line 
> 3526 need a write lock
> try (AutoCloseableLock lock = lockManager.writeLock(LockLevel.BLOCK_POOl, 
> bpid))
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16600) Deadlock on DataNode

2022-06-17 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-16600:
---
Parent: HDFS-15382
Issue Type: Sub-task  (was: Bug)

> Deadlock on DataNode
> 
>
> Key: HDFS-16600
> URL: https://issues.apache.org/jira/browse/HDFS-16600
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> The UT 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.testSynchronousEviction 
> failed because of a deadlock, which was introduced by 
> [HDFS-16534|https://issues.apache.org/jira/browse/HDFS-16534]. 
> DeadLock:
> {code:java}
> // org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.createRbw line 1588 
> need a read lock
> try (AutoCloseableLock lock = lockManager.readLock(LockLevel.BLOCK_POOl,
> b.getBlockPoolId()))
> // org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.evictBlocks line 
> 3526 need a write lock
> try (AutoCloseableLock lock = lockManager.writeLock(LockLevel.BLOCK_POOl, 
> bpid))
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-16633) Reserved Space For Replicas is not released on some cases

2022-06-17 Thread Ashutosh Gupta (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Gupta reassigned HDFS-16633:
-

Assignee: Ashutosh Gupta  (was: Prabhu Joseph)

> Reserved Space For Replicas is not released on some cases
> -
>
> Key: HDFS-16633
> URL: https://issues.apache.org/jira/browse/HDFS-16633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Prabhu Joseph
>Assignee: Ashutosh Gupta
>Priority: Major
>
> Found that the Reserved Space For Replicas is not released in some cases on a 
> Cx Prod cluster. There are a few fixes like HDFS-9530 and HDFS-8072, but the 
> issue is still not completely fixed. Tried to debug the root cause, but that 
> would take a lot of time as it is a Cx Prod cluster. 
> There is an easier way to fix the issue completely, though: release any 
> remaining reserved space from BlockReceiver#close, which is invoked from the 
> finally block of DataXceiver#writeBlock. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16635) Fix javadoc error in Java 11

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16635?focusedWorklogId=782392&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782392
 ]

ASF GitHub Bot logged work on HDFS-16635:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 13:08
Start Date: 17/Jun/22 13:08
Worklog Time Spent: 10m 
  Work Description: ashutoshcipher commented on PR #4451:
URL: https://github.com/apache/hadoop/pull/4451#issuecomment-1158854703

   > @ashutoshcipher This seems to coincide with the PR I submitted (#4423), so I 
will close my PR and wait for you to fix it, thanks! According to the test 
results, this fix keeps javadoc from reporting errors.
   
   Thanks @slfan1989 




Issue Time Tracking
---

Worklog Id: (was: 782392)
Time Spent: 0.5h  (was: 20m)

> Fix javadoc error in Java 11
> 
>
> Key: HDFS-16635
> URL: https://issues.apache.org/jira/browse/HDFS-16635
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Javadoc build in Java 11 fails.
> {noformat}
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4410/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/package-info.java:20:
>  error: reference not found
> [ERROR]  * This package provides a mechanism for tracking {@link NameNode} 
> startup
> {noformat}
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt
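
The usual fix for this class of javadoc error is to fully qualify the
reference (or drop the {@link} entirely), since package-info.java has no
import for NameNode; whether the PR uses this exact form is an assumption:

{code:java}
/**
 * This package provides a mechanism for tracking
 * {@link org.apache.hadoop.hdfs.server.namenode.NameNode} startup.
 */
package org.apache.hadoop.hdfs.server.namenode.startupprogress;
{code}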



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16619) Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16619?focusedWorklogId=782390&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782390
 ]

ASF GitHub Bot logged work on HDFS-16619:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 12:54
Start Date: 17/Jun/22 12:54
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4406:
URL: https://github.com/apache/hadoop/pull/4406#issuecomment-1158842875

   @ayushtkn Please help me review the code; I hope to replace the 
deprecated imports with the recommended HttpHeaderNames and HttpHeaderValues.




Issue Time Tracking
---

Worklog Id: (was: 782390)
Time Spent: 1h 40m  (was: 1.5h)

> Fix HttpHeaders.Values And HttpHeaders.Names Deprecated Import.
> ---
>
> Key: HDFS-16619
> URL: https://issues.apache.org/jira/browse/HDFS-16619
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: Fix HttpHeaders.Values And HttpHeaders.Names 
> Deprecated.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> HttpHeaders.Values and HttpHeaders.Names are deprecated; use 
> HttpHeaderValues and HttpHeaderNames instead.
> HttpHeaders.Names
> Deprecated. 
> Use HttpHeaderNames instead. Standard HTTP header names.
> {code:java}
> /** @deprecated */
> @Deprecated
> public static final class Names {
>   public static final String ACCEPT = "Accept";
>   public static final String ACCEPT_CHARSET = "Accept-Charset";
>   public static final String ACCEPT_ENCODING = "Accept-Encoding";
>   public static final String ACCEPT_LANGUAGE = "Accept-Language";
>   public static final String ACCEPT_RANGES = "Accept-Ranges";
>   public static final String ACCEPT_PATCH = "Accept-Patch";
>   public static final String ACCESS_CONTROL_ALLOW_CREDENTIALS = 
> "Access-Control-Allow-Credentials";
>   public static final String ACCESS_CONTROL_ALLOW_HEADERS = 
> "Access-Control-Allow-Headers"; {code}
> HttpHeaders.Values
> Deprecated. 
> Use HttpHeaderValues instead. Standard HTTP header values.
> {code:java}
> /** @deprecated */
> @Deprecated
> public static final class Values {
>   public static final String APPLICATION_JSON = "application/json";
>   public static final String APPLICATION_X_WWW_FORM_URLENCODED = 
> "application/x-www-form-urlencoded";
>   public static final String BASE64 = "base64";
>   public static final String BINARY = "binary";
>   public static final String BOUNDARY = "boundary";
>   public static final String BYTES = "bytes";
>   public static final String CHARSET = "charset";
>   public static final String CHUNKED = "chunked";
>   public static final String CLOSE = "close"; {code}
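
For reference, a before/after sketch of what the replacement implies with
Netty's current API; the exact call sites in the patch are assumptions:

{code:java}
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpResponse;

// Before (deprecated):
//   response.headers().set(HttpHeaders.Names.CONTENT_TYPE,
//       HttpHeaders.Values.APPLICATION_JSON);

// After: the constants are AsciiString rather than String, and the
// headers().set(CharSequence, Object) overload accepts them directly.
static void setJsonHeaders(HttpResponse response) {
  response.headers().set(HttpHeaderNames.CONTENT_TYPE,
      HttpHeaderValues.APPLICATION_JSON);
  response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
}
{code}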



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16629) [JDK 11] Fix javadoc warnings in hadoop-hdfs module

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16629?focusedWorklogId=782389&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782389
 ]

ASF GitHub Bot logged work on HDFS-16629:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 12:50
Start Date: 17/Jun/22 12:50
Worklog Time Spent: 10m 
  Work Description: slfan1989 closed pull request #4423: HDFS-16629. [JDK 
11] Fix javadoc  warnings in hadoop-hdfs module.
URL: https://github.com/apache/hadoop/pull/4423




Issue Time Tracking
---

Worklog Id: (was: 782389)
Time Spent: 0.5h  (was: 20m)

> [JDK 11] Fix javadoc  warnings in hadoop-hdfs module
> 
>
> Key: HDFS-16629
> URL: https://issues.apache.org/jira/browse/HDFS-16629
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0, 3.3.4
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> During compilation of the most recently committed code, a javadoc warning 
> appeared, and I will fix it.
> {code:java}
> 1 error
> 100 warnings
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  37.132 s
> [INFO] Finished at: 2022-06-09T17:07:12Z
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc-no-fork 
> (default-cli) on project hadoop-hdfs: An error has occurred in Javadoc report 
> generation: 
> [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
> as HTML 4.01 by using the -html4 option.
> [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
> removed
> [ERROR] in a future release. To suppress this warning, please ensure that any 
> HTML constructs {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16629) [JDK 11] Fix javadoc warnings in hadoop-hdfs module

2022-06-17 Thread fanshilun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanshilun resolved HDFS-16629.
--
Resolution: Fixed

> [JDK 11] Fix javadoc  warnings in hadoop-hdfs module
> 
>
> Key: HDFS-16629
> URL: https://issues.apache.org/jira/browse/HDFS-16629
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0, 3.3.4
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> During compilation of the most recently committed code, a javadoc warning 
> appeared, and I will fix it.
> {code:java}
> 1 error
> 100 warnings
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  37.132 s
> [INFO] Finished at: 2022-06-09T17:07:12Z
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc-no-fork 
> (default-cli) on project hadoop-hdfs: An error has occurred in Javadoc report 
> generation: 
> [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
> as HTML 4.01 by using the -html4 option.
> [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
> removed
> [ERROR] in a future release. To suppress this warning, please ensure that any 
> HTML constructs {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16631) Enable dfs.datanode.lockmanager.trace In Test

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16631?focusedWorklogId=782386&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782386
 ]

ASF GitHub Bot logged work on HDFS-16631:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 12:40
Start Date: 17/Jun/22 12:40
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4438:
URL: https://github.com/apache/hadoop/pull/4438#issuecomment-1158832553

   > From my side, I do not think enabling lock trace alone is a good idea for tests, 
as @MingXiangLi has mentioned above. The INFO-level log alone will not help to 
debug or test. IMO, if there are some cases we would like to cover that need 
collecting lock information, it is better to add some injection logic. FYI.
   
   Thank you very much for your suggestion, I will think about how to collect 
lock information!




Issue Time Tracking
---

Worklog Id: (was: 782386)
Time Spent: 1h 50m  (was: 1h 40m)

> Enable dfs.datanode.lockmanager.trace In Test
> -
>
> Key: HDFS-16631
> URL: https://issues.apache.org/jira/browse/HDFS-16631
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In HDFS-16600 (Fix deadlock on DataNode side) we discussed the deadlock 
> issue, which was a very meaningful discussion. While reading the log, I 
> found the following:
> {code:java}
> 2022-05-27 07:39:47,890 [Listener at localhost/36941] WARN 
> datanode.DataSetLockManager (DataSetLockManager.java:lockLeakCheck(261)) -
>  not open lock leak check func.{code}
> Looking at the code, I found that there is such a parameter:
> {code:java}
> <property>
>   <name>dfs.datanode.lockmanager.trace</name>
>   <value>false</value>
>   <description>
>     If this is true, after shut down datanode lock Manager will print all leak
>     thread that not release by lock Manager. Only used for test or trace dead lock
>     problem. In produce default set false, because it's have little performance loss.
>   </description>
> </property>
> {code}
> I think this parameter should be enabled in the test environment, so that if 
> there is a DN deadlock, the cause can be quickly located.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16635) Fix javadoc error in Java 11

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16635?focusedWorklogId=782380&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782380
 ]

ASF GitHub Bot logged work on HDFS-16635:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 12:25
Start Date: 17/Jun/22 12:25
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4451:
URL: https://github.com/apache/hadoop/pull/4451#issuecomment-1158820178

   @ashutoshcipher This seems to coincide with the PR I 
submitted (https://github.com/apache/hadoop/pull/4423). The javadoc cleanup 
still needs to fix a lot of warnings, so if you have time, you could help 
complete HDFS-16629.




Issue Time Tracking
---

Worklog Id: (was: 782380)
Time Spent: 20m  (was: 10m)

> Fix javadoc error in Java 11
> 
>
> Key: HDFS-16635
> URL: https://issues.apache.org/jira/browse/HDFS-16635
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Javadoc build in Java 11 fails.
> {noformat}
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4410/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/package-info.java:20:
>  error: reference not found
> [ERROR]  * This package provides a mechanism for tracking {@link NameNode} 
> startup
> {noformat}
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16636) Put Chinese characters File Name Cause http header error

2022-06-17 Thread lidayu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lidayu updated HDFS-16636:
--
Description: 
*When we PUT a file whose filename contains Chinese characters, like this:*

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://9.135.15.26:50070/webhdfs/v1/上游2.png?op=CREATE=hdfsadmin"     
                                                       
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: 
hadoop.auth="u=hdfsadmin=hdfsadmin=simple=1655493969462=YvkwSAwETWr2BqfkRTvBHy0Yj2A=";
 Path=/; HttpOnly
Location: 
[http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE&user.name=hdfsadmin&namenoderpcaddress=9.135.15.26:9000&createflag=&createparent=true&overwrite=false|http://vm-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE&user.name=hdfsadmin&namenoderpcaddress=9.135.15.26:9000&createflag=&createparent=true&overwrite=false]
Content-Type: application/octet-stream
Content-Length: 0

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000=
=true=false"
HTTP/1.1 100 Continue

HTTP/1.1 201 Created
*Location: hdfs://9.135.15.26:9000/*
*82.png*    -> (this causes the problem: the continuation line has no colon)
Content-Length: 0
Connection: close

 

 

*The problem is that the Location header value contains a newline; normally it 
would all be on one line, like this:*

*Location: hdfs://9.135.15.26:9000/上游2.png    ,*

*It causes a Knox "validate header" error, because the continuation line has no ":".*

*!image-2022-06-17-17-34-43-294.png|width=615,height=393!*

 

  was:
*When we PUT a file whose filename contains Chinese characters, like this:*

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://9.135.15.26:50070/webhdfs/v1/上游2.png?op=CREATE=hdfsadmin"     
                                                       
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: 
hadoop.auth="u=hdfsadmin=hdfsadmin=simple=1655493969462=YvkwSAwETWr2BqfkRTvBHy0Yj2A=";
 Path=/; HttpOnly
Location: 
[http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false|http://vm-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false]
Content-Type: application/octet-stream
Content-Length: 0

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000=
=true=false"
HTTP/1.1 100 Continue

HTTP/1.1 201 Created
*Location: hdfs://9.135.15.26:9000/*
*82.png*    
Content-Length: 0
Connection: close

 

 

*The problem is that the Location header gets split by a newline; normally it 
would be a single line, like this:*

*Location: hdfs://9.135.15.26:9000/上游2.png    ,*

*It causes a Knox "validate header" error, because the continuation line has no ":".*

*!image-2022-06-17-17-34-43-294.png|width=615,height=393!*

 


> Put Chinese characters File Name Cause http header error
> 
>
> Key: HDFS-16636
> URL: https://issues.apache.org/jira/browse/HDFS-16636
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lidayu
>Priority: Major
> Attachments: image-2022-06-17-17-27-17-052.png, 
> image-2022-06-17-17-34-43-294.png
>
>
> *When we PUT a file whose filename contains Chinese characters, like this:*
> [hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
> "http://9.135.15.26:50070/webhdfs/v1/上游2.png?op=CREATE=hdfsadmin"   
>                                                          
> HTTP/1.1 307 TEMPORARY_REDIRECT
> Cache-Control: no-cache
> Expires: Fri, 17 Jun 2022 09:26:09 GMT
> Date: Fri, 17 Jun 2022 09:26:09 GMT
> Pragma: no-cache
> Expires: Fri, 17 Jun 2022 09:26:09 GMT
> Date: Fri, 17 Jun 2022 09:26:09 GMT
> Pragma: no-cache
> X-FRAME-OPTIONS: SAMEORIGIN
> Set-Cookie: 
> hadoop.auth="u=hdfsadmin=hdfsadmin=simple=1655493969462=YvkwSAwETWr2BqfkRTvBHy0Yj2A=";
>  Path=/; HttpOnly
> Location: 
> [http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false|http://vm-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false]
> Content-Type: application/octet-stream
> Content-Length: 0
> [hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
> "http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000=
> =true=false"
> HTTP/1.1 100 Continue
> HTTP/1.1 201 Created
> *Location: hdfs://9.135.15.26:9000/*
> *82.png*    -> (this causes the problem: the continuation line has no colon)
> 
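
For context, a minimal sketch of the header problem described above (illustrative 
only; the class and variable names are hypothetical, and this is not the actual 
WebHDFS server code). A raw non-ASCII filename placed into the Location header 
value can end up wrapped onto a second line, which Knox then rejects because that 
continuation line contains no ":". Percent-encoding the path first keeps the 
value pure US-ASCII on a single line:

{code:java}
import java.net.URI;
import java.net.URISyntaxException;

public class LocationHeaderSketch {
  public static void main(String[] args) throws URISyntaxException {
    String fileName = "上游2.png";

    // Problematic form: raw non-ASCII characters in the header value,
    // which can be split across header lines as seen in the report above.
    String raw = "hdfs://9.135.15.26:9000/" + fileName;

    // Safer form: java.net.URI percent-encodes the path, so the header
    // value is pure US-ASCII and stays on one line.
    URI encoded =
        new URI("hdfs", null, "9.135.15.26", 9000, "/" + fileName, null, null);

    System.out.println("Location: " + raw);
    System.out.println("Location: " + encoded.toASCIIString());
    // Prints: Location: hdfs://9.135.15.26:9000/%E4%B8%8A%E6%B8%B82.png
  }
}
{code}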

[jira] [Work logged] (HDFS-16631) Enable dfs.datanode.lockmanager.trace In Test

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16631?focusedWorklogId=782350=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782350
 ]

ASF GitHub Bot logged work on HDFS-16631:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 09:38
Start Date: 17/Jun/22 09:38
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao commented on PR #4438:
URL: https://github.com/apache/hadoop/pull/4438#issuecomment-1158695959

   From my side, I do not think enabling the lock trace alone is a good idea for 
tests, as @MingXiangLi mentioned above. An INFO-level log by itself will not 
help with debugging or testing. IMO, if there are cases we would like to cover 
that need lock information to be collected, it is better to add some injection 
logic. FYI.




Issue Time Tracking
---

Worklog Id: (was: 782350)
Time Spent: 1h 40m  (was: 1.5h)

> Enable dfs.datanode.lockmanager.trace In Test
> -
>
> Key: HDFS-16631
> URL: https://issues.apache.org/jira/browse/HDFS-16631
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: fanshilun
>Assignee: fanshilun
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In HDFS-16600 (Fix deadlock on DataNode side) we discussed a deadlock issue; 
> it was a very valuable discussion. While reading the logs, I found the 
> following:
> {code:java}
> 2022-05-27 07:39:47,890 [Listener at localhost/36941] WARN 
> datanode.DataSetLockManager (DataSetLockManager.java:lockLeakCheck(261)) -
>  not open lock leak check func.{code}
> Looking at the code, I found the following parameter:
> {code:java}
> <property>
>   <name>dfs.datanode.lockmanager.trace</name>
>   <value>false</value>
>   <description>
>     If this is true, the lock manager will print, on DataNode shutdown, all
>     leaked threads that did not release their locks. Only used for tests or
>     for tracing deadlock problems. In production it defaults to false,
>     because it has a small performance cost.
>   </description>
> </property>
>    {code}
> I think this parameter should be enabled in the test environment, so that if 
> there is a DN deadlock, the cause can be located quickly.
>  
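
As a concrete illustration, here is a minimal sketch of turning the flag on in a 
MiniDFSCluster-based test. The surrounding test scaffolding and imports are 
assumed; this is not code from the PR:

{code:java}
Configuration conf = new HdfsConfiguration();
// Enable the lock-leak check described above (key quoted from the config).
conf.setBoolean("dfs.datanode.lockmanager.trace", true);

MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
try {
  cluster.waitActive();
  // ... exercise the DataNode code paths under test ...
} finally {
  // On shutdown, the DataNode lock manager now reports any thread that
  // acquired a dataset lock without releasing it, instead of logging the
  // "not open lock leak check func" message quoted above.
  cluster.shutdown();
}
{code}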



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16636) Put Chinese characters File Name Cause http header error

2022-06-17 Thread lidayu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lidayu updated HDFS-16636:
--
Description: 
*When we PUT a file whose filename contains Chinese characters, like this:*

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://9.135.15.26:50070/webhdfs/v1/上游2.png?op=CREATE=hdfsadmin"     
                                                       
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: 
hadoop.auth="u=hdfsadmin=hdfsadmin=simple=1655493969462=YvkwSAwETWr2BqfkRTvBHy0Yj2A=";
 Path=/; HttpOnly
Location: 
[http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false|http://vm-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false]
Content-Type: application/octet-stream
Content-Length: 0

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000=
=true=false"
HTTP/1.1 100 Continue

HTTP/1.1 201 Created
*Location: hdfs://9.135.15.26:9000/*
*82.png*    
Content-Length: 0
Connection: close

 

 

*The problem is that the Location header gets split by a newline; normally it 
would be a single line, like this:*

*Location: hdfs://9.135.15.26:9000/上游2.png    ,*

*It causes a Knox "validate header" error, because the continuation line has no ":".*

*!image-2022-06-17-17-34-43-294.png|width=615,height=393!*

 

  was:
*When we PUT a file whose filename contains Chinese characters, like this:*

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://9.135.15.26:50070/webhdfs/v1/上游2.png?op=CREATE=hdfsadmin"la   
                                                            
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: 
hadoop.auth="u=hdfsadmin=hdfsadmin=simple=1655493969462=YvkwSAwETWr2BqfkRTvBHy0Yj2A=";
 Path=/; HttpOnly
Location: 
[http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false|http://vm-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false]
Content-Type: application/octet-stream
Content-Length: 0

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000=
=true=false"
HTTP/1.1 100 Continue

HTTP/1.1 201 Created
*Location: hdfs://9.135.15.26:9000/*
*82.png*    
Content-Length: 0
Connection: close

 

 

*The problem is that the Location header gets split by a newline; normally it 
would be a single line, like this:*

*Location: hdfs://9.135.15.26:9000/上游2.png    ,*

*It causes a Knox "validate header" error, because the continuation line has no ":".*

*!image-2022-06-17-17-34-43-294.png|width=615,height=393!*

 


> Put Chinese characters File Name Cause http header error
> 
>
> Key: HDFS-16636
> URL: https://issues.apache.org/jira/browse/HDFS-16636
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lidayu
>Priority: Major
> Attachments: image-2022-06-17-17-27-17-052.png, 
> image-2022-06-17-17-34-43-294.png
>
>
> *When we PUT a file whose filename contains Chinese characters, like this:*
> [hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
> "http://9.135.15.26:50070/webhdfs/v1/上游2.png?op=CREATE=hdfsadmin"   
>                                                          
> HTTP/1.1 307 TEMPORARY_REDIRECT
> Cache-Control: no-cache
> Expires: Fri, 17 Jun 2022 09:26:09 GMT
> Date: Fri, 17 Jun 2022 09:26:09 GMT
> Pragma: no-cache
> Expires: Fri, 17 Jun 2022 09:26:09 GMT
> Date: Fri, 17 Jun 2022 09:26:09 GMT
> Pragma: no-cache
> X-FRAME-OPTIONS: SAMEORIGIN
> Set-Cookie: 
> hadoop.auth="u=hdfsadmin=hdfsadmin=simple=1655493969462=YvkwSAwETWr2BqfkRTvBHy0Yj2A=";
>  Path=/; HttpOnly
> Location: 
> [http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false|http://vm-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false]
> Content-Type: application/octet-stream
> Content-Length: 0
> [hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
> "http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000=
> =true=false"
> HTTP/1.1 100 Continue
> HTTP/1.1 201 Created
> *Location: hdfs://9.135.15.26:9000/*
> *82.png*    
> Content-Length: 0
> Connection: close
>  
>  
> *The problem is that the Location header gets split by a newline; normally 

[jira] [Updated] (HDFS-16636) Put Chinese characters File Name Cause http header error

2022-06-17 Thread lidayu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lidayu updated HDFS-16636:
--
Summary: Put Chinese characters File Name Cause http header error  (was: 
Put Chinese Cause http header error,)

> Put Chinese characters File Name Cause http header error
> 
>
> Key: HDFS-16636
> URL: https://issues.apache.org/jira/browse/HDFS-16636
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lidayu
>Priority: Major
> Attachments: image-2022-06-17-17-27-17-052.png, 
> image-2022-06-17-17-34-43-294.png
>
>
> *When we PUT a file whose filename contains Chinese characters, like this:*
> [hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
> "http://9.135.15.26:50070/webhdfs/v1/上游2.png?op=CREATE=hdfsadmin"la 
>                                                               
> HTTP/1.1 307 TEMPORARY_REDIRECT
> Cache-Control: no-cache
> Expires: Fri, 17 Jun 2022 09:26:09 GMT
> Date: Fri, 17 Jun 2022 09:26:09 GMT
> Pragma: no-cache
> Expires: Fri, 17 Jun 2022 09:26:09 GMT
> Date: Fri, 17 Jun 2022 09:26:09 GMT
> Pragma: no-cache
> X-FRAME-OPTIONS: SAMEORIGIN
> Set-Cookie: 
> hadoop.auth="u=hdfsadmin=hdfsadmin=simple=1655493969462=YvkwSAwETWr2BqfkRTvBHy0Yj2A=";
>  Path=/; HttpOnly
> Location: 
> [http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false|http://vm-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false]
> Content-Type: application/octet-stream
> Content-Length: 0
> [hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
> "http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000=
> =true=false"
> HTTP/1.1 100 Continue
> HTTP/1.1 201 Created
> *Location: hdfs://9.135.15.26:9000/*
> *82.png*    
> Content-Length: 0
> Connection: close
>  
>  
> *The problem is that the Location header gets split by a newline; normally it 
> would be a single line, like this:*
> *Location: hdfs://9.135.15.26:9000/上游2.png    ,*
> *It causes a Knox "validate header" error, because the continuation line has no ":".*
> *!image-2022-06-17-17-34-43-294.png|width=615,height=393!*
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16636) Put Chinese Cause http header error,

2022-06-17 Thread lidayu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lidayu updated HDFS-16636:
--
Attachment: image-2022-06-17-17-34-43-294.png

> Put Chinese Cause http header error,
> 
>
> Key: HDFS-16636
> URL: https://issues.apache.org/jira/browse/HDFS-16636
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lidayu
>Priority: Major
> Attachments: image-2022-06-17-17-27-17-052.png, 
> image-2022-06-17-17-34-43-294.png
>
>
> *When we PUT a file whose filename contains Chinese characters, like this:*
> [hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
> "http://9.135.15.26:50070/webhdfs/v1/上游2.png?op=CREATE=hdfsadmin"la 
>                                                               
> HTTP/1.1 307 TEMPORARY_REDIRECT
> Cache-Control: no-cache
> Expires: Fri, 17 Jun 2022 09:26:09 GMT
> Date: Fri, 17 Jun 2022 09:26:09 GMT
> Pragma: no-cache
> Expires: Fri, 17 Jun 2022 09:26:09 GMT
> Date: Fri, 17 Jun 2022 09:26:09 GMT
> Pragma: no-cache
> X-FRAME-OPTIONS: SAMEORIGIN
> Set-Cookie: 
> hadoop.auth="u=hdfsadmin=hdfsadmin=simple=1655493969462=YvkwSAwETWr2BqfkRTvBHy0Yj2A=";
>  Path=/; HttpOnly
> Location: 
> http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false
> Content-Type: application/octet-stream
> Content-Length: 0
> [hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
> "http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000=
> =true=false"
> HTTP/1.1 100 Continue
> HTTP/1.1 201 Created
> *Location: hdfs://9.135.15.26:9000/*
> *82.png*    
> Content-Length: 0
> Connection: close
>  
>  
> *The problem is that the Location header gets split by a newline; normally it 
> would be a single line, like this:*
> *Location: hdfs://9.135.15.26:9000/上游2.png    ,*
> *It causes a Knox "validate header" error.*
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16636) Put Chinese Cause http header error,

2022-06-17 Thread lidayu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lidayu updated HDFS-16636:
--
Description: 
*When we PUT a file whose filename contains Chinese characters, like this:*

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://9.135.15.26:50070/webhdfs/v1/上游2.png?op=CREATE=hdfsadmin"la   
                                                            
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: 
hadoop.auth="u=hdfsadmin=hdfsadmin=simple=1655493969462=YvkwSAwETWr2BqfkRTvBHy0Yj2A=";
 Path=/; HttpOnly
Location: 
[http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false|http://vm-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false]
Content-Type: application/octet-stream
Content-Length: 0

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000=
=true=false"
HTTP/1.1 100 Continue

HTTP/1.1 201 Created
*Location: hdfs://9.135.15.26:9000/*
*82.png*    
Content-Length: 0
Connection: close

 

 

*The problem is that the Location header gets split by a newline; normally it 
would be a single line, like this:*

*Location: hdfs://9.135.15.26:9000/上游2.png    ,*

*It causes a Knox "validate header" error, because the continuation line has no ":".*

*!image-2022-06-17-17-34-43-294.png|width=615,height=393!*

 

  was:
*When we PUT a file whose filename contains Chinese characters, like this:*

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://9.135.15.26:50070/webhdfs/v1/上游2.png?op=CREATE=hdfsadmin"la   
                                                            
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: 
hadoop.auth="u=hdfsadmin=hdfsadmin=simple=1655493969462=YvkwSAwETWr2BqfkRTvBHy0Yj2A=";
 Path=/; HttpOnly
Location: 
http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false
Content-Type: application/octet-stream
Content-Length: 0

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000=
=true=false"
HTTP/1.1 100 Continue

HTTP/1.1 201 Created
*Location: hdfs://9.135.15.26:9000/*
*82.png*    
Content-Length: 0
Connection: close

 

 

*The problem is that the Location header gets split by a newline; normally it 
would be a single line, like this:*

*Location: hdfs://9.135.15.26:9000/上游2.png    ,*

*It causes a Knox "validate header" error.*

 


> Put Chinese Cause http header error,
> 
>
> Key: HDFS-16636
> URL: https://issues.apache.org/jira/browse/HDFS-16636
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lidayu
>Priority: Major
> Attachments: image-2022-06-17-17-27-17-052.png, 
> image-2022-06-17-17-34-43-294.png
>
>
> *When we PUT a file whose filename contains Chinese characters, like this:*
> [hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
> "http://9.135.15.26:50070/webhdfs/v1/上游2.png?op=CREATE=hdfsadmin"la 
>                                                               
> HTTP/1.1 307 TEMPORARY_REDIRECT
> Cache-Control: no-cache
> Expires: Fri, 17 Jun 2022 09:26:09 GMT
> Date: Fri, 17 Jun 2022 09:26:09 GMT
> Pragma: no-cache
> Expires: Fri, 17 Jun 2022 09:26:09 GMT
> Date: Fri, 17 Jun 2022 09:26:09 GMT
> Pragma: no-cache
> X-FRAME-OPTIONS: SAMEORIGIN
> Set-Cookie: 
> hadoop.auth="u=hdfsadmin=hdfsadmin=simple=1655493969462=YvkwSAwETWr2BqfkRTvBHy0Yj2A=";
>  Path=/; HttpOnly
> Location: 
> [http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false|http://vm-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false]
> Content-Type: application/octet-stream
> Content-Length: 0
> [hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
> "http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000=
> =true=false"
> HTTP/1.1 100 Continue
> HTTP/1.1 201 Created
> *Location: hdfs://9.135.15.26:9000/*
> *82.png*    
> Content-Length: 0
> Connection: close
>  
>  
> *The problem is that the Location header gets split by a newline; normally it 
> would be a single line, like this:*
> *Location: hdfs://9.135.15.26:9000/上游2.png    ,*
> *It causes a Knox "validate header" error, because the continuation line has no ":".*
> *!image-2022-06-17-17-34-43-294.png|width=615,height=393!*
>  



--

[jira] [Updated] (HDFS-16636) Put Chinese Cause http header error,

2022-06-17 Thread lidayu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lidayu updated HDFS-16636:
--
Description: 
*When we PUT a file whose filename contains Chinese characters, like this:*

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://9.135.15.26:50070/webhdfs/v1/上游2.png?op=CREATE=hdfsadmin"la   
                                                            
HTTP/1.1 307 TEMPORARY_REDIRECT
Cache-Control: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
Expires: Fri, 17 Jun 2022 09:26:09 GMT
Date: Fri, 17 Jun 2022 09:26:09 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: 
hadoop.auth="u=hdfsadmin=hdfsadmin=simple=1655493969462=YvkwSAwETWr2BqfkRTvBHy0Yj2A=";
 Path=/; HttpOnly
Location: 
http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false
Content-Type: application/octet-stream
Content-Length: 0

[hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
"http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000=
=true=false"
HTTP/1.1 100 Continue

HTTP/1.1 201 Created
*Location: hdfs://9.135.15.26:9000/*
*82.png*    
Content-Length: 0
Connection: close

 

 

*The problem is that the Location header gets split by a newline; normally it 
would be a single line, like this:*

*Location: hdfs://9.135.15.26:9000/上游2.png    ,*

*It causes a Knox "validate header" error.*

 

  was:
When we PUT a file whose filename contains Chinese characters, like this:

!image-2022-06-17-17-27-17-052.png!

 


> Put Chinese Cause http header error,
> 
>
> Key: HDFS-16636
> URL: https://issues.apache.org/jira/browse/HDFS-16636
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lidayu
>Priority: Major
> Attachments: image-2022-06-17-17-27-17-052.png
>
>
> *When we PUT a file whose filename contains Chinese characters, like this:*
> [hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
> "http://9.135.15.26:50070/webhdfs/v1/上游2.png?op=CREATE=hdfsadmin"la 
>                                                               
> HTTP/1.1 307 TEMPORARY_REDIRECT
> Cache-Control: no-cache
> Expires: Fri, 17 Jun 2022 09:26:09 GMT
> Date: Fri, 17 Jun 2022 09:26:09 GMT
> Pragma: no-cache
> Expires: Fri, 17 Jun 2022 09:26:09 GMT
> Date: Fri, 17 Jun 2022 09:26:09 GMT
> Pragma: no-cache
> X-FRAME-OPTIONS: SAMEORIGIN
> Set-Cookie: 
> hadoop.auth="u=hdfsadmin=hdfsadmin=simple=1655493969462=YvkwSAwETWr2BqfkRTvBHy0Yj2A=";
>  Path=/; HttpOnly
> Location: 
> http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000==true=false
> Content-Type: application/octet-stream
> Content-Length: 0
> [hdfsadmin@VM-15-26-centos ~]$ curl -i -X PUT 
> "http://VM-15-26-centos:50075/webhdfs/v1/%E4%B8%8A%E6%B8%B82.png?op=CREATE=hdfsadmin=9.135.15.26:9000=
> =true=false"
> HTTP/1.1 100 Continue
> HTTP/1.1 201 Created
> *Location: hdfs://9.135.15.26:9000/*
> *82.png*    
> Content-Length: 0
> Connection: close
>  
>  
> *The problem is that the Location header gets split by a newline; normally it 
> would be a single line, like this:*
> *Location: hdfs://9.135.15.26:9000/上游2.png    ,*
> *It causes a Knox "validate header" error.*
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16634) Dynamically adjust slow peer report size on JMX metrics

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16634?focusedWorklogId=782345=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782345
 ]

ASF GitHub Bot logged work on HDFS-16634:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 09:31
Start Date: 17/Jun/22 09:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4448:
URL: https://github.com/apache/hadoop/pull/4448#issuecomment-1158690095

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  9s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 50s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 21s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4448/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 58s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4448/1/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 377m 10s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4448/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 496m 47s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4448/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4448 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b57abcc008d2 

[jira] [Work logged] (HDFS-16635) Fix javadoc error in Java 11

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16635?focusedWorklogId=782340=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782340
 ]

ASF GitHub Bot logged work on HDFS-16635:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 09:27
Start Date: 17/Jun/22 09:27
Worklog Time Spent: 10m 
  Work Description: ashutoshcipher opened a new pull request, #4451:
URL: https://github.com/apache/hadoop/pull/4451

   ### Description of PR
   Fixed javadoc error in Java 11
   
   * JIRA: HDFS-16635
   
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   




Issue Time Tracking
---

Worklog Id: (was: 782340)
Remaining Estimate: 0h
Time Spent: 10m

> Fix javadoc error in Java 11
> 
>
> Key: HDFS-16635
> URL: https://issues.apache.org/jira/browse/HDFS-16635
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: newbie
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Javadoc build in Java 11 fails.
> {noformat}
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4410/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/package-info.java:20:
>  error: reference not found
> [ERROR]  * This package provides a mechanism for tracking {@link NameNode} 
> startup
> {noformat}
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16635) Fix javadoc error in Java 11

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16635:
--
Labels: newbie pull-request-available  (was: newbie)

> Fix javadoc error in Java 11
> 
>
> Key: HDFS-16635
> URL: https://issues.apache.org/jira/browse/HDFS-16635
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Javadoc build in Java 11 fails.
> {noformat}
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4410/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/package-info.java:20:
>  error: reference not found
> [ERROR]  * This package provides a mechanism for tracking {@link NameNode} 
> startup
> {noformat}
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16636) Put Chinese Cause http header error,

2022-06-17 Thread lidayu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lidayu updated HDFS-16636:
--
Description: 
When we PUT a file whose filename contains Chinese characters, like this:

!image-2022-06-17-17-27-17-052.png!

 

> Put Chinese Cause http header error,
> 
>
> Key: HDFS-16636
> URL: https://issues.apache.org/jira/browse/HDFS-16636
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lidayu
>Priority: Major
> Attachments: image-2022-06-17-17-27-17-052.png
>
>
> When we PUT a file whose filename contains Chinese characters, like this:
> !image-2022-06-17-17-27-17-052.png!
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16636) Put Chinese Cause http header error,

2022-06-17 Thread lidayu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lidayu updated HDFS-16636:
--
Attachment: image-2022-06-17-17-27-17-052.png

> Put Chinese Cause http header error,
> 
>
> Key: HDFS-16636
> URL: https://issues.apache.org/jira/browse/HDFS-16636
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lidayu
>Priority: Major
> Attachments: image-2022-06-17-17-27-17-052.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-16635) Fix javadoc error in Java 11

2022-06-17 Thread Ashutosh Gupta (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16635 started by Ashutosh Gupta.
-
> Fix javadoc error in Java 11
> 
>
> Key: HDFS-16635
> URL: https://issues.apache.org/jira/browse/HDFS-16635
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: newbie
>
> Javadoc build in Java 11 fails.
> {noformat}
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4410/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/package-info.java:20:
>  error: reference not found
> [ERROR]  * This package provides a mechanism for tracking {@link NameNode} 
> startup
> {noformat}
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-16635) Fix javadoc error in Java 11

2022-06-17 Thread Ashutosh Gupta (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Gupta reassigned HDFS-16635:
-

Assignee: Ashutosh Gupta

> Fix javadoc error in Java 11
> 
>
> Key: HDFS-16635
> URL: https://issues.apache.org/jira/browse/HDFS-16635
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: newbie
>
> Javadoc build in Java 11 fails.
> {noformat}
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4410/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/package-info.java:20:
>  error: reference not found
> [ERROR]  * This package provides a mechanism for tracking {@link NameNode} 
> startup
> {noformat}
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16636) Put Chinese Cause http header error,

2022-06-17 Thread lidayu (Jira)
lidayu created HDFS-16636:
-

 Summary: Put Chinese Cause http header error,
 Key: HDFS-16636
 URL: https://issues.apache.org/jira/browse/HDFS-16636
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: lidayu






--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16064) HDFS-721 causes DataNode decommissioning to get stuck indefinitely

2022-06-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16064?focusedWorklogId=782250=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782250
 ]

ASF GitHub Bot logged work on HDFS-16064:
-

Author: ASF GitHub Bot
Created on: 17/Jun/22 06:25
Start Date: 17/Jun/22 06:25
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on PR #4410:
URL: https://github.com/apache/hadoop/pull/4410#issuecomment-1158535982

   Filed HDFS-16635 to fix javadoc error.




Issue Time Tracking
---

Worklog Id: (was: 782250)
Time Spent: 1.5h  (was: 1h 20m)

> HDFS-721 causes DataNode decommissioning to get stuck indefinitely
> --
>
> Key: HDFS-16064
> URL: https://issues.apache.org/jira/browse/HDFS-16064
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Affects Versions: 3.2.1
>Reporter: Kevin Wikant
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> It seems that https://issues.apache.org/jira/browse/HDFS-721 was resolved as a 
> non-issue under the assumption that if the namenode & a datanode get into an 
> inconsistent state for a given block pipeline, there should be another 
> datanode available to replicate the block to.
> While testing datanode decommissioning using "dfs.exclude.hosts", I have 
> encountered a scenario where the decommissioning gets stuck indefinitely
> Below is the progression of events:
>  * there are initially 4 datanodes DN1, DN2, DN3, DN4
>  * scale-down is started by adding DN1 & DN2 to "dfs.exclude.hosts"
>  * HDFS block pipelines on DN1 & DN2 must now be replicated to DN3 & DN4 in 
> order to satisfy their minimum replication factor of 2
>  * during this replication process 
> https://issues.apache.org/jira/browse/HDFS-721 is encountered which causes 
> the following inconsistent state:
>  ** DN3 thinks it has the block pipeline in FINALIZED state
>  ** the namenode does not think DN3 has the block pipeline
> {code:java}
> 2021-06-06 10:38:23,604 INFO org.apache.hadoop.hdfs.server.datanode.DataNode 
> (DataXceiver for client  at /DN2:45654 [Receiving block BP-YYY:blk_XXX]): 
> DN3:9866:DataXceiver error processing WRITE_BLOCK operation  src: /DN2:45654 
> dst: /DN3:9866; 
> org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Block 
> BP-YYY:blk_XXX already exists in state FINALIZED and thus cannot be created.
> {code}
>  * the replication is attempted again, but:
>  ** DN4 has the block
>  ** DN1 and/or DN2 have the block, but don't count towards the minimum 
> replication factor because they are being decommissioned
>  ** DN3 does not have the block & cannot have the block replicated to it 
> because of HDFS-721
>  * the namenode repeatedly tries to replicate the block to DN3 & repeatedly 
> fails, this continues indefinitely
>  * therefore DN4 is the only live datanode with the block & the minimum 
> replication factor of 2 cannot be satisfied
>  * because the minimum replication factor cannot be satisfied for the 
> block(s) being moved off DN1 & DN2, the datanode decommissioning can never be 
> completed 
> {code:java}
> 2021-06-06 10:39:10,106 INFO BlockStateChange (DatanodeAdminMonitor-0): 
> Block: blk_XXX, Expected Replicas: 2, live replicas: 1, corrupt replicas: 0, 
> decommissioned replicas: 0, decommissioning replicas: 2, maintenance 
> replicas: 0, live entering maintenance replicas: 0, excess replicas: 0, Is 
> Open File: false, Datanodes having this block: DN1:9866 DN2:9866 DN4:9866 , 
> Current Datanode: DN1:9866, Is current datanode decommissioning: true, Is 
> current datanode entering maintenance: false
> ...
> 2021-06-06 10:57:10,105 INFO BlockStateChange (DatanodeAdminMonitor-0): 
> Block: blk_XXX, Expected Replicas: 2, live replicas: 1, corrupt replicas: 0, 
> decommissioned replicas: 0, decommissioning replicas: 2, maintenance 
> replicas: 0, live entering maintenance replicas: 0, excess replicas: 0, Is 
> Open File: false, Datanodes having this block: DN1:9866 DN2:9866 DN4:9866 , 
> Current Datanode: DN2:9866, Is current datanode decommissioning: true, Is 
> current datanode entering maintenance: false
> {code}
> Being stuck in decommissioning state forever is not an intended behavior of 
> DataNode decommissioning
> A few potential solutions:
>  * Address the root cause of the problem which is an inconsistent state 
> between namenode & datanode: https://issues.apache.org/jira/browse/HDFS-721
>  * Detect when datanode decommissioning is stuck due to lack of available 
> datanodes for satisfying the minimum replication factor, then recover by 
> re-enabling the datanodes being decommissioned
>  
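
To make the failure condition above concrete, here is a tiny illustrative sketch 
(hypothetical names, not the actual DatanodeAdminMonitor code) of the invariant 
that can never be met in this scenario: replicas on decommissioning nodes do not 
count as live, so with DN4 the only live holder and DN3 unable to accept the 
block, the live count stays at 1 against an expected 2, and every monitor scan 
re-queues the block:

{code:java}
// Hypothetical sketch of the check that keeps decommissioning pending.
// liveReplicas excludes replicas on decommissioning nodes (DN1, DN2), so
// here liveReplicas == 1 while expectedReplicas == 2, and this never
// returns true as long as DN3 keeps rejecting the replica.
static boolean isSufficientlyReplicated(int liveReplicas, int expectedReplicas) {
  return liveReplicas >= expectedReplicas;
}
{code}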



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (HDFS-16635) Fix javadoc error in Java 11

2022-06-17 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17555393#comment-17555393
 ] 

Akira Ajisaka commented on HDFS-16635:
--

I think we can remove the link to NameNode instead of importing the NameNode class.
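
For concreteness, a sketch of what that could look like in package-info.java 
(illustrative; the final patch may word it differently):

{code:java}
// Before (fails the Java 11 javadoc build once the NameNode import is gone):
//  * This package provides a mechanism for tracking {@link NameNode} startup

// After (plain text, no unresolvable reference):
/**
 * This package provides a mechanism for tracking NameNode startup progress.
 */
package org.apache.hadoop.hdfs.server.namenode.startupprogress;
{code}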

> Fix javadoc error in Java 11
> 
>
> Key: HDFS-16635
> URL: https://issues.apache.org/jira/browse/HDFS-16635
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Priority: Major
>
> Javadoc build in Java 11 fails.
> {noformat}
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4410/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/package-info.java:20:
>  error: reference not found
> [ERROR]  * This package provides a mechanism for tracking {@link NameNode} 
> startup
> {noformat}
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16635) Fix javadoc error in Java 11

2022-06-17 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-16635:
-
Labels: newbie  (was: )

> Fix javadoc error in Java 11
> 
>
> Key: HDFS-16635
> URL: https://issues.apache.org/jira/browse/HDFS-16635
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Priority: Major
>  Labels: newbie
>
> Javadoc build in Java 11 fails.
> {noformat}
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4410/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/package-info.java:20:
>  error: reference not found
> [ERROR]  * This package provides a mechanism for tracking {@link NameNode} 
> startup
> {noformat}
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16576) Remove unused imports in HDFS project

2022-06-17 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17555392#comment-17555392
 ] 

Akira Ajisaka commented on HDFS-16576:
--

Hi [~groot]
This commit caused the javadoc error reported in HDFS-16635. Sorry I missed the 
error while reviewing this PR. Could you fix it?

> Remove unused imports in HDFS project
> -
>
> Key: HDFS-16576
> URL: https://issues.apache.org/jira/browse/HDFS-16576
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> h3. Optimize Imports to keep code clean
>  # Remove any unused imports



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16635) Fix javadoc error in Java 11

2022-06-17 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-16635:
-
Target Version/s: 3.4.0, 3.3.4  (was: 3.4.0)

> Fix javadoc error in Java 11
> 
>
> Key: HDFS-16635
> URL: https://issues.apache.org/jira/browse/HDFS-16635
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Priority: Major
>
> Javadoc build in Java 11 fails.
> {noformat}
> [ERROR] 
> /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4410/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/package-info.java:20:
>  error: reference not found
> [ERROR]  * This package provides a mechanism for tracking {@link NameNode} 
> startup
> {noformat}
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16635) Fix javadoc error in Java 11

2022-06-17 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-16635:


 Summary: Fix javadoc error in Java 11
 Key: HDFS-16635
 URL: https://issues.apache.org/jira/browse/HDFS-16635
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, documentation
Reporter: Akira Ajisaka


Javadoc build in Java 11 fails.

{noformat}
[ERROR] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4410/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/package-info.java:20:
 error: reference not found
[ERROR]  * This package provides a mechanism for tracking {@link NameNode} 
startup
{noformat}

https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4410/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org