[jira] [Assigned] (HDFS-16414) Improve asynchronous edit: reduce the time that edit waits to enter editPendingQ

2022-01-05 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JiangHua Zhu reassigned HDFS-16414:
---

Assignee: JiangHua Zhu

> Improve asynchronous edit: reduce the time that edit waits to enter 
> editPendingQ
> ---
>
> Key: HDFS-16414
> URL: https://issues.apache.org/jira/browse/HDFS-16414
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs async, namenode
>Affects Versions: 2.9.2
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>
> When dfs.namenode.edits.asynclogging=true, FSEditLogAsync is enabled, which
> significantly improves the speed at which the NameNode processes calls.
> However, there is a problem: when many calls enter the NameNode via RPC
> within a short period, the handler threads can be forced to wait on the
> order of seconds. From FSEditLogAsync#enqueueEdit():
> if (Thread.holdsLock(this)) {
>   int permits = overflowMutex.drainPermits();
>   try {
>     do {
>       this.wait(1000); // will be notified by next logSync.
>     } while (!editPendingQ.offer(edit));
>   } finally {
>     overflowMutex.release(permits);
>   }
> }
> We should reduce the this.wait(1000) timeout here so that edits can be
> processed more quickly.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16414) Improve asynchronous edit: reduce the time that edit waits to enter editPendingQ

2022-01-05 Thread JiangHua Zhu (Jira)
JiangHua Zhu created HDFS-16414:
---

 Summary: Improve asynchronous edit: reduce the time that edit 
waits to enter editPendingQ
 Key: HDFS-16414
 URL: https://issues.apache.org/jira/browse/HDFS-16414
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: fs async, namenode
Affects Versions: 2.9.2
Reporter: JiangHua Zhu


When dfs.namenode.edits.asynclogging=true, FSEditLogAsync is enabled, which
significantly improves the speed at which the NameNode processes calls.
However, there is a problem: when many calls enter the NameNode via RPC
within a short period, the handler threads can be forced to wait on the
order of seconds. From FSEditLogAsync#enqueueEdit():

if (Thread.holdsLock(this)) {
  int permits = overflowMutex.drainPermits();
  try {
    do {
      this.wait(1000); // will be notified by next logSync.
    } while (!editPendingQ.offer(edit));
  } finally {
    overflowMutex.release(permits);
  }
}

We should reduce the this.wait(1000) timeout here so that edits can be
processed more quickly.
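The retry loop above can be sketched outside Hadoop as follows. This is a minimal, self-contained illustration (class and method names are hypothetical, not Hadoop code): a producer that finds the bounded queue full waits on the shared monitor with a timeout and retries offer(), so the wait timeout bounds how long a handler stalls if the consumer has already drained the queue but the notification is delayed or missed.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedEnqueue {
    private final BlockingQueue<String> pendingQ;
    private final long waitMillis;

    public BoundedEnqueue(int capacity, long waitMillis) {
        this.pendingQ = new ArrayBlockingQueue<>(capacity);
        this.waitMillis = waitMillis;
    }

    /** Retry offer() until it succeeds; wait at most waitMillis per attempt. */
    public synchronized void enqueue(String edit) throws InterruptedException {
        while (!pendingQ.offer(edit)) {
            // FSEditLogAsync uses this.wait(1000); the next logSync notifies
            // waiters after draining the queue. A smaller timeout bounds the
            // stall when the notification is delayed or missed.
            this.wait(waitMillis);
        }
    }

    /** Consumer side: drain one edit (if any) and wake blocked producers. */
    public synchronized String takeIfPresent() {
        String edit = pendingQ.poll();
        if (edit != null) {
            this.notifyAll();  // wake producers blocked in enqueue()
        }
        return edit;
    }
}
```

With a sufficiently small timeout, a producer blocked on a full queue re-checks offer() promptly instead of sleeping for a full second per round.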






[jira] [Work logged] (HDFS-16095) Add lsQuotaList command and getQuotaListing api for hdfs quota

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16095?focusedWorklogId=704407&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704407 ]

ASF GitHub Bot logged work on HDFS-16095:
-

Author: ASF GitHub Bot
Created on: 06/Jan/22 05:10
Start Date: 06/Jan/22 05:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3155:
URL: https://github.com/apache/hadoop/pull/3155#issuecomment-1006289619


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|------:|:------|:------:|:------:|
   | +0 :ok: |  reexec  |  17m 27s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  20m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   3m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m  2s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   5m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  10m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  cc  |  23m 13s | 
[/results-compile-cc-root-jdkUbuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3155/1/artifact/out/results-compile-cc-root-jdkUbuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04.txt)
 |  root-jdkUbuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 generated 23 new + 300 unchanged - 23 
fixed = 323 total (was 323)  |
   | -1 :x: |  javac  |  23m 13s | 
[/results-compile-javac-root-jdkUbuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3155/1/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04.txt)
 |  root-jdkUbuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 generated 1 new + 1933 unchanged - 0 
fixed = 1934 total (was 1933)  |
   | +1 :green_heart: |  compile  |  20m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  cc  |  20m 26s | 
[/results-compile-cc-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3155/1/artifact/out/results-compile-cc-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 1 new + 322 
unchanged - 1 fixed = 323 total (was 323)  |
   | -1 :x: |  javac  |  20m 26s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3155/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 1 new + 1807 
unchanged - 0 fixed = 1808 total (was 1807)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3155/1/artifact/out/blanks-eol.txt)
 |  The patch has 3 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 58s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3155/1/artifact/out/results-checkstyle-root.txt)
 |  

[jira] [Work logged] (HDFS-16403) Improve FUSE IO performance by supporting FUSE parameter max_background

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16403?focusedWorklogId=704388&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704388 ]

ASF GitHub Bot logged work on HDFS-16403:
-

Author: ASF GitHub Bot
Created on: 06/Jan/22 03:11
Start Date: 06/Jan/22 03:11
Worklog Time Spent: 10m 
  Work Description: cndaimin commented on pull request #3842:
URL: https://github.com/apache/hadoop/pull/3842#issuecomment-1006249942


   @sodonnel I have removed the unrelated code, could you please take a look 
again? Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 704388)
Time Spent: 2h 20m  (was: 2h 10m)

> Improve FUSE IO performance by supporting FUSE parameter max_background
> ---
>
> Key: HDFS-16403
> URL: https://issues.apache.org/jira/browse/HDFS-16403
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fuse-dfs
>Affects Versions: 3.3.0, 3.3.1
>Reporter: daimin
>Assignee: daimin
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> While examining FUSE IO performance on HDFS, we found that the number of
> simultaneous IO requests is limited to a fixed value, such as 12. This
> limitation makes IO performance on the FUSE client quite unacceptable. Our
> research, inspired by the article [Performance and Resource Utilization of
> FUSE User-Space File
> Systems|https://dl.acm.org/doi/fullHtml/10.1145/3310148], showed that the
> FUSE parameter '{{{}max_background{}}}' determines the number of
> simultaneous IO requests, and it is 12 by default.
> We added 'max_background' to the fuse_dfs mount options; the FUSE kernel
> applies the value when the option is given.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16316) Improve DirectoryScanner: add regular file check related block

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16316?focusedWorklogId=704383&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704383 ]

ASF GitHub Bot logged work on HDFS-16316:
-

Author: ASF GitHub Bot
Created on: 06/Jan/22 02:48
Start Date: 06/Jan/22 02:48
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3861:
URL: https://github.com/apache/hadoop/pull/3861#issuecomment-1006241356


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|------:|:------|:------:|:------:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 56s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  20m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  22m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  20m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javadoc  |   1m 45s | 
[/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3861/2/artifact/out/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10
 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  spotbugs  |   3m 50s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3861/2/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html)
 |  hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  23m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  17m 44s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3861/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  | 232m 43s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 13s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 464m 32s |  |  |
   
   
   | Reason | Tests |
   |------:|:------|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  Dead store to corruptBlock in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.checkAndUpdate(String,
 FsVolumeSpi$ScanInfo)  At 

[jira] [Work logged] (HDFS-16396) Reconfig slow peer parameters for datanode

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16396?focusedWorklogId=704378&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704378 ]

ASF GitHub Bot logged work on HDFS-16396:
-

Author: ASF GitHub Bot
Created on: 06/Jan/22 02:22
Start Date: 06/Jan/22 02:22
Worklog Time Spent: 10m 
  Work Description: tomscut commented on a change in pull request #3827:
URL: https://github.com/apache/hadoop/pull/3827#discussion_r779258490



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
##
@@ -642,13 +656,76 @@ public String reconfigurePropertyImpl(String property, String newVal)
       }
       break;
     }
+    case DFS_DATANODE_PEER_STATS_ENABLED_KEY:
+    case DFS_DATANODE_MIN_OUTLIER_DETECTION_NODES_KEY:
+    case DFS_DATANODE_SLOWPEER_LOW_THRESHOLD_MS_KEY:
+    case DFS_DATANODE_PEER_METRICS_MIN_OUTLIER_DETECTION_SAMPLES_KEY:
+      return reconfSlowPeerParameters(property, newVal);
     default:
       break;
     }
     throw new ReconfigurationException(
         property, newVal, getConf().get(property));
   }

+  private String reconfSlowPeerParameters(String property, String newVal)
+      throws ReconfigurationException {
+    String result;
+    try {
+      LOG.info("Reconfiguring {} to {}", property, newVal);
+      if (property.equals(DFS_DATANODE_PEER_STATS_ENABLED_KEY)) {
+        checkNotNull(dnConf, "DNConf has not been initialized.");
+        if (newVal != null && !newVal.equalsIgnoreCase("true")
+            && !newVal.equalsIgnoreCase("false")) {
+          throw new IllegalArgumentException("Not a valid Boolean value for "
+              + property + " in reconfSlowPeerParameters");
+        }
+        boolean enable = (newVal == null
+            ? DFS_DATANODE_PEER_STATS_ENABLED_DEFAULT
+            : Boolean.parseBoolean(newVal));
+        result = Boolean.toString(enable);
+        dnConf.setPeerStatsEnabled(enable);
+        if (enable) {
+          if (peerMetrics == null) {
+            peerMetrics = DataNodePeerMetrics.create(getDisplayName(), getConf());
+          }
+        } else {
+          peerMetrics = null;
Review comment:
   > Thanks @ayushtkn for your comments.
   > 
   > To avoid NPE, I set `peerStatsEnabled`(volatile) first, and then I add 
judgment `dnConf.peerStatsEnabled && peerMetrics != null` before using 
`peerMetrics`. Do you think this is ok?
   
   Hi @ayushtkn , what do you think of this solution? Thanks.
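The NPE-safe pattern being discussed can be sketched as follows. This is a minimal illustration with hypothetical names (StringBuilder stands in for DataNodePeerMetrics, which is not reproduced here): consumers read the volatile reference once into a local variable before the check, so a concurrent reconfiguration that flips the flag and nulls the reference cannot race the check-then-use.

```java
public class PeerMetricsHolder {
    private volatile boolean peerStatsEnabled;
    private volatile StringBuilder peerMetrics;  // stand-in for DataNodePeerMetrics

    public void enable() {
        peerMetrics = new StringBuilder();  // create the metrics object first...
        peerStatsEnabled = true;            // ...then publish the flag
    }

    public void disable() {
        peerStatsEnabled = false;           // unpublish the flag first...
        peerMetrics = null;                 // ...then drop the reference
    }

    /** Returns true if the sample was recorded. */
    public boolean record(String sample) {
        StringBuilder metrics = peerMetrics;  // single volatile read
        if (peerStatsEnabled && metrics != null) {
            metrics.append(sample);           // safe: the local copy cannot become null
            return true;
        }
        return false;
    }
}
```

The key point is never re-reading the volatile field between the null check and the use; the ordering in enable()/disable() keeps the flag from being observed true while the reference is null.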






Issue Time Tracking
---

Worklog Id: (was: 704378)
Time Spent: 2h  (was: 1h 50m)

> Reconfig slow peer parameters for datanode
> --
>
> Key: HDFS-16396
> URL: https://issues.apache.org/jira/browse/HDFS-16396
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In large clusters, a rolling restart of datanodes takes a long time. We can
> make the slow-peer and slow-disk parameters in the datanode reconfigurable
> to facilitate cluster operation and maintenance.






[jira] [Work logged] (HDFS-16413) Reconfig dfs usage parameters for datanode

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16413?focusedWorklogId=704374&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704374 ]

ASF GitHub Bot logged work on HDFS-16413:
-

Author: ASF GitHub Bot
Created on: 06/Jan/22 02:00
Start Date: 06/Jan/22 02:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3863:
URL: https://github.com/apache/hadoop/pull/3863#issuecomment-1006223517


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|------:|:------|:------:|:------:|
   | +0 :ok: |  reexec  |   1m 14s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 15s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  29m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  30m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  26m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  28m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  23m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 54s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3863/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 3 new + 212 unchanged - 0 fixed = 215 total (was 
212)  |
   | +1 :green_heart: |  mvnsite  |   3m 11s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  28m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 37s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 348m 13s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 10s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 611m 12s |  |  |
   
   
   | Subsystem | Report/Notes |
   |------:|:------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3863/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3863 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 8aefeb409351 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b0b27f4c11baf9360b9071c871835a20b65439af |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 

[jira] [Resolved] (HDFS-16371) Exclude slow disks when choosing volume

2022-01-05 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma resolved HDFS-16371.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> Exclude slow disks when choosing volume
> ---
>
> Key: HDFS-16371
> URL: https://issues.apache.org/jira/browse/HDFS-16371
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Currently, the datanode can detect slow disks; see HDFS-11461.
> After HDFS-16311, the slow disk information we collect is more accurate.
> So we can exclude these slow disks, according to some rules, when choosing a
> volume. This prevents slow disks from affecting the throughput of the whole
> datanode.
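The exclusion idea can be sketched as follows. This is a hedged illustration with hypothetical names, not the actual Hadoop volume-choosing policy: given the set of disks currently reported as slow, a chooser filters them out before applying the usual selection policy, and falls back to the full list if exclusion would leave no candidates (so a write never fails solely because every disk looks slow).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class SlowDiskAwareChooser {
    /**
     * Filter out volumes reported as slow; if every volume is slow,
     * fall back to the original list rather than failing the write.
     */
    public static List<String> candidates(List<String> volumes, Set<String> slowDisks) {
        List<String> healthy = new ArrayList<>();
        for (String v : volumes) {
            if (!slowDisks.contains(v)) {
                healthy.add(v);
            }
        }
        return healthy.isEmpty() ? volumes : healthy;
    }
}
```

The surviving candidates would then be handed to the existing round-robin or available-space policy unchanged.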






[jira] [Work logged] (HDFS-16371) Exclude slow disks when choosing volume

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16371?focusedWorklogId=704351&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704351 ]

ASF GitHub Bot logged work on HDFS-16371:
-

Author: ASF GitHub Bot
Created on: 06/Jan/22 00:33
Start Date: 06/Jan/22 00:33
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3753:
URL: https://github.com/apache/hadoop/pull/3753#issuecomment-1006187746


   Thanks @tasanuma for your review and merging this.




Issue Time Tracking
---

Worklog Id: (was: 704351)
Time Spent: 3.5h  (was: 3h 20m)

> Exclude slow disks when choosing volume
> ---
>
> Key: HDFS-16371
> URL: https://issues.apache.org/jira/browse/HDFS-16371
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Currently, the datanode can detect slow disks; see HDFS-11461.
> After HDFS-16311, the slow disk information we collect is more accurate.
> So we can exclude these slow disks, according to some rules, when choosing a
> volume. This prevents slow disks from affecting the throughput of the whole
> datanode.






[jira] [Work logged] (HDFS-16371) Exclude slow disks when choosing volume

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16371?focusedWorklogId=704347&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704347 ]

ASF GitHub Bot logged work on HDFS-16371:
-

Author: ASF GitHub Bot
Created on: 06/Jan/22 00:31
Start Date: 06/Jan/22 00:31
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #3753:
URL: https://github.com/apache/hadoop/pull/3753#issuecomment-1006186602


   Merged it. Thanks for your contribution, @tomscut!




Issue Time Tracking
---

Worklog Id: (was: 704347)
Time Spent: 3h 20m  (was: 3h 10m)

> Exclude slow disks when choosing volume
> ---
>
> Key: HDFS-16371
> URL: https://issues.apache.org/jira/browse/HDFS-16371
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Currently, the datanode can detect slow disks; see HDFS-11461.
> After HDFS-16311, the slow disk information we collect is more accurate.
> So we can exclude these slow disks, according to some rules, when choosing a
> volume. This prevents slow disks from affecting the throughput of the whole
> datanode.






[jira] [Work logged] (HDFS-16371) Exclude slow disks when choosing volume

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16371?focusedWorklogId=704346&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704346 ]

ASF GitHub Bot logged work on HDFS-16371:
-

Author: ASF GitHub Bot
Created on: 06/Jan/22 00:31
Start Date: 06/Jan/22 00:31
Worklog Time Spent: 10m 
  Work Description: tasanuma merged pull request #3753:
URL: https://github.com/apache/hadoop/pull/3753


   




Issue Time Tracking
---

Worklog Id: (was: 704346)
Time Spent: 3h 10m  (was: 3h)

> Exclude slow disks when choosing volume
> ---
>
> Key: HDFS-16371
> URL: https://issues.apache.org/jira/browse/HDFS-16371
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Currently, the datanode can detect slow disks; see HDFS-11461.
> After HDFS-16311, the slow disk information we collect is more accurate.
> So we can exclude these slow disks, according to some rules, when choosing a
> volume. This prevents slow disks from affecting the throughput of the whole
> datanode.






[jira] [Work logged] (HDFS-16371) Exclude slow disks when choosing volume

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16371?focusedWorklogId=704329&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704329 ]

ASF GitHub Bot logged work on HDFS-16371:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 23:50
Start Date: 05/Jan/22 23:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3753:
URL: https://github.com/apache/hadoop/pull/3753#issuecomment-1006166618


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|------:|:------|:------:|:------:|
   | +0 :ok: |  reexec  |   1m  9s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 372m 24s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 472m 19s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3753/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3753 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 3f6445cd199f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / eb505749011297893ed44edbcd691387e0acf5da |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3753/6/testReport/ |
   | Max. process+thread count | 2856 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3753/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
  

[jira] [Work logged] (HDFS-11107) TestStartup#testStorageBlockContentsStaleAfterNNRestart fails intermittently

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-11107?focusedWorklogId=704209&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704209
 ]

ASF GitHub Bot logged work on HDFS-11107:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 20:30
Start Date: 05/Jan/22 20:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3862:
URL: https://github.com/apache/hadoop/pull/3862#issuecomment-1006055417


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 17s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 228m 14s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 330m  6s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3862/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3862 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 13c72faf5be9 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 67c64a0598c4f485aa53a8dc732f99853a27eea2 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3862/1/testReport/ |
   | Max. process+thread count | 3027 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3862/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   

[jira] [Assigned] (HDFS-14614) Add Secure Flag for DataNode Web UI Cookies

2022-01-05 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian reassigned HDFS-14614:
-

Assignee: (was: Vivek Ratnavel Subramanian)

> Add Secure Flag for DataNode Web UI Cookies
> ---
>
> Key: HDFS-14614
> URL: https://issues.apache.org/jira/browse/HDFS-14614
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> It looks like HDFS-7279 removed Secure Flag for DataNode Web UI. I think we 
> should add it back.
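No patch is attached in this thread; as a rough illustration of the requested change, a session cookie can be marked Secure (and HttpOnly) before it is sent, so browsers only transmit it over HTTPS. The class and helper names below are illustrative, not the actual DataNode web UI code:

```java
import java.net.HttpCookie;

public class SecureCookieSketch {

    // Hypothetical helper: harden a web UI cookie so it is only sent over
    // TLS (Secure) and is hidden from page scripts (HttpOnly).
    static HttpCookie hardenCookie(String name, String value) {
        HttpCookie cookie = new HttpCookie(name, value);
        cookie.setSecure(true);   // only transmit over HTTPS
        cookie.setHttpOnly(true); // not readable from JavaScript
        return cookie;
    }

    public static void main(String[] args) {
        HttpCookie c = hardenCookie("hadoop.auth", "token");
        System.out.println(c.getSecure()); // true
    }
}
```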



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16403) Improve FUSE IO performance by supporting FUSE parameter max_background

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16403?focusedWorklogId=704154&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704154
 ]

ASF GitHub Bot logged work on HDFS-16403:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 18:33
Start Date: 05/Jan/22 18:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3842:
URL: https://github.com/apache/hadoop/pull/3842#issuecomment-1005977842


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  24m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   3m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 19s |  |  
branch/hadoop-hdfs-project/hadoop-hdfs-native-client no spotbugs output file 
(spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  22m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  cc  |   3m 24s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   3m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  cc  |   3m 27s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   3m 27s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 10s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 14s |  |  
hadoop-hdfs-project/hadoop-hdfs-native-client has no data from spotbugs  |
   | +1 :green_heart: |  shadedclient  |  22m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  76m 21s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 166m 54s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3842/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3842 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell shellcheck shelldocs cc golang spotbugs 
checkstyle |
   | uname | Linux 8f72fd1d301e 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ae8786d87220bc2ef8eb1725fdab63d19f20c6c0 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 

[jira] [Resolved] (HDFS-16410) Insecure Xml parsing in OfflineEditsXmlLoader

2022-01-05 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun resolved HDFS-16410.
-
Fix Version/s: 3.4.0
   3.3.2
   Resolution: Fixed

> Insecure Xml parsing in OfflineEditsXmlLoader 
> --
>
> Key: HDFS-16410
> URL: https://issues.apache.org/jira/browse/HDFS-16410
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.1
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available, security
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Insecure Xml parsing in OfflineEditsXmlLoader 
> [https://github.com/apache/hadoop/blob/03cfc852791c14fad39db4e5b14104a276c08e59/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsXmlLoader.java#L88]
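The linked OfflineEditsXmlLoader uses a SAX parser; the standard hardening against XXE-style attacks, and what fixes for this class of issue typically look like, is to disable DTDs and external entities before parsing untrusted XML. This is a generic sketch under those assumptions, not the actual patch:

```java
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

public class SafeXmlParsing {

    // Build a SAX parser factory with XXE protections: DOCTYPE declarations
    // are rejected outright and external entity resolution is disabled.
    static SAXParserFactory newSafeFactory() {
        try {
            SAXParserFactory factory = SAXParserFactory.newInstance();
            factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
            factory.setFeature("http://xml.org/sax/features/external-general-entities", false);
            factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
            factory.setXIncludeAware(false);
            return factory;
        } catch (Exception e) {
            throw new IllegalStateException("failed to configure secure XML parser", e);
        }
    }

    public static void main(String[] args) throws Exception {
        SAXParser parser = newSafeFactory().newSAXParser();
        System.out.println(parser.isXIncludeAware()); // false
    }
}
```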



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16410) Insecure Xml parsing in OfflineEditsXmlLoader

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16410?focusedWorklogId=704141&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704141
 ]

ASF GitHub Bot logged work on HDFS-16410:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 18:02
Start Date: 05/Jan/22 18:02
Worklog Time Spent: 10m 
  Work Description: sunchao commented on pull request #3854:
URL: https://github.com/apache/hadoop/pull/3854#issuecomment-1005952115


   @steveloughran sure, I've cherry-picked it to branch-3.3 and branch-3.3.2


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 704141)
Time Spent: 1h 20m  (was: 1h 10m)

> Insecure Xml parsing in OfflineEditsXmlLoader 
> --
>
> Key: HDFS-16410
> URL: https://issues.apache.org/jira/browse/HDFS-16410
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.1
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available, security
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Insecure Xml parsing in OfflineEditsXmlLoader 
> [https://github.com/apache/hadoop/blob/03cfc852791c14fad39db4e5b14104a276c08e59/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsXmlLoader.java#L88]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16408) Ensure LeaseRecheckIntervalMs is greater than zero

2022-01-05 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-16408:

Fix Version/s: 3.3.2
   (was: 3.3.3)

> Ensure LeaseRecheckIntervalMs is greater than zero
> --
>
> Key: HDFS-16408
> URL: https://issues.apache.org/jira/browse/HDFS-16408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3, 3.3.1
>Reporter: Jingxuan Fu
>Assignee: Jingxuan Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2, 3.2.4
>
>   Original Estimate: 1h
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> There is a problem with the try-catch statement in the LeaseMonitor daemon 
> (in LeaseManager.java): when an unknown exception is caught, it simply prints 
> a warning message and continues with the next loop. 
> An extreme case is when the configuration item 
> 'dfs.namenode.lease-recheck-interval-ms' is accidentally set to a negative 
> number by the user. Because the configuration item is read without checking 
> its range, 'FSNamesystem.getLeaseRecheckIntervalMs()' returns this value, 
> which is then used as an argument to Thread.sleep(). A negative argument 
> causes Thread.sleep() to throw an IllegalArgumentException, which is caught 
> by 'catch (Throwable e)', and a warning message is printed. 
> This behavior repeats on each subsequent loop iteration, so a huge amount of 
> repetitive messages is written to the log file in a short period of time, 
> quickly consuming disk space and affecting the operation of the system.
> As you can see, 178M log files are generated in one minute.
>  
> {code:java}
> ll logs/
> total 174456
> drwxrwxr-x  2 hadoop hadoop      4096 1月   3 15:13 ./
> drwxr-xr-x 11 hadoop hadoop      4096 1月   3 15:13 ../
> -rw-rw-r--  1 hadoop hadoop     36342 1月   3 15:14 
> hadoop-hadoop-datanode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop      1243 1月   3 15:13 
> hadoop-hadoop-datanode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop 178545466 1月   3 15:14 
> hadoop-hadoop-namenode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop       692 1月   3 15:13 
> hadoop-hadoop-namenode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop     33201 1月   3 15:14 
> hadoop-hadoop-secondarynamenode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop      3764 1月   3 15:14 
> hadoop-hadoop-secondarynamenode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop         0 1月   3 15:13 SecurityAuth-hadoop.audit
>  
> tail -n 15 logs/hadoop-hadoop-namenode-ljq1.log 
> 2022-01-03 15:14:46,032 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2022-01-03 15:14:46,033 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2022-01-03 15:14:46,033 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> {code}
>  
> I think there are two potential solutions. 
> The first is to adjust the position of the try-catch statement in the 
> LeaseMonitor daemon by moving 'catch (Throwable e)' outside the loop body. 
> This can be done as in the NameNodeResourceMonitor daemon, which ends the 
> thread when an unexpected exception is caught. 
> The second is to use Preconditions.checkArgument() to validate the range of 
> the configuration item 'dfs.namenode.lease-recheck-interval-ms' when it is 
> read, so that a wrong configuration value cannot affect the subsequent 
> operation of the program.
>  
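The second proposed fix can be sketched without any Guava dependency by validating the interval at read time, so a non-positive value fails fast instead of reaching Thread.sleep(). The method and message below are illustrative, not the actual patch:

```java
public class LeaseIntervalCheck {

    // Sketch: validate dfs.namenode.lease-recheck-interval-ms when it is
    // read from the configuration, rather than letting a negative value
    // reach Thread.sleep() inside the LeaseMonitor loop.
    static long checkRecheckInterval(long configuredMs) {
        if (configuredMs <= 0) {
            throw new IllegalArgumentException(
                "dfs.namenode.lease-recheck-interval-ms must be > 0, got " + configuredMs);
        }
        return configuredMs;
    }

    public static void main(String[] args) {
        System.out.println(checkRecheckInterval(2000)); // prints 2000
    }
}
```

With this guard in place, a misconfigured negative interval surfaces once at startup instead of flooding the NameNode log with repeated IllegalArgumentException warnings.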



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16316) Improve DirectoryScanner: add regular file check related block

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16316?focusedWorklogId=704128&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704128
 ]

ASF GitHub Bot logged work on HDFS-16316:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 17:46
Start Date: 05/Jan/22 17:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3861:
URL: https://github.com/apache/hadoop/pull/3861#issuecomment-1005940570


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 39s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  28m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  compile  |   8m 33s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3861/1/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -0 :warning: |  checkstyle  |   0m 39s | 
[/buildtool-branch-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3861/1/artifact/out/buildtool-branch-checkstyle-root.txt)
 |  The patch fails to run checkstyle in root  |
   | -1 :x: |  mvnsite  |   0m 40s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3861/1/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 40s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3861/1/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 41s | 
[/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3861/1/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | +1 :green_heart: |  javadoc  |   3m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m 54s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |   8m 47s |  |  branch has errors when building 
and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  23m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |  19m 27s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3861/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 811 new + 995 
unchanged - 0 fixed = 1806 total (was 995)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 44s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3861/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 148 new + 0 unchanged - 0 fixed = 148 total (was 
0)  |
   | +1 :green_heart: |  mvnsite  |   3m 18s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 11s | 

[jira] [Resolved] (HDFS-16408) Ensure LeaseRecheckIntervalMs is greater than zero

2022-01-05 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell resolved HDFS-16408.
--
Resolution: Fixed

> Ensure LeaseRecheckIntervalMs is greater than zero
> --
>
> Key: HDFS-16408
> URL: https://issues.apache.org/jira/browse/HDFS-16408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3, 3.3.1
>Reporter: Jingxuan Fu
>Assignee: Jingxuan Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.3
>
>   Original Estimate: 1h
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> There is a problem with the try-catch statement in the LeaseMonitor daemon 
> (in LeaseManager.java): when an unknown exception is caught, it simply prints 
> a warning message and continues with the next loop. 
> An extreme case is when the configuration item 
> 'dfs.namenode.lease-recheck-interval-ms' is accidentally set to a negative 
> number by the user. Because the configuration item is read without checking 
> its range, 'FSNamesystem.getLeaseRecheckIntervalMs()' returns this value, 
> which is then used as an argument to Thread.sleep(). A negative argument 
> causes Thread.sleep() to throw an IllegalArgumentException, which is caught 
> by 'catch (Throwable e)', and a warning message is printed. 
> This behavior repeats on each subsequent loop iteration, so a huge amount of 
> repetitive messages is written to the log file in a short period of time, 
> quickly consuming disk space and affecting the operation of the system.
> As you can see, 178M log files are generated in one minute.
>  
> {code:java}
> ll logs/
> total 174456
> drwxrwxr-x  2 hadoop hadoop      4096 1月   3 15:13 ./
> drwxr-xr-x 11 hadoop hadoop      4096 1月   3 15:13 ../
> -rw-rw-r--  1 hadoop hadoop     36342 1月   3 15:14 
> hadoop-hadoop-datanode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop      1243 1月   3 15:13 
> hadoop-hadoop-datanode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop 178545466 1月   3 15:14 
> hadoop-hadoop-namenode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop       692 1月   3 15:13 
> hadoop-hadoop-namenode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop     33201 1月   3 15:14 
> hadoop-hadoop-secondarynamenode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop      3764 1月   3 15:14 
> hadoop-hadoop-secondarynamenode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop         0 1月   3 15:13 SecurityAuth-hadoop.audit
>  
> tail -n 15 logs/hadoop-hadoop-namenode-ljq1.log 
> 2022-01-03 15:14:46,032 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2022-01-03 15:14:46,033 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2022-01-03 15:14:46,033 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> {code}
>  
> I think there are two potential solutions. 
> The first is to adjust the position of the try-catch statement in the 
> LeaseMonitor daemon by moving 'catch (Throwable e)' outside the loop body. 
> This can be done as in the NameNodeResourceMonitor daemon, which ends the 
> thread when an unexpected exception is caught. 
> The second is to use Preconditions.checkArgument() to validate the range of 
> the configuration item 'dfs.namenode.lease-recheck-interval-ms' when it is 
> read, so that a wrong configuration value cannot affect the subsequent 
> operation of the program.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16408) Ensure LeaseRecheckIntervalMs is greater than zero

2022-01-05 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-16408:
-
Fix Version/s: 3.4.0
   3.2.4
   3.3.3

> Ensure LeaseRecheckIntervalMs is greater than zero
> --
>
> Key: HDFS-16408
> URL: https://issues.apache.org/jira/browse/HDFS-16408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3, 3.3.1
>Reporter: Jingxuan Fu
>Assignee: Jingxuan Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.4, 3.3.3
>
>   Original Estimate: 1h
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> There is a problem with the try-catch statement in the LeaseMonitor daemon 
> (in LeaseManager.java): when an unknown exception is caught, it simply prints 
> a warning message and continues with the next loop. 
> An extreme case is when the configuration item 
> 'dfs.namenode.lease-recheck-interval-ms' is accidentally set to a negative 
> number by the user. Because the configuration item is read without checking 
> its range, 'FSNamesystem.getLeaseRecheckIntervalMs()' returns this value, 
> which is then used as an argument to Thread.sleep(). A negative argument 
> causes Thread.sleep() to throw an IllegalArgumentException, which is caught 
> by 'catch (Throwable e)', and a warning message is printed. 
> This behavior repeats on each subsequent loop iteration, so a huge amount of 
> repetitive messages is written to the log file in a short period of time, 
> quickly consuming disk space and affecting the operation of the system.
> As you can see, 178M log files are generated in one minute.
>  
> {code:java}
> ll logs/
> total 174456
> drwxrwxr-x  2 hadoop hadoop      4096 1月   3 15:13 ./
> drwxr-xr-x 11 hadoop hadoop      4096 1月   3 15:13 ../
> -rw-rw-r--  1 hadoop hadoop     36342 1月   3 15:14 
> hadoop-hadoop-datanode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop      1243 1月   3 15:13 
> hadoop-hadoop-datanode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop 178545466 1月   3 15:14 
> hadoop-hadoop-namenode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop       692 1月   3 15:13 
> hadoop-hadoop-namenode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop     33201 1月   3 15:14 
> hadoop-hadoop-secondarynamenode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop      3764 1月   3 15:14 
> hadoop-hadoop-secondarynamenode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop         0 1月   3 15:13 SecurityAuth-hadoop.audit
>  
> tail -n 15 logs/hadoop-hadoop-namenode-ljq1.log 
> 2022-01-03 15:14:46,032 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2022-01-03 15:14:46,033 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2022-01-03 15:14:46,033 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> {code}
>  
> I see two potential solutions. 
> The first is to adjust the position of the try-catch statement in the 
> LeaseMonitor daemon by moving 'catch (Throwable e)' outside the loop body. 
> This mirrors the NameNodeResourceMonitor daemon, which ends its thread when 
> an unexpected exception is caught. 
> The second is to use Preconditions.checkArgument() to validate the 
> configuration item 'dfs.namenode.lease-recheck-interval-ms' when it is read, 
> so that a wrong configuration value cannot affect the subsequent operation 
> of the program.
>  
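The second suggestion can be sketched as follows. This is an illustrative sketch only, not the actual Hadoop patch: the method name validateLeaseRecheckInterval is hypothetical, and a plain range check stands in for Guava's Preconditions.checkArgument() to keep the example dependency-free.

```java
public class LeaseRecheckIntervalCheck {

    // HDFS default for dfs.namenode.lease-recheck-interval-ms is 2000 ms.
    static final long DEFAULT_INTERVAL_MS = 2000L;

    // Hypothetical helper: range-check the configured value when it is
    // read, so a negative number fails fast instead of reaching
    // Thread.sleep() inside the LeaseMonitor loop. In Hadoop proper this
    // would be Preconditions.checkArgument(configuredMs > 0, ...).
    static long validateLeaseRecheckInterval(long configuredMs) {
        if (configuredMs <= 0) {
            throw new IllegalArgumentException(
                "dfs.namenode.lease-recheck-interval-ms must be positive, got "
                    + configuredMs);
        }
        return configuredMs;
    }

    public static void main(String[] args) {
        // A sane value passes through unchanged.
        System.out.println(validateLeaseRecheckInterval(DEFAULT_INTERVAL_MS));

        // A negative value is rejected at configuration-read time, long
        // before the LeaseMonitor daemon could spin on it.
        try {
            validateLeaseRecheckInterval(-1L);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Failing fast here also removes the need for the per-iteration warning that floods the log in the report above.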



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16371) Exclude slow disks when choosing volume

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16371?focusedWorklogId=704080&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704080
 ]

ASF GitHub Bot logged work on HDFS-16371:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 15:55
Start Date: 05/Jan/22 15:55
Worklog Time Spent: 10m 
  Work Description: tomscut commented on a change in pull request #3753:
URL: https://github.com/apache/hadoop/pull/3753#discussion_r778934705



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeDiskMetrics.java
##
@@ -127,6 +142,21 @@ public void run() {
 
 detectAndUpdateDiskOutliers(metadataOpStats, readIoStats,
 writeIoStats);
+
+// Sort the slow disks by latency and .

Review comment:
   @tasanuma Happy New Year! I am sorry for this and I will fix it right 
away.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 704080)
Time Spent: 2h 50m  (was: 2h 40m)

> Exclude slow disks when choosing volume
> ---
>
> Key: HDFS-16371
> URL: https://issues.apache.org/jira/browse/HDFS-16371
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Currently, the datanode can detect slow disks. See HDFS-11461.
> After HDFS-16311, the slow disk information we collect is more accurate.
> So we can exclude these slow disks, according to some rules, when choosing a 
> volume. This will prevent slow disks from affecting the throughput of the 
> whole datanode.
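As a rough sketch of what such an exclusion rule could look like — the class and method names below are hypothetical, not the actual patch; the real change lives in the datanode's volume-choosing policy and would draw on the slow-disk report from DataNodeDiskMetrics:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class SlowDiskExcludingPolicy {

    // Illustrative rule: drop volumes currently reported as slow disks,
    // unless that would leave no candidates at all (a datanode must still
    // be able to write when every disk is flagged slow).
    static List<String> chooseCandidates(List<String> volumes,
                                         Set<String> slowDisks) {
        List<String> healthy = volumes.stream()
            .filter(v -> !slowDisks.contains(v))
            .collect(Collectors.toList());
        return healthy.isEmpty() ? volumes : healthy;
    }
}
```

The fallback branch is the interesting design choice: excluding slow disks is a throughput optimization, so it must never turn into an availability problem.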



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16410) Insecure Xml parsing in OfflineEditsXmlLoader

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16410?focusedWorklogId=704076&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704076
 ]

ASF GitHub Bot logged work on HDFS-16410:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 15:52
Start Date: 05/Jan/22 15:52
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3854:
URL: https://github.com/apache/hadoop/pull/3854#issuecomment-1005846738


   @sunchao you think this is needed in the 3.3.2 release?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 704076)
Time Spent: 1h 10m  (was: 1h)

> Insecure Xml parsing in OfflineEditsXmlLoader 
> --
>
> Key: HDFS-16410
> URL: https://issues.apache.org/jira/browse/HDFS-16410
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.1
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available, security
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Insecure Xml parsing in OfflineEditsXmlLoader 
> [https://github.com/apache/hadoop/blob/03cfc852791c14fad39db4e5b14104a276c08e59/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsXmlLoader.java#L88]
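For reference, the usual hardening for a SAX-based loader such as this one is to disable DTDs and external entity resolution on the parser factory before any untrusted edits file is parsed. The sketch below is a generic illustration using the standard Xerces feature URIs, not the exact change in the linked code:

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

public class SecureSaxExample {

    // Build a factory that refuses DOCTYPE declarations and external
    // entities, closing off XXE and billion-laughs style attacks.
    public static SAXParserFactory newHardenedFactory() throws Exception {
        SAXParserFactory f = SAXParserFactory.newInstance();
        f.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        f.setFeature("http://xml.org/sax/features/external-general-entities", false);
        f.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        return f;
    }

    public static void main(String[] args) throws Exception {
        SAXParserFactory f = newHardenedFactory();

        // Plain XML still parses fine.
        f.newSAXParser().parse(
            new InputSource(new StringReader("<a><b/></a>")),
            new DefaultHandler());
        System.out.println("plain ok");

        // A DOCTYPE is rejected before any entity can be expanded.
        try {
            f.newSAXParser().parse(
                new InputSource(new StringReader(
                    "<!DOCTYPE a [<!ENTITY x \"boom\">]><a>&x;</a>")),
                new DefaultHandler());
        } catch (SAXException e) {
            System.out.println("doctype rejected");
        }
    }
}
```

Disallowing the DOCTYPE outright is the simplest of these settings; the two entity features are belt-and-braces for parsers that cannot honor it.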



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16410) Insecure Xml parsing in OfflineEditsXmlLoader

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16410?focusedWorklogId=704073&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704073
 ]

ASF GitHub Bot logged work on HDFS-16410:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 15:51
Start Date: 05/Jan/22 15:51
Worklog Time Spent: 10m 
  Work Description: steveloughran merged pull request #3854:
URL: https://github.com/apache/hadoop/pull/3854


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 704073)
Time Spent: 1h  (was: 50m)

> Insecure Xml parsing in OfflineEditsXmlLoader 
> --
>
> Key: HDFS-16410
> URL: https://issues.apache.org/jira/browse/HDFS-16410
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.1
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available, security
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Insecure Xml parsing in OfflineEditsXmlLoader 
> [https://github.com/apache/hadoop/blob/03cfc852791c14fad39db4e5b14104a276c08e59/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsXmlLoader.java#L88]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16413) Reconfig dfs usage parameters for datanode

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16413:
--
Labels: pull-request-available  (was: )

> Reconfig dfs usage parameters for datanode
> --
>
> Key: HDFS-16413
> URL: https://issues.apache.org/jira/browse/HDFS-16413
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Reconfig dfs usage parameters for datanode.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16413) Reconfig dfs usage parameters for datanode

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16413?focusedWorklogId=704070&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704070
 ]

ASF GitHub Bot logged work on HDFS-16413:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 15:47
Start Date: 05/Jan/22 15:47
Worklog Time Spent: 10m 
  Work Description: tomscut opened a new pull request #3863:
URL: https://github.com/apache/hadoop/pull/3863


   JIRA: [HDFS-16413](https://issues.apache.org/jira/browse/HDFS-16413).
   
   Reconfig dfs usage parameters for datanode.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 704070)
Remaining Estimate: 0h
Time Spent: 10m

> Reconfig dfs usage parameters for datanode
> --
>
> Key: HDFS-16413
> URL: https://issues.apache.org/jira/browse/HDFS-16413
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Reconfig dfs usage parameters for datanode.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16413) Reconfig dfs usage parameters for datanode

2022-01-05 Thread tomscut (Jira)
tomscut created HDFS-16413:
--

 Summary: Reconfig dfs usage parameters for datanode
 Key: HDFS-16413
 URL: https://issues.apache.org/jira/browse/HDFS-16413
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: tomscut
Assignee: tomscut


Reconfig dfs usage parameters for datanode.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16403) Improve FUSE IO performance by supporting FUSE parameter max_background

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16403?focusedWorklogId=704068&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704068
 ]

ASF GitHub Bot logged work on HDFS-16403:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 15:46
Start Date: 05/Jan/22 15:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3842:
URL: https://github.com/apache/hadoop/pull/3842#issuecomment-1005840959


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  21m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 21s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 22s |  |  
branch/hadoop-hdfs-project/hadoop-hdfs-native-client no spotbugs output file 
(spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  31m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m  3s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   3m  3s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   3m  3s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m  3s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 11s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  |  the patch passed  |
   | +0 :ok: |  spotbugs  |   0m 16s |  |  
hadoop-hdfs-project/hadoop-hdfs-native-client has no data from spotbugs  |
   | +1 :green_heart: |  shadedclient  |  30m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  58m 51s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 189m 38s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3842/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3842 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell shellcheck shelldocs cc golang spotbugs 
checkstyle |
   | uname | Linux 5c26a2bdf7e1 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ae8786d87220bc2ef8eb1725fdab63d19f20c6c0 |
   | Default Java | Debian-11.0.13+8-post-Debian-1deb10u1 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3842/3/testReport/ |
   | Max. process+thread count | 610 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3842/3/console |
   | versions | git=2.20.1 maven=3.6.0 shellcheck=0.5.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 704068)
Time Spent: 2h  (was: 1h 50m)

> Improve FUSE IO performance by supporting FUSE 

[jira] [Assigned] (HDFS-16408) Ensure LeaseRecheckIntervalMs is greater than zero

2022-01-05 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell reassigned HDFS-16408:


Assignee: Jingxuan Fu  (was: Stephen O'Donnell)

> Ensure LeaseRecheckIntervalMs is greater than zero
> --
>
> Key: HDFS-16408
> URL: https://issues.apache.org/jira/browse/HDFS-16408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3, 3.3.1
>Reporter: Jingxuan Fu
>Assignee: Jingxuan Fu
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> There is a problem with the try-catch statement in the LeaseMonitor daemon 
> (in LeaseManager.java): when an unknown exception is caught, it simply prints 
> a warning message and continues with the next loop iteration. 
> An extreme case is when the configuration item 
> 'dfs.namenode.lease-recheck-interval-ms' is accidentally set to a negative 
> number by the user. Because the configuration item is read without checking 
> its range, 'FSNamesystem.getLeaseRecheckIntervalMs()' returns this value and 
> it is used as an argument to Thread.sleep(). A negative argument causes 
> Thread.sleep() to throw an IllegalArgumentException, which is caught by 
> 'catch (Throwable e)', and a warning message is printed. 
> This behavior repeats on every subsequent iteration, meaning a huge amount 
> of repetitive messages is printed to the log file in a short period of time, 
> quickly consuming disk space and affecting the operation of the system.
> As shown below, a 178 MB log file is generated in one minute.
>  
> {code:java}
> ll logs/
> total 174456
> drwxrwxr-x  2 hadoop hadoop      4096 1月   3 15:13 ./
> drwxr-xr-x 11 hadoop hadoop      4096 1月   3 15:13 ../
> -rw-rw-r--  1 hadoop hadoop     36342 1月   3 15:14 
> hadoop-hadoop-datanode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop      1243 1月   3 15:13 
> hadoop-hadoop-datanode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop 178545466 1月   3 15:14 
> hadoop-hadoop-namenode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop       692 1月   3 15:13 
> hadoop-hadoop-namenode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop     33201 1月   3 15:14 
> hadoop-hadoop-secondarynamenode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop      3764 1月   3 15:14 
> hadoop-hadoop-secondarynamenode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop         0 1月   3 15:13 SecurityAuth-hadoop.audit
>  
> tail -n 15 logs/hadoop-hadoop-namenode-ljq1.log 
> 2022-01-03 15:14:46,032 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2022-01-03 15:14:46,033 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2022-01-03 15:14:46,033 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> {code}
>  
> I see two potential solutions. 
> The first is to adjust the position of the try-catch statement in the 
> LeaseMonitor daemon by moving 'catch (Throwable e)' outside the loop body. 
> This mirrors the NameNodeResourceMonitor daemon, which ends its thread when 
> an unexpected exception is caught. 
> The second is to use Preconditions.checkArgument() to validate the 
> configuration item 'dfs.namenode.lease-recheck-interval-ms' when it is read, 
> so that a wrong configuration value cannot affect the subsequent operation 
> of the program.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-16408) Ensure LeaseRecheckIntervalMs is greater than zero

2022-01-05 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell reassigned HDFS-16408:


Assignee: Stephen O'Donnell

> Ensure LeaseRecheckIntervalMs is greater than zero
> --
>
> Key: HDFS-16408
> URL: https://issues.apache.org/jira/browse/HDFS-16408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3, 3.3.1
>Reporter: Jingxuan Fu
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> There is a problem with the try-catch statement in the LeaseMonitor daemon 
> (in LeaseManager.java): when an unknown exception is caught, it simply prints 
> a warning message and continues with the next loop iteration. 
> An extreme case is when the configuration item 
> 'dfs.namenode.lease-recheck-interval-ms' is accidentally set to a negative 
> number by the user. Because the configuration item is read without checking 
> its range, 'FSNamesystem.getLeaseRecheckIntervalMs()' returns this value and 
> it is used as an argument to Thread.sleep(). A negative argument causes 
> Thread.sleep() to throw an IllegalArgumentException, which is caught by 
> 'catch (Throwable e)', and a warning message is printed. 
> This behavior repeats on every subsequent iteration, meaning a huge amount 
> of repetitive messages is printed to the log file in a short period of time, 
> quickly consuming disk space and affecting the operation of the system.
> As shown below, a 178 MB log file is generated in one minute.
>  
> {code:java}
> ll logs/
> total 174456
> drwxrwxr-x  2 hadoop hadoop      4096 1月   3 15:13 ./
> drwxr-xr-x 11 hadoop hadoop      4096 1月   3 15:13 ../
> -rw-rw-r--  1 hadoop hadoop     36342 1月   3 15:14 
> hadoop-hadoop-datanode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop      1243 1月   3 15:13 
> hadoop-hadoop-datanode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop 178545466 1月   3 15:14 
> hadoop-hadoop-namenode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop       692 1月   3 15:13 
> hadoop-hadoop-namenode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop     33201 1月   3 15:14 
> hadoop-hadoop-secondarynamenode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop      3764 1月   3 15:14 
> hadoop-hadoop-secondarynamenode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop         0 1月   3 15:13 SecurityAuth-hadoop.audit
>  
> tail -n 15 logs/hadoop-hadoop-namenode-ljq1.log 
> 2022-01-03 15:14:46,032 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2022-01-03 15:14:46,033 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2022-01-03 15:14:46,033 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> {code}
>  
> I see two potential solutions. 
> The first is to adjust the position of the try-catch statement in the 
> LeaseMonitor daemon by moving 'catch (Throwable e)' outside the loop body. 
> This mirrors the NameNodeResourceMonitor daemon, which ends its thread when 
> an unexpected exception is caught. 
> The second is to use Preconditions.checkArgument() to validate the 
> configuration item 'dfs.namenode.lease-recheck-interval-ms' when it is read, 
> so that a wrong configuration value cannot affect the subsequent operation 
> of the program.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16408) Ensure LeaseRecheckIntervalMs is greater than zero

2022-01-05 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-16408:
-
Summary: Ensure LeaseRecheckIntervalMs is greater than zero  (was: Negative 
LeaseRecheckIntervalMs will let LeaseMonitor loop forever and print huge amount 
of log)

> Ensure LeaseRecheckIntervalMs is greater than zero
> --
>
> Key: HDFS-16408
> URL: https://issues.apache.org/jira/browse/HDFS-16408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3, 3.3.1
>Reporter: Jingxuan Fu
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> There is a problem with the try-catch statement in the LeaseMonitor daemon 
> (in LeaseManager.java): when an unknown exception is caught, it simply prints 
> a warning message and continues with the next loop iteration. 
> An extreme case is when the configuration item 
> 'dfs.namenode.lease-recheck-interval-ms' is accidentally set to a negative 
> number by the user. Because the configuration item is read without checking 
> its range, 'FSNamesystem.getLeaseRecheckIntervalMs()' returns this value and 
> it is used as an argument to Thread.sleep(). A negative argument causes 
> Thread.sleep() to throw an IllegalArgumentException, which is caught by 
> 'catch (Throwable e)', and a warning message is printed. 
> This behavior repeats on every subsequent iteration, meaning a huge amount 
> of repetitive messages is printed to the log file in a short period of time, 
> quickly consuming disk space and affecting the operation of the system.
> As shown below, a 178 MB log file is generated in one minute.
>  
> {code:java}
> ll logs/
> total 174456
> drwxrwxr-x  2 hadoop hadoop      4096 1月   3 15:13 ./
> drwxr-xr-x 11 hadoop hadoop      4096 1月   3 15:13 ../
> -rw-rw-r--  1 hadoop hadoop     36342 1月   3 15:14 
> hadoop-hadoop-datanode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop      1243 1月   3 15:13 
> hadoop-hadoop-datanode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop 178545466 1月   3 15:14 
> hadoop-hadoop-namenode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop       692 1月   3 15:13 
> hadoop-hadoop-namenode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop     33201 1月   3 15:14 
> hadoop-hadoop-secondarynamenode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop      3764 1月   3 15:14 
> hadoop-hadoop-secondarynamenode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop         0 1月   3 15:13 SecurityAuth-hadoop.audit
>  
> tail -n 15 logs/hadoop-hadoop-namenode-ljq1.log 
> 2022-01-03 15:14:46,032 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2022-01-03 15:14:46,033 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2022-01-03 15:14:46,033 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> {code}
>  
> I see two potential solutions. 
> The first is to adjust the position of the try-catch statement in the 
> LeaseMonitor daemon by moving 'catch (Throwable e)' outside the loop body. 
> This mirrors the NameNodeResourceMonitor daemon, which ends its thread when 
> an unexpected exception is caught. 
> The second is to use Preconditions.checkArgument() to validate the 
> configuration item 'dfs.namenode.lease-recheck-interval-ms' when it is read, 
> so that a wrong configuration value cannot affect the subsequent operation 
> of the program.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16408) Ensure LeaseRecheckIntervalMs is greater than zero

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16408?focusedWorklogId=704064&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704064
 ]

ASF GitHub Bot logged work on HDFS-16408:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 15:43
Start Date: 05/Jan/22 15:43
Worklog Time Spent: 10m 
  Work Description: sodonnel merged pull request #3856:
URL: https://github.com/apache/hadoop/pull/3856


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 704064)
Time Spent: 3h 20m  (was: 3h 10m)

> Ensure LeaseRecheckIntervalMs is greater than zero
> --
>
> Key: HDFS-16408
> URL: https://issues.apache.org/jira/browse/HDFS-16408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3, 3.3.1
>Reporter: Jingxuan Fu
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> There is a problem with the try-catch statement in the LeaseMonitor daemon
> (in LeaseManager.java): when an unknown exception is caught, it simply
> prints a warning message and continues with the next loop iteration.
> An extreme case is when the configuration item
> 'dfs.namenode.lease-recheck-interval-ms' is accidentally set to a negative
> number by the user. Because the configuration item is read without any range
> check, 'FSNamesystem.getLeaseRecheckIntervalMs()' returns this value, which
> is then passed to Thread.sleep(). A negative argument causes Thread.sleep()
> to throw an IllegalArgumentException, which is caught by
> 'catch (Throwable e)', and a warning message is printed.
> This behavior repeats on every subsequent loop iteration, so a huge amount
> of repetitive messages is printed to the log file in a short period of time,
> quickly consuming disk space and affecting the operation of the system.
> As you can see, 178 MB of log files were generated in one minute.
>  
> {code:java}
> ll logs/
> total 174456
> drwxrwxr-x  2 hadoop hadoop      4096 Jan  3 15:13 ./
> drwxr-xr-x 11 hadoop hadoop      4096 Jan  3 15:13 ../
> -rw-rw-r--  1 hadoop hadoop     36342 Jan  3 15:14 
> hadoop-hadoop-datanode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop      1243 Jan  3 15:13 
> hadoop-hadoop-datanode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop 178545466 Jan  3 15:14 
> hadoop-hadoop-namenode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop       692 Jan  3 15:13 
> hadoop-hadoop-namenode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop     33201 Jan  3 15:14 
> hadoop-hadoop-secondarynamenode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop      3764 Jan  3 15:14 
> hadoop-hadoop-secondarynamenode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop         0 Jan  3 15:13 SecurityAuth-hadoop.audit
>  
> tail -n 15 logs/hadoop-hadoop-namenode-ljq1.log 
> 2022-01-03 15:14:46,032 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2022-01-03 15:14:46,033 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2022-01-03 15:14:46,033 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> {code}
>  
> I think there are two potential solutions.
> The first is to adjust the position of the try-catch statement in the
> LeaseMonitor daemon by moving 'catch (Throwable e)' outside the loop body,
> as the NameNodeResourceMonitor daemon does: the thread ends when an
> unexpected exception is caught.
> The second is to use Preconditions.checkArgument() to validate the range of
> the configuration item 'dfs.namenode.lease-recheck-interval-ms' when it is
> read.
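A minimal sketch of the first suggestion, contrasting the two loop shapes. The class and method names are illustrative, not the actual LeaseManager code; the sleep call is replaced by a direct throw (the same IllegalArgumentException the log shows) so the sketch runs instantly, and the buggy shape is bounded so it terminates.

```java
// Sketch: why 'catch (Throwable)' inside the loop floods the log,
// and how moving it outside the loop ends the daemon after one message.
public class MonitorLoopSketch {

    // Buggy shape: the catch sits inside the loop, so a persistent
    // IllegalArgumentException is "handled" and retried forever.
    // Bounded by maxIterations here purely so the sketch terminates.
    static int runCatchInsideLoop(long sleepMs, int maxIterations) {
        int warnings = 0;
        for (int i = 0; i < maxIterations; i++) {
            try {
                if (sleepMs < 0) {
                    throw new IllegalArgumentException("timeout value is negative");
                }
            } catch (Throwable t) {
                warnings++; // would log a WARN, then immediately loop again
            }
        }
        return warnings;
    }

    // Suggested shape (like NameNodeResourceMonitor): catch outside the
    // loop, so an unexpected throwable ends the thread after one message.
    static int runCatchOutsideLoop(long sleepMs, int maxIterations) {
        int warnings = 0;
        try {
            for (int i = 0; i < maxIterations; i++) {
                if (sleepMs < 0) {
                    throw new IllegalArgumentException("timeout value is negative");
                }
            }
        } catch (Throwable t) {
            warnings++; // a single WARN, then the daemon exits
        }
        return warnings;
    }

    public static void main(String[] args) {
        System.out.println(runCatchInsideLoop(-1, 1000));  // 1000
        System.out.println(runCatchOutsideLoop(-1, 1000)); // 1
    }
}
```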

[jira] [Work logged] (HDFS-11107) TestStartup#testStorageBlockContentsStaleAfterNNRestart fails intermittently

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-11107?focusedWorklogId=704040&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704040
 ]

ASF GitHub Bot logged work on HDFS-11107:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 15:08
Start Date: 05/Jan/22 15:08
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #3862:
URL: https://github.com/apache/hadoop/pull/3862#issuecomment-1005767451


   Before the fix, a single test run usually takes more than 60s.
   For example, here are the durations I measured:
   1. Time-consuming: 77254 ms
   2. Time-consuming: 77525 ms
   3. Time-consuming: 77083 ms
   4. Time-consuming: 76345 ms
   5. Time-consuming: 76616 ms
   
   I found two locations that are very time-consuming.
   1. Using a socket to get the local host
   `
   org.junit.runners.model.TestTimedOutException: test timed out after 6 
milliseconds
   
at java.net.InetAddress.getLocalHost(InetAddress.java:1487)
at sun.management.VMManagementImpl.getVmId(VMManagementImpl.java:140)
   `
   
   2. The retried connections to the host.
   `
   2022-01-05 22:24:57,012 [BP-787407874-10.242.143.133-1641392642386 
heartbeating to localhost/127.0.0.1:49461] INFO  ipc.Client 
(Client.java:handleConnectionFailure(1010)) - Retrying connect to server: 
localhost/127.0.0.1:49461. Already tried 0 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
   2022-01-05 22:24:58,016 [BP-787407874-10.242.143.133-1641392642386 
heartbeating to localhost/127.0.0.1:49461] INFO  ipc.Client 
(Client.java:handleConnectionFailure(1010)) - Retrying connect to server: 
localhost/127.0.0.1:49461. Already tried 1 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
   
   `
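For context, the RetryUpToMaximumCountWithFixedSleep policy in the log above bounds how long each heartbeat connection attempt can block: a rough estimate of the sleep time alone (connect timeouts excluded) is simply maxRetries × sleepTime. A trivial sketch of that arithmetic:

```java
// Back-of-envelope bound implied by the retry policy in the log above:
// RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 ms)
// sleeps a fixed 1000 ms before each of up to 10 retries.
public class RetryWaitEstimate {
    static long worstCaseSleepMs(int maxRetries, long sleepMs) {
        return (long) maxRetries * sleepMs;
    }

    public static void main(String[] args) {
        System.out.println(worstCaseSleepMs(10, 1000)); // 10000
    }
}
```

So a single unreachable NameNode address can add about 10 seconds of sleep per connection attempt, which matches the long test runtimes reported above.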
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 704040)
Time Spent: 20m  (was: 10m)

> TestStartup#testStorageBlockContentsStaleAfterNNRestart fails intermittently
> 
>
> Key: HDFS-11107
> URL: https://issues.apache.org/jira/browse/HDFS-11107
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Ajith S
>Priority: Minor
>  Labels: flaky-test, pull-request-available, unit-test
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> It's noticed that this failed in the last Jenkins run of HDFS-11085, but it's 
> not reproducible and passed with and without the patch.
> {noformat}
> Error Message
> expected:<0> but was:<2>
> Stacktrace
> java.lang.AssertionError: expected:<0> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testStorageBlockContentsStaleAfterNNRestart(TestStartup.java:726)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16371) Exclude slow disks when choosing volume

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16371?focusedWorklogId=704039&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704039
 ]

ASF GitHub Bot logged work on HDFS-16371:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 15:06
Start Date: 05/Jan/22 15:06
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on a change in pull request #3753:
URL: https://github.com/apache/hadoop/pull/3753#discussion_r778893651



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeDiskMetrics.java
##
@@ -127,6 +142,21 @@ public void run() {
 
 detectAndUpdateDiskOutliers(metadataOpStats, readIoStats,
 writeIoStats);
+
+// Sort the slow disks by latency and .

Review comment:
   @tomscut Happy New Year! Sorry for taking a long time.
   The comment in this line seems incomplete. Could you fix or remove it? Then 
+1 from me.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 704039)
Time Spent: 2h 40m  (was: 2.5h)

> Exclude slow disks when choosing volume
> ---
>
> Key: HDFS-16371
> URL: https://issues.apache.org/jira/browse/HDFS-16371
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently, the datanode can detect slow disks. See HDFS-11461.
> And after HDFS-16311, the slow disk information we collected is more accurate.
> So we can exclude these slow disks according to some rules when choosing 
> volume. This will prevents some slow disks from affecting the throughput of 
> the whole datanode.
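The exclusion rule described above could be sketched as a filter in front of the normal volume-choosing policy. This is purely illustrative: the `Volume` record and method names are hypothetical, not the actual FsVolumeSpi/volume-choosing-policy API, and keeping the full list when every disk is flagged slow is one possible safety rule.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch: drop volumes backed by detected slow disks before choosing a
// volume for a new block, falling back to all volumes if everything is slow.
public class SlowDiskFilterSketch {
    record Volume(String baseDir) {} // hypothetical stand-in for a DataNode volume

    static List<Volume> excludeSlowDisks(List<Volume> candidates, Set<String> slowDisks) {
        List<Volume> healthy = candidates.stream()
                .filter(v -> !slowDisks.contains(v.baseDir()))
                .collect(Collectors.toList());
        // If every disk is flagged slow, keep the full list rather than failing writes.
        return healthy.isEmpty() ? candidates : healthy;
    }

    public static void main(String[] args) {
        List<Volume> vols = List.of(
                new Volume("/data1"), new Volume("/data2"), new Volume("/data3"));
        System.out.println(excludeSlowDisks(vols, Set.of("/data2")).size());              // 2
        System.out.println(excludeSlowDisks(vols, Set.of("/data1", "/data2", "/data3")).size()); // 3
    }
}
```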



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11107) TestStartup#testStorageBlockContentsStaleAfterNNRestart fails intermittently

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-11107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-11107:
--
Labels: flaky-test pull-request-available unit-test  (was: flaky-test 
unit-test)

> TestStartup#testStorageBlockContentsStaleAfterNNRestart fails intermittently
> 
>
> Key: HDFS-11107
> URL: https://issues.apache.org/jira/browse/HDFS-11107
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Ajith S
>Priority: Minor
>  Labels: flaky-test, pull-request-available, unit-test
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It's noticed that this failed in the last Jenkins run of HDFS-11085, but it's 
> not reproducible and passed with and without the patch.
> {noformat}
> Error Message
> expected:<0> but was:<2>
> Stacktrace
> java.lang.AssertionError: expected:<0> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testStorageBlockContentsStaleAfterNNRestart(TestStartup.java:726)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-11107) TestStartup#testStorageBlockContentsStaleAfterNNRestart fails intermittently

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-11107?focusedWorklogId=704031&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-704031
 ]

ASF GitHub Bot logged work on HDFS-11107:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 14:59
Start Date: 05/Jan/22 14:59
Worklog Time Spent: 10m 
  Work Description: jianghuazhu opened a new pull request #3862:
URL: https://github.com/apache/hadoop/pull/3862


   
   ### Description of PR
   Fix TestStartup#testStorageBlockContentsStaleAfterNNRestart test failure.
   
   ### How was this patch tested?
   UT.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 704031)
Remaining Estimate: 0h
Time Spent: 10m

> TestStartup#testStorageBlockContentsStaleAfterNNRestart fails intermittently
> 
>
> Key: HDFS-11107
> URL: https://issues.apache.org/jira/browse/HDFS-11107
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Ajith S
>Priority: Minor
>  Labels: flaky-test, pull-request-available, unit-test
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It's noticed that this failed in the last Jenkins run of HDFS-11085, but it's 
> not reproducible and passed with and without the patch.
> {noformat}
> Error Message
> expected:<0> but was:<2>
> Stacktrace
> java.lang.AssertionError: expected:<0> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testStorageBlockContentsStaleAfterNNRestart(TestStartup.java:726)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11107) TestStartup#testStorageBlockContentsStaleAfterNNRestart fails intermittently

2022-01-05 Thread JiangHua Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-11107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17469371#comment-17469371
 ] 

JiangHua Zhu commented on HDFS-11107:
-

Hi [~ajithshetty], please allow me to submit a PR.
I will try to solve this problem.


> TestStartup#testStorageBlockContentsStaleAfterNNRestart fails intermittently
> 
>
> Key: HDFS-11107
> URL: https://issues.apache.org/jira/browse/HDFS-11107
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Ajith S
>Priority: Minor
>  Labels: flaky-test, unit-test
>
> It's noticed that this failed in the last Jenkins run of HDFS-11085, but it's 
> not reproducible and passed with and without the patch.
> {noformat}
> Error Message
> expected:<0> but was:<2>
> Stacktrace
> java.lang.AssertionError: expected:<0> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testStorageBlockContentsStaleAfterNNRestart(TestStartup.java:726)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16410) Insecure Xml parsing in OfflineEditsXmlLoader

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16410?focusedWorklogId=703992&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-703992
 ]

ASF GitHub Bot logged work on HDFS-16410:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 14:15
Start Date: 05/Jan/22 14:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3854:
URL: https://github.com/apache/hadoop/pull/3854#issuecomment-1005721773


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 232m 13s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 341m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3854/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3854 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux c3b9fe95a9dc 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d9de11db3a13e7b2c081e7d73e240c25a3c5d45e |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3854/2/testReport/ |
   | Max. process+thread count | 2987 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3854/2/console |
   | versions | 

[jira] [Work logged] (HDFS-16403) Improve FUSE IO performance by supporting FUSE parameter max_background

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16403?focusedWorklogId=703917&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-703917
 ]

ASF GitHub Bot logged work on HDFS-16403:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 13:03
Start Date: 05/Jan/22 13:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3842:
URL: https://github.com/apache/hadoop/pull/3842#issuecomment-1005667099


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  27m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shellcheck  |   0m  0s |  |  Shellcheck was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  25m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 22s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 43s |  |  
branch/hadoop-hdfs-project/hadoop-hdfs-native-client no spotbugs output file 
(spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  21m  6s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 24s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   3m 24s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   3m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed  |
   | +0 :ok: |  spotbugs  |   0m 23s |  |  
hadoop-hdfs-project/hadoop-hdfs-native-client has no data from spotbugs  |
   | +1 :green_heart: |  shadedclient  |  22m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 128m 52s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3842/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 241m 47s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed CTEST tests | test_test_libhdfs_threaded_hdfs_static |
   |   | test_test_libhdfs_zerocopy_hdfs_static |
   |   | test_test_native_mini_dfs |
   |   | remote_block_reader |
   |   | memcheck_remote_block_reader |
   |   | sasl_digest_md5 |
   |   | memcheck_sasl_digest_md5 |
   |   | retry_policy |
   |   | memcheck_retry_policy |
   |   | rpc_engine |
   |   | memcheck_rpc_engine |
   |   | bad_datanode |
   |   | memcheck_bad_datanode |
   |   | node_exclusion |
   |   | memcheck_node_exclusion |
   |   | configuration |
   |   | memcheck_configuration |
   |   | hdfs_configuration |
   |   | memcheck_hdfs_configuration |
   |   | hdfspp_errors |
   |   | memcheck_hdfspp_errors |
   |   | hdfs_builder_test |
   |   | memcheck_hdfs_builder_test |
   |   | logging_test |
   |   | memcheck_logging_test |
   |   | hdfs_ioservice |
   |   | memcheck_hdfs_ioservice |
   |   | user_lock |
   |   | memcheck_user_lock |
   |   | hdfs_config_connect_bugs |
   |   | memcheck_hdfs_config_connect_bugs |
   |   | test_libhdfs_threaded_hdfspp_test_shim_static |
   |   | test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static |
   |   | libhdfs_mini_stress_valgrind_hdfspp_test_static |
   |   | memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static |
   |   | test_libhdfs_mini_stress_hdfspp_test_shim_static |
   |   | test_hdfs_ext_hdfspp_test_shim_static |
   |   | x_platform_utils_test |
   |   | x_platform_syscall_test |
   |   | hdfs_tool_tests |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 

[jira] [Work logged] (HDFS-16403) Improve FUSE IO performance by supporting FUSE parameter max_background

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16403?focusedWorklogId=703906&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-703906
 ]

ASF GitHub Bot logged work on HDFS-16403:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 12:36
Start Date: 05/Jan/22 12:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3842:
URL: https://github.com/apache/hadoop/pull/3842#issuecomment-1005649317


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  30m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shellcheck  |   0m  0s |  |  Shellcheck was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  25m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 50s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 32s |  |  
branch/hadoop-hdfs-project/hadoop-hdfs-native-client no spotbugs output file 
(spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  22m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m  0s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   4m  0s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   4m  0s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   4m  0s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed  |
   | +0 :ok: |  spotbugs  |   0m 18s |  |  
hadoop-hdfs-project/hadoop-hdfs-native-client has no data from spotbugs  |
   | +1 :green_heart: |  shadedclient  |  23m 21s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  67m 50s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 185m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3842/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3842 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell shellcheck shelldocs cc golang spotbugs 
checkstyle |
   | uname | Linux 6054726b5b4d 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ae8786d87220bc2ef8eb1725fdab63d19f20c6c0 |
   | Default Java | Red Hat, Inc.-1.8.0_312-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3842/3/testReport/ |
   | Max. process+thread count | 576 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3842/3/console |
   | versions | git=2.27.0 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 703906)
Time Spent: 1h 40m  (was: 1.5h)

> Improve FUSE IO performance by supporting FUSE parameter 

[jira] [Work logged] (HDFS-16410) Insecure Xml parsing in OfflineEditsXmlLoader

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16410?focusedWorklogId=703891&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-703891
 ]

ASF GitHub Bot logged work on HDFS-16410:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 11:46
Start Date: 05/Jan/22 11:46
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3854:
URL: https://github.com/apache/hadoop/pull/3854#issuecomment-1005616972


   approved, once yetus is happy I will merge it in


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 703891)
Time Spent: 40m  (was: 0.5h)

> Insecure Xml parsing in OfflineEditsXmlLoader 
> --
>
> Key: HDFS-16410
> URL: https://issues.apache.org/jira/browse/HDFS-16410
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.1
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available, security
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Insecure Xml parsing in OfflineEditsXmlLoader 
> [https://github.com/apache/hadoop/blob/03cfc852791c14fad39db4e5b14104a276c08e59/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsXmlLoader.java#L88]
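For reference, the conventional hardening for this class of XML issue is to disable DOCTYPE declarations and external entities on the parser factory. The sketch below is illustrative only, not the actual HDFS-16410 patch; the feature URIs are the ones recognized by the JDK's built-in (Xerces-derived) SAX parser.

```java
import javax.xml.XMLConstants;
import javax.xml.parsers.SAXParserFactory;

public class SecureSaxFactory {
    // Sketch of conventional XXE hardening for a SAX parser. The feature URIs
    // are those recognized by the JDK's built-in (Xerces-derived) parser; the
    // actual HDFS-16410 patch may differ.
    public static SAXParserFactory newSecureFactory() throws Exception {
        SAXParserFactory factory = SAXParserFactory.newInstance();
        // Reject DOCTYPE declarations outright -- the strongest XXE defence.
        factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Belt and braces: disable external general and parameter entities.
        factory.setFeature("http://xml.org/sax/features/external-general-entities", false);
        factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
        return factory;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(newSecureFactory()
            .getFeature(XMLConstants.FEATURE_SECURE_PROCESSING)); // true
    }
}
```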



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16408) Negative LeaseRecheckIntervalMs will let LeaseMonitor loop forever and print huge amount of log

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16408?focusedWorklogId=703890&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-703890
 ]

ASF GitHub Bot logged work on HDFS-16408:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 11:46
Start Date: 05/Jan/22 11:46
Worklog Time Spent: 10m 
  Work Description: liever18 commented on pull request #3856:
URL: https://github.com/apache/hadoop/pull/3856#issuecomment-1005616844


   > This is fine with no unit tests. I will commit this later as it looks good 
now.
   
   thanks for your review. @sodonnel 




Issue Time Tracking
---

Worklog Id: (was: 703890)
Time Spent: 3h 10m  (was: 3h)

> Negative LeaseRecheckIntervalMs will let LeaseMonitor loop forever and print 
> huge amount of log
> ---
>
> Key: HDFS-16408
> URL: https://issues.apache.org/jira/browse/HDFS-16408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3, 3.3.1
>Reporter: Jingxuan Fu
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> There is a problem with the try/catch statement in the LeaseMonitor daemon 
> (in LeaseManager.java): when an unknown exception is caught, it simply prints 
> a warning message and continues with the next loop iteration.
> An extreme case occurs when the configuration item 
> 'dfs.namenode.lease-recheck-interval-ms' is accidentally set to a negative 
> number by the user. Because the value is read without range checking, 
> 'FSNamesystem.getLeaseRecheckIntervalMs()' returns it unchanged, and it is 
> then passed to Thread.sleep(). A negative argument causes Thread.sleep() to 
> throw an IllegalArgumentException, which is caught by 'catch (Throwable e)', 
> and a warning message is printed. 
> This behavior repeats on every subsequent loop iteration, which means a huge 
> amount of repetitive messages is written to the log file in a short period of 
> time, quickly consuming disk space and affecting the operation of the system.
> As shown below, a 178 MB NameNode log file is generated in one minute.
>  
> {code:java}
> ll logs/
> total 174456
> drwxrwxr-x  2 hadoop hadoop      4096 1月   3 15:13 ./
> drwxr-xr-x 11 hadoop hadoop      4096 1月   3 15:13 ../
> -rw-rw-r--  1 hadoop hadoop     36342 1月   3 15:14 
> hadoop-hadoop-datanode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop      1243 1月   3 15:13 
> hadoop-hadoop-datanode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop 178545466 1月   3 15:14 
> hadoop-hadoop-namenode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop       692 1月   3 15:13 
> hadoop-hadoop-namenode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop     33201 1月   3 15:14 
> hadoop-hadoop-secondarynamenode-ljq1.log
> -rw-rw-r--  1 hadoop hadoop      3764 1月   3 15:14 
> hadoop-hadoop-secondarynamenode-ljq1.out
> -rw-rw-r--  1 hadoop hadoop         0 1月   3 15:13 SecurityAuth-hadoop.audit
>  
> tail -n 15 logs/hadoop-hadoop-namenode-ljq1.log 
> 2022-01-03 15:14:46,032 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2022-01-03 15:14:46,033 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2022-01-03 15:14:46,033 WARN 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Unexpected throwable: 
> java.lang.IllegalArgumentException: timeout value is negative
>         at java.base/java.lang.Thread.sleep(Native Method)
>         at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:534)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> {code}
>  
> I think there are two potential solutions. 
> The first is to adjust the position of the try catch statement in the 
> LeaseMonitor daemon by moving 'catch(Throwable e)' to the outside 
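A minimal sketch of the range-check idea: reject a non-positive interval before it can ever reach Thread.sleep() inside the monitor loop. The helper name is hypothetical, and the 2000 ms fallback is an assumption about the default of dfs.namenode.lease-recheck-interval-ms (verify against hdfs-default.xml).

```java
public class LeaseIntervalGuard {
    // Assumed fallback; 2000 ms is believed to be the default of
    // dfs.namenode.lease-recheck-interval-ms, but verify against hdfs-default.xml.
    static final long DEFAULT_RECHECK_MS = 2000L;

    // Hypothetical helper: reject non-positive intervals so Thread.sleep()
    // can never receive a negative argument inside the monitor loop.
    static long sanitizeRecheckIntervalMs(long configuredMs) {
        if (configuredMs <= 0) {
            System.err.println("Invalid lease recheck interval " + configuredMs
                + " ms; falling back to " + DEFAULT_RECHECK_MS + " ms");
            return DEFAULT_RECHECK_MS;
        }
        return configuredMs;
    }

    public static void main(String[] args) {
        System.out.println(sanitizeRecheckIntervalMs(-1L));  // 2000
        System.out.println(sanitizeRecheckIntervalMs(500L)); // 500
    }
}
```

Validating once at configuration-read time also keeps the warning from being re-logged on every loop iteration, which is the root of the log flood.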

[jira] [Work logged] (HDFS-16408) Negative LeaseRecheckIntervalMs will let LeaseMonitor loop forever and print huge amount of log

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16408?focusedWorklogId=703888&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-703888
 ]

ASF GitHub Bot logged work on HDFS-16408:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 11:44
Start Date: 05/Jan/22 11:44
Worklog Time Spent: 10m 
  Work Description: sodonnel commented on pull request #3856:
URL: https://github.com/apache/hadoop/pull/3856#issuecomment-1005615676


   This is fine with no unit tests. I will commit this later as it looks good 
now.




Issue Time Tracking
---

Worklog Id: (was: 703888)
Time Spent: 3h  (was: 2h 50m)

> Negative LeaseRecheckIntervalMs will let LeaseMonitor loop forever and print 
> huge amount of log
> ---
>
> Key: HDFS-16408
> URL: https://issues.apache.org/jira/browse/HDFS-16408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3, 3.3.1
>Reporter: Jingxuan Fu
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Time Spent: 3h
>  Remaining Estimate: 0h
>

[jira] [Work logged] (HDFS-16408) Negative LeaseRecheckIntervalMs will let LeaseMonitor loop forever and print huge amount of log

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16408?focusedWorklogId=703880&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-703880
 ]

ASF GitHub Bot logged work on HDFS-16408:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 11:19
Start Date: 05/Jan/22 11:19
Worklog Time Spent: 10m 
  Work Description: liever18 commented on pull request #3856:
URL: https://github.com/apache/hadoop/pull/3856#issuecomment-1005598690


   @sodonnel 
   In my opinion, there is no need to add additional unit tests to verify the 
patch. 
   What is your suggestion?




Issue Time Tracking
---

Worklog Id: (was: 703880)
Time Spent: 2h 50m  (was: 2h 40m)

> Negative LeaseRecheckIntervalMs will let LeaseMonitor loop forever and print 
> huge amount of log
> ---
>
> Key: HDFS-16408
> URL: https://issues.apache.org/jira/browse/HDFS-16408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3, 3.3.1
>Reporter: Jingxuan Fu
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 1h
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>

[jira] [Work logged] (HDFS-16408) Negative LeaseRecheckIntervalMs will let LeaseMonitor loop forever and print huge amount of log

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16408?focusedWorklogId=703877&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-703877
 ]

ASF GitHub Bot logged work on HDFS-16408:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 11:15
Start Date: 05/Jan/22 11:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3856:
URL: https://github.com/apache/hadoop/pull/3856#issuecomment-1005596533


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  12m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 226m 40s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 335m 14s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3856/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3856 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux f900077faa71 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4cdd250871097f9990f6754d5b8150bdb77e9a58 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3856/1/testReport/ |
   | Max. process+thread count | 3226 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3856/1/console |
   | versions | 

[jira] [Work started] (HDFS-16316) Improve DirectoryScanner: add regular file check related block

2022-01-05 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16316 started by JiangHua Zhu.
---
> Improve DirectoryScanner: add regular file check related block
> --
>
> Key: HDFS-16316
> URL: https://issues.apache.org/jira/browse/HDFS-16316
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.9.2
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Something unusual happened in the online environment.
> The DataNode is configured with 11 disks (${dfs.datanode.data.dir}). The used 
> capacity calculated for 10 of the disks is normal, but the value calculated 
> for the remaining disk is much larger, which is very strange.
> This is the live view on the NameNode:
>  !screenshot-1.png! 
> This is the live view on the DataNode:
>  !screenshot-2.png! 
> We can look at the view on Linux:
>  !screenshot-3.png! 
> There is a big gap here regarding '/mnt/dfs/11/data'. This situation should 
> be prohibited from happening.
> I found that there are some abnormal block files.
> There are wrong blk_.meta files in some subdir directories, causing the 
> space calculation to be abnormal.
> Here are some abnormal block files:
>  !screenshot-4.png! 
> Such files should not be treated as normal blocks. They should be actively 
> identified and filtered out, which is good for cluster stability.






[jira] [Work logged] (HDFS-16316) Improve DirectoryScanner: add regular file check related block

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16316?focusedWorklogId=703854&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-703854
 ]

ASF GitHub Bot logged work on HDFS-16316:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 10:22
Start Date: 05/Jan/22 10:22
Worklog Time Spent: 10m 
  Work Description: jianghuazhu opened a new pull request #3861:
URL: https://github.com/apache/hadoop/pull/3861


   
   
   ### Description of PR
   When blk_ and blk_.meta files are not regular files, they have adverse 
effects on the cluster, such as errors in the space calculation and possible 
failures when reading data.
   Blocks of this type should not be used as normal block files.
   Details:
   HDFS-16316
   
   
   ### How was this patch tested?
   By verifying whether a file is a real regular file.
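A minimal sketch of the regular-file check the PR describes, using java.nio; the helper name is hypothetical and this is not the actual patch:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class BlockFileCheck {
    // Hypothetical helper (not the actual patch): accept a path as a block or
    // meta file only when it is an existing regular file, so directories and
    // special files cannot distort the used-space calculation.
    static boolean isUsableBlockFile(File f) {
        return f != null && Files.isRegularFile(f.toPath());
    }

    public static void main(String[] args) throws IOException {
        Path regular = Files.createTempFile("blk_", ".meta");
        Path dir = Files.createTempDirectory("subdir");
        System.out.println(isUsableBlockFile(regular.toFile())); // true
        System.out.println(isUsableBlockFile(dir.toFile()));     // false
        Files.delete(regular);
        Files.delete(dir);
    }
}
```

Files.isRegularFile follows symlinks by default and returns false for missing paths, directories, and special files, which matches the filtering behavior the issue asks for.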
   
   
   




Issue Time Tracking
---

Worklog Id: (was: 703854)
Remaining Estimate: 0h
Time Spent: 10m

> Improve DirectoryScanner: add regular file check related block
> --
>
> Key: HDFS-16316
> URL: https://issues.apache.org/jira/browse/HDFS-16316
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.9.2
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Updated] (HDFS-16316) Improve DirectoryScanner: add regular file check related block

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16316:
--
Labels: pull-request-available  (was: )

> Improve DirectoryScanner: add regular file check related block
> --
>
> Key: HDFS-16316
> URL: https://issues.apache.org/jira/browse/HDFS-16316
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.9.2
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Updated] (HDFS-16316) Improve DirectoryScanner: add regular file check related block

2022-01-05 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JiangHua Zhu updated HDFS-16316:

Description: 
Something unusual happened in the online environment.
The DataNode is configured with 11 disks (${dfs.datanode.data.dir}). It is 
normal for 10 disks to calculate the used capacity, and the calculated value 
for the other 1 disk is much larger, which is very strange.
This is about the live view on the NameNode:
 !screenshot-1.png! 

This is about the live view on the DataNode:
 !screenshot-2.png! 

We can look at the view on linux:
 !screenshot-3.png! 
There is a big gap here, regarding'/mnt/dfs/11/data'. This situation should be 
prohibited from happening.

I found that there are some abnormal block files.
There are wrong blk_.meta in some subdir directories, causing abnormal 
computing space.
Here are some abnormal block files:
 !screenshot-4.png! 

Such files should not be used as normal blocks. They should be actively 
identified and filtered, which is good for cluster stability.

  was:
Something unusual happened in the online environment.
The DataNode is configured with 11 disks (${dfs.datanode.data.dir}). It is 
normal for 10 disks to calculate the used capacity, and the calculated value 
for the other 1 disk is much larger, which is very strange.
This is about the live view on the NameNode:
 !screenshot-1.png! 

This is about the live view on the DataNode:
 !screenshot-2.png! 

We can look at the view on linux:
 !screenshot-3.png! 
There is a big gap here, regarding'/mnt/dfs/11/data'. This situation should be 
prohibited from happening.

I found that there are some abnormal block files.
There are wrong blk_.meta in some subdir directories, causing abnormal 
computing space.
Here are some abnormal block files:
 !screenshot-4.png! 


> Improve DirectoryScanner: add regular file check related block
> --
>
> Key: HDFS-16316
> URL: https://issues.apache.org/jira/browse/HDFS-16316
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.9.2
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png
>
>






[jira] [Work logged] (HDFS-16403) Improve FUSE IO performance by supporting FUSE parameter max_background

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16403?focusedWorklogId=703836&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-703836
 ]

ASF GitHub Bot logged work on HDFS-16403:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 09:30
Start Date: 05/Jan/22 09:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3842:
URL: https://github.com/apache/hadoop/pull/3842#issuecomment-1005519756


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  43m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shellcheck  |   0m  0s |  |  Shellcheck was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 13s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 22s |  |  
branch/hadoop-hdfs-project/hadoop-hdfs-native-client no spotbugs output file 
(spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  21m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m  5s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   3m  5s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   3m  5s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 10s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  |  the patch passed  |
   | +0 :ok: |  spotbugs  |   0m 13s |  |  
hadoop-hdfs-project/hadoop-hdfs-native-client has no data from spotbugs  |
   | +1 :green_heart: |  shadedclient  |  21m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  82m 55s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 217m 18s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3842/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3842 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell shellcheck shelldocs cc golang spotbugs 
checkstyle |
   | uname | Linux 7eeaf232f40d 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ae8786d87220bc2ef8eb1725fdab63d19f20c6c0 |
   | Default Java | Red Hat, Inc.-1.8.0_312-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3842/3/testReport/ |
   | Max. process+thread count | 630 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3842/3/console |
   | versions | git=2.9.5 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 703836)
Time Spent: 1.5h  (was: 1h 20m)

> Improve FUSE IO performance by supporting FUSE parameter 

[jira] [Work logged] (HDFS-16403) Improve FUSE IO performance by supporting FUSE parameter max_background

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16403?focusedWorklogId=703821&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-703821
 ]

ASF GitHub Bot logged work on HDFS-16403:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 09:01
Start Date: 05/Jan/22 09:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3842:
URL: https://github.com/apache/hadoop/pull/3842#issuecomment-1005498412


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  2s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shellcheck  |   0m  0s |  |  Shellcheck was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 10s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 29s |  |  
branch/hadoop-hdfs-project/hadoop-hdfs-native-client no spotbugs output file 
(spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  19m 29s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 51s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   2m 51s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 51s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 51s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 12s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed  |
   | +0 :ok: |  spotbugs  |   0m 16s |  |  
hadoop-hdfs-project/hadoop-hdfs-native-client has no data from spotbugs  |
   | +1 :green_heart: |  shadedclient  |  18m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 104m 43s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 188m 17s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3842/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3842 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell shellcheck shelldocs cc golang spotbugs 
checkstyle |
   | uname | Linux 16cf458787de 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ae8786d87220bc2ef8eb1725fdab63d19f20c6c0 |
   | Default Java | Red Hat, Inc.-1.8.0_312-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3842/4/testReport/ |
   | Max. process+thread count | 548 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3842/4/console |
   | versions | git=2.9.5 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 703821)
Time Spent: 1h 20m  (was: 1h 10m)

> Improve FUSE IO performance by supporting FUSE 

[jira] [Updated] (HDFS-16412) Add metrics to support obtaining file size distribution

2022-01-05 Thread Xiangyi Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiangyi Zhu updated HDFS-16412:
---
Description: 
Use a RangeMap ("fileSizeRange") to store counters for different intervals. The 
RangeMap key is a specific interval, and the value is the counter corresponding 
to that interval.

*Counter update:*
When a file's size changes or a file is deleted, the file's size is obtained 
and the counter for the corresponding interval is updated.

*Interval division:*
By default, the following intervals are used at startup; they can also be 
initialized through the configuration file.
0MB
0-16MB
16-32MB
32-64MB
64-128MB
128-256MB
256-512MB
>512MB

  was:
Use a RangeMap ("fileSizeRange") to store counters for different intervals. The 
RangeMap key is a specific interval, and the value is the counter corresponding 
to that interval.
**

*Counter update:*
When the file size changes or the file is deleted, the file size is obtained, 
and the counter in the corresponding interval is called to update the counter.
**

*Interval division:*
The default is to initialize the startup according to the following interval, 
or it can be initialized through the configuration file.
0MB
0-16MB
16-32MB
32-64MB
64-128MB
128-256MB
256-512MB
>512MB


> Add metrics to support obtaining file size distribution
> ---
>
> Key: HDFS-16412
> URL: https://issues.apache.org/jira/browse/HDFS-16412
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.4.0
>Reporter: Xiangyi Zhu
>Assignee: Xiangyi Zhu
>Priority: Minor
>
> Use a RangeMap ("fileSizeRange") to store counters for different intervals. 
> The RangeMap key is a specific interval, and the value is the counter 
> corresponding to that interval.
> *Counter update:*
> When a file's size changes or a file is deleted, the file's size is obtained 
> and the counter for the corresponding interval is updated.
> *Interval division:*
> By default, the following intervals are used at startup; they can also be 
> initialized through the configuration file.
> 0MB
> 0-16MB
> 16-32MB
> 32-64MB
> 64-128MB
> 128-256MB
> 256-512MB
> >512MB



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16412) Add metrics to support obtaining file size distribution

2022-01-05 Thread Xiangyi Zhu (Jira)
Xiangyi Zhu created HDFS-16412:
--

 Summary: Add metrics to support obtaining file size distribution
 Key: HDFS-16412
 URL: https://issues.apache.org/jira/browse/HDFS-16412
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.4.0
Reporter: Xiangyi Zhu
Assignee: Xiangyi Zhu


Use a RangeMap ("fileSizeRange") to store counters for different intervals. The 
RangeMap key is a specific interval, and the value is the counter corresponding 
to that interval.

*Counter update:*
When a file's size changes or a file is deleted, the file's size is obtained 
and the counter for the corresponding interval is updated.

*Interval division:*
By default, the following intervals are used at startup; they can also be 
initialized through the configuration file.
0MB
0-16MB
16-32MB
32-64MB
64-128MB
128-256MB
256-512MB
>512MB
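The interval-counter scheme described above can be sketched with only the JDK standard library. This is a minimal sketch under assumptions: the class and method names are hypothetical, and the actual patch may well use Guava's RangeMap as the description says; a TreeMap keyed by each interval's lower bound gives the same floor-lookup behavior. Boundary handling (whether 16MB falls in the 0-16MB or 16-32MB bucket) is a guess here.

```java
import java.util.TreeMap;
import java.util.concurrent.atomic.LongAdder;

/**
 * Hypothetical sketch of a file-size distribution metric: one counter per
 * size interval, selected by floor lookup on the interval's lower bound.
 */
public class FileSizeDistribution {
    private static final long MB = 1024L * 1024L;
    // Lower bound of each interval -> counter; mirrors the intervals above
    // (0MB, 0-16MB, 16-32MB, ..., >512MB).
    private final TreeMap<Long, LongAdder> counters = new TreeMap<>();

    public FileSizeDistribution() {
        for (long bound : new long[] {0, 1, 16 * MB, 32 * MB, 64 * MB,
                                      128 * MB, 256 * MB, 512 * MB}) {
            counters.put(bound, new LongAdder());
        }
    }

    // Counter update: called when a file is created, resized, or deleted.
    public void onFileAdded(long size)   { bucket(size).increment(); }
    public void onFileDeleted(long size) { bucket(size).decrement(); }

    public long countFor(long size) { return bucket(size).sum(); }

    // Greatest lower bound <= size selects the interval's counter.
    private LongAdder bucket(long size) {
        return counters.floorEntry(size).getValue();
    }

    public static void main(String[] args) {
        FileSizeDistribution dist = new FileSizeDistribution();
        dist.onFileAdded(5 * MB);    // lands in the 0-16MB bucket
        dist.onFileAdded(100 * MB);  // lands in the 64-128MB bucket
        System.out.println(dist.countFor(5 * MB));   // 1
        System.out.println(dist.countFor(100 * MB)); // 1
    }
}
```

LongAdder keeps the update path cheap under concurrent namespace operations, which matters if the counters are bumped on every file create/delete in the NameNode.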



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16410) Insecure Xml parsing in OfflineEditsXmlLoader

2022-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16410?focusedWorklogId=703806&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-703806
 ]

ASF GitHub Bot logged work on HDFS-16410:
-

Author: ASF GitHub Bot
Created on: 05/Jan/22 08:32
Start Date: 05/Jan/22 08:32
Worklog Time Spent: 10m 
  Work Description: ashutoshcipher commented on pull request #3854:
URL: https://github.com/apache/hadoop/pull/3854#issuecomment-1005480432


   > Makes sense. Are there other places in the code with the same issue? If so, 
it would probably be best to have something in hadoop-common so it could be 
done consistently everywhere.
   
   Thanks for the review @steveloughran. I checked the complete codebase; this 
issue doesn't occur anywhere else. I think we can keep the changes in 
OfflineEditsXmlLoader.java.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 703806)
Time Spent: 0.5h  (was: 20m)

> Insecure Xml parsing in OfflineEditsXmlLoader 
> --
>
> Key: HDFS-16410
> URL: https://issues.apache.org/jira/browse/HDFS-16410
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.1
>Reporter: Ashutosh Gupta
>Assignee: Ashutosh Gupta
>Priority: Minor
>  Labels: pull-request-available, security
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Insecure Xml parsing in OfflineEditsXmlLoader 
> [https://github.com/apache/hadoop/blob/03cfc852791c14fad39db4e5b14104a276c08e59/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsXmlLoader.java#L88]
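The standard mitigation for this class of issue is to harden the XML parser so it rejects DOCTYPE declarations and external entities before parsing untrusted input. The sketch below is illustrative only; the actual fix in the linked change may differ, and the `SafeXmlParsing`/`parseSafely` names are made up here.

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.DefaultHandler;

/**
 * Illustrative XXE hardening for a SAX-based loader. Returns true if the
 * document parses under the hardened configuration, false otherwise.
 */
public class SafeXmlParsing {
    public static boolean parseSafely(String xml) {
        try {
            SAXParserFactory factory = SAXParserFactory.newInstance();
            // Reject any DOCTYPE declaration (the usual XXE vector).
            factory.setFeature(
                "http://apache.org/xml/features/disallow-doctype-decl", true);
            // Belt and braces: also disable external entity resolution.
            factory.setFeature(
                "http://xml.org/sax/features/external-general-entities", false);
            factory.setFeature(
                "http://xml.org/sax/features/external-parameter-entities", false);
            XMLReader reader = factory.newSAXParser().getXMLReader();
            reader.setContentHandler(new DefaultHandler());
            reader.parse(new InputSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false;  // DOCTYPE/entity use is rejected, not resolved
        }
    }

    public static void main(String[] args) {
        System.out.println(parseSafely("<a>ok</a>"));  // true: plain XML
        System.out.println(parseSafely(
            "<!DOCTYPE a [<!ENTITY x SYSTEM \"file:///etc/passwd\">]><a>&x;</a>"));
        // false: the DOCTYPE is rejected before any entity is resolved
    }
}
```

With `disallow-doctype-decl` set, a malicious edits file cannot make the offline viewer read local files or reach out over the network via entity expansion.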



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org