[jira] [Work logged] (HDFS-16175) Improve the configurable value of Server #PURGE_INTERVAL_NANOS

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16175?focusedWorklogId=638977&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638977
 ]

ASF GitHub Bot logged work on HDFS-16175:
-

Author: ASF GitHub Bot
Created on: 18/Aug/21 05:47
Start Date: 18/Aug/21 05:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3307:
URL: https://github.com/apache/hadoop/pull/3307#issuecomment-900829282


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  22m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  20m 59s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 56s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3307/4/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 1 new + 282 
unchanged - 0 fixed = 283 total (was 282)  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 45s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 17s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m  8s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 193m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3307/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3307 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux eed726a7a1af 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / dc4ad1a97276f0cf62a4e863d951a5be501ae0ec |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3307/4/testReport/ |
   | Max. 

[jira] [Work logged] (HDFS-16173) Improve CopyCommands#Put#executor queue configurability

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16173?focusedWorklogId=638973&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638973
 ]

ASF GitHub Bot logged work on HDFS-16173:
-

Author: ASF GitHub Bot
Created on: 18/Aug/21 03:36
Start Date: 18/Aug/21 03:36
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on a change in pull request #3302:
URL: https://github.com/apache/hadoop/pull/3302#discussion_r690875212



##
File path: 
hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
##
@@ -542,6 +542,8 @@ Options:
 * `-l` : Allow DataNode to lazily persist the file to disk, Forces a 
replication
  factor of 1. This flag will result in reduced durability. Use with care.
 * `-d` : Skip creation of temporary file with the suffix `._COPYING_`.
+* `-q ` : Number of threadPool queue size to be used, 

Review comment:
   Thanks to @jojochuang for the comment; I will make some changes later.

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
##
@@ -256,6 +258,8 @@ protected void processOptions(LinkedList<String> args)
 "  -p : Preserves timestamps, ownership and the mode.\n" +
 "  -f : Overwrites the destination if it already exists.\n" +
 "  -t  : Number of threads to be used, default is 1.\n" +
+"  -q  : Number of threadPool queue size to be used, 
" +

Review comment:
   Thanks to @jojochuang for the comment; I will make some changes later.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 638973)
Time Spent: 1h 40m  (was: 1.5h)

> Improve CopyCommands#Put#executor queue configurability
> ---
>
> Key: HDFS-16173
> URL: https://issues.apache.org/jira/browse/HDFS-16173
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In CopyCommands#Put, the executor queue size is fixed at 1024.
> We should make it configurable, because usage environments differ.
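The change the issue proposes can be sketched outside Hadoop as a bounded executor whose queue capacity comes from a parameter instead of the hardcoded 1024. The class and method names below are hypothetical illustrations, not the actual CopyCommands code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PutQueueSketch {
    // Mirrors the hardcoded value in CopyCommands#Put that the issue
    // proposes to make configurable.
    static final int DEFAULT_QUEUE_SIZE = 1024;

    // Build an executor whose work-queue capacity is a parameter rather
    // than the fixed 1024; CallerRunsPolicy throttles the submitter when
    // the queue fills, as a bounded copy pipeline typically wants.
    static ThreadPoolExecutor newExecutor(int threads, int queueSize) {
        return new ThreadPoolExecutor(threads, threads, 1L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(queueSize),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public static void main(String[] args) {
        ThreadPoolExecutor e = newExecutor(4, 2048);
        System.out.println(e.getQueue().remainingCapacity()); // prints 2048
        e.shutdown();
    }
}
```

With a setup like this, the queue capacity can be taken from a configuration value (or a command-line flag such as the `-q` option discussed in the review) instead of a compile-time constant.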



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16173) Improve CopyCommands#Put#executor queue configurability

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16173?focusedWorklogId=638971&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638971
 ]

ASF GitHub Bot logged work on HDFS-16173:
-

Author: ASF GitHub Bot
Created on: 18/Aug/21 03:27
Start Date: 18/Aug/21 03:27
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3302:
URL: https://github.com/apache/hadoop/pull/3302#discussion_r690872102



##
File path: 
hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
##
@@ -542,6 +542,8 @@ Options:
 * `-l` : Allow DataNode to lazily persist the file to disk, Forces a 
replication
  factor of 1. This flag will result in reduced durability. Use with care.
 * `-d` : Skip creation of temporary file with the suffix `._COPYING_`.
+* `-q ` : Number of threadPool queue size to be used, 

Review comment:
   here, too.






Issue Time Tracking
---

Worklog Id: (was: 638971)
Time Spent: 1.5h  (was: 1h 20m)

> Improve CopyCommands#Put#executor queue configurability
> ---
>
> Key: HDFS-16173
> URL: https://issues.apache.org/jira/browse/HDFS-16173
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In CopyCommands#Put, the executor queue size is fixed at 1024.
> We should make it configurable, because usage environments differ.






[jira] [Work logged] (HDFS-16173) Improve CopyCommands#Put#executor queue configurability

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16173?focusedWorklogId=638970&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638970
 ]

ASF GitHub Bot logged work on HDFS-16173:
-

Author: ASF GitHub Bot
Created on: 18/Aug/21 03:26
Start Date: 18/Aug/21 03:26
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3302:
URL: https://github.com/apache/hadoop/pull/3302#discussion_r690871929



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
##
@@ -256,6 +258,8 @@ protected void processOptions(LinkedList<String> args)
 "  -p : Preserves timestamps, ownership and the mode.\n" +
 "  -f : Overwrites the destination if it already exists.\n" +
 "  -t  : Number of threads to be used, default is 1.\n" +
+"  -q  : Number of threadPool queue size to be used, 
" +

Review comment:
   Just a nit: remove "Number of"
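A minimal stand-in for the option handling quoted above (a hypothetical class, not the real CopyCommands.Put#processOptions, which uses Hadoop's option-parsing machinery): it consumes `-t` and `-q` with the defaults the usage text describes.

```java
import java.util.Arrays;
import java.util.LinkedList;

public class PutOptionsSketch {
    // Defaults matching the usage text: 1 thread, 1024 queue slots.
    int threads = 1;
    int queueSize = 1024;

    // Consume leading "-t <n>" / "-q <n>" pairs; remaining entries are paths.
    void processOptions(LinkedList<String> args) {
        while (!args.isEmpty() && args.peek().startsWith("-")) {
            String opt = args.poll();
            if (opt.equals("-t")) {
                threads = Integer.parseInt(args.poll());
            } else if (opt.equals("-q")) {
                queueSize = Integer.parseInt(args.poll());
            }
        }
    }

    public static void main(String[] a) {
        PutOptionsSketch p = new PutOptionsSketch();
        p.processOptions(new LinkedList<>(
                Arrays.asList("-t", "4", "-q", "2048", "src", "dst")));
        System.out.println(p.threads + " " + p.queueSize); // prints 4 2048
    }
}
```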






Issue Time Tracking
---

Worklog Id: (was: 638970)
Time Spent: 1h 20m  (was: 1h 10m)

> Improve CopyCommands#Put#executor queue configurability
> ---
>
> Key: HDFS-16173
> URL: https://issues.apache.org/jira/browse/HDFS-16173
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> In CopyCommands#Put, the executor queue size is fixed at 1024.
> We should make it configurable, because usage environments differ.






[jira] [Work logged] (HDFS-16175) Improve the configurable value of Server #PURGE_INTERVAL_NANOS

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16175?focusedWorklogId=638951&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638951
 ]

ASF GitHub Bot logged work on HDFS-16175:
-

Author: ASF GitHub Bot
Created on: 18/Aug/21 01:54
Start Date: 18/Aug/21 01:54
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #3307:
URL: https://github.com/apache/hadoop/pull/3307#issuecomment-900751597


   Thanks @jojochuang  for the comment.
   I will make some changes later.




Issue Time Tracking
---

Worklog Id: (was: 638951)
Time Spent: 1h 40m  (was: 1.5h)

> Improve the configurable value of Server #PURGE_INTERVAL_NANOS
> --
>
> Key: HDFS-16175
> URL: https://issues.apache.org/jira/browse/HDFS-16175
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In Server, Server#PURGE_INTERVAL_NANOS is fixed at 15 minutes.
> We can make Server#PURGE_INTERVAL_NANOS configurable, which would make RPC 
> more flexible.
> private final static long PURGE_INTERVAL_NANOS = TimeUnit.NANOSECONDS.convert(
>   15, TimeUnit.MINUTES);
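The snippet above converts a fixed 15 minutes to nanoseconds; making it configurable amounts to reading the minutes value from configuration before the conversion. A sketch with hypothetical key and default names (the actual names were still under review in the pull request):

```java
import java.util.concurrent.TimeUnit;

public class PurgeIntervalSketch {
    // Hypothetical configuration key and default; the final names were
    // still being discussed in the pull request.
    static final String PURGE_INTERVAL_MINUTES_KEY =
            "ipc.server.purge.interval.minutes";
    static final int PURGE_INTERVAL_MINUTES_DEFAULT = 15;

    // Operators configure minutes; the server's purge loop compares
    // elapsed time in nanoseconds, hence the conversion.
    static long purgeIntervalNanos(int minutes) {
        return TimeUnit.NANOSECONDS.convert(minutes, TimeUnit.MINUTES);
    }

    public static void main(String[] args) {
        System.out.println(purgeIntervalNanos(PURGE_INTERVAL_MINUTES_DEFAULT));
        // prints 900000000000 (15 minutes in nanoseconds)
    }
}
```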






[jira] [Work logged] (HDFS-16173) Improve CopyCommands#Put#executor queue configurability

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16173?focusedWorklogId=638950&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638950
 ]

ASF GitHub Bot logged work on HDFS-16173:
-

Author: ASF GitHub Bot
Created on: 18/Aug/21 01:49
Start Date: 18/Aug/21 01:49
Worklog Time Spent: 10m 
  Work Description: ferhui commented on pull request #3302:
URL: https://github.com/apache/hadoop/pull/3302#issuecomment-900750019


   @jianghuazhu Thanks for the contribution! @virajjasani Thanks for the review!




Issue Time Tracking
---

Worklog Id: (was: 638950)
Time Spent: 1h 10m  (was: 1h)

> Improve CopyCommands#Put#executor queue configurability
> ---
>
> Key: HDFS-16173
> URL: https://issues.apache.org/jira/browse/HDFS-16173
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In CopyCommands#Put, the executor queue size is fixed at 1024.
> We should make it configurable, because usage environments differ.






[jira] [Work logged] (HDFS-16175) Improve the configurable value of Server #PURGE_INTERVAL_NANOS

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16175?focusedWorklogId=638930&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638930
 ]

ASF GitHub Bot logged work on HDFS-16175:
-

Author: ASF GitHub Bot
Created on: 18/Aug/21 00:14
Start Date: 18/Aug/21 00:14
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3307:
URL: https://github.com/apache/hadoop/pull/3307#discussion_r690806472



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
##
@@ -494,6 +494,10 @@
 "ipc.server.log.slow.rpc";
   public static final boolean IPC_SERVER_LOG_SLOW_RPC_DEFAULT = false;
 
+  public static final String IPC_SERVER_PURGE_INTERVAL_NANOS_MINUTES_KEY =

Review comment:
   Sorry, I missed this one:
   we should rename this variable. It is in minutes and has nothing to do with
   nanoseconds.
   Call it IPC_SERVER_PURGE_INTERVAL_MINUTES_KEY? The same applies to
   IPC_SERVER_PURGE_INTERVAL_NANOS_DEFAULT.






Issue Time Tracking
---

Worklog Id: (was: 638930)
Time Spent: 1.5h  (was: 1h 20m)

> Improve the configurable value of Server #PURGE_INTERVAL_NANOS
> --
>
> Key: HDFS-16175
> URL: https://issues.apache.org/jira/browse/HDFS-16175
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In Server, Server#PURGE_INTERVAL_NANOS is fixed at 15 minutes.
> We can make Server#PURGE_INTERVAL_NANOS configurable, which would make RPC 
> more flexible.
> private final static long PURGE_INTERVAL_NANOS = TimeUnit.NANOSECONDS.convert(
>   15, TimeUnit.MINUTES);






[jira] [Work logged] (HDFS-15966) Empty the statistical parameters when emptying the redundant queue

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15966?focusedWorklogId=638900&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638900
 ]

ASF GitHub Bot logged work on HDFS-15966:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 21:42
Start Date: 17/Aug/21 21:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2894:
URL: https://github.com/apache/hadoop/pull/2894#issuecomment-900650739


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  2s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 13s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m  2s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 370m 27s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2894/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 455m 24s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2894/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2894 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux f1b7a9daaed3 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c343a52432d864e63f39bfde136d8a2cbe89abcb |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 

[jira] [Work logged] (HDFS-15939) Solve the problem that DataXceiverServer#run() does not record SocketTimeout exception

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15939?focusedWorklogId=638805&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638805
 ]

ASF GitHub Bot logged work on HDFS-15939:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 18:24
Start Date: 17/Aug/21 18:24
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2841:
URL: https://github.com/apache/hadoop/pull/2841#issuecomment-900530473


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  5s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 233m 38s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2841/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 317m 58s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2841/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2841 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux ab27bc0bdc27 4.15.0-151-generic #157-Ubuntu SMP Fri Jul 9 
23:07:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b0ed73569c97b4d7c225bdae5b1a1f3011fef08f |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 

[jira] [Resolved] (HDFS-16174) Refactor TempFile and TempDir in libhdfs++

2021-08-17 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HDFS-16174.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Refactor TempFile and TempDir in libhdfs++
> --
>
> Key: HDFS-16174
> URL: https://issues.apache.org/jira/browse/HDFS-16174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In C++, we generally put declarations in header files and the corresponding
> implementations in .cc files. Here, however, the implementations of TempFile
> and TempDir are in configuration_test.h itself. This offers no benefit, and
> the compilation of the TempFile and TempDir classes is duplicated for every
> #include of the configuration_test.h header. We should move them to separate
> .cc files to avoid this.






[jira] [Work logged] (HDFS-16174) Refactor TempFile and TempDir in libhdfs++

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16174?focusedWorklogId=638749&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638749
 ]

ASF GitHub Bot logged work on HDFS-16174:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 16:50
Start Date: 17/Aug/21 16:50
Worklog Time Spent: 10m 
  Work Description: goiri merged pull request #3303:
URL: https://github.com/apache/hadoop/pull/3303


   




Issue Time Tracking
---

Worklog Id: (was: 638749)
Time Spent: 1.5h  (was: 1h 20m)

> Refactor TempFile and TempDir in libhdfs++
> --
>
> Key: HDFS-16174
> URL: https://issues.apache.org/jira/browse/HDFS-16174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In C++, we generally put declarations in header files and the corresponding
> implementations in .cc files. Here, however, the implementations of TempFile
> and TempDir are in configuration_test.h itself. This offers no benefit, and
> the compilation of the TempFile and TempDir classes is duplicated for every
> #include of the configuration_test.h header. We should move them to separate
> .cc files to avoid this.






[jira] [Commented] (HDFS-16174) Refactor TempFile and TempDir in libhdfs++

2021-08-17 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17400486#comment-17400486
 ] 

Íñigo Goiri commented on HDFS-16174:


Thanks [~gautham] for the refactor.
Merged PR 3303 to trunk.

> Refactor TempFile and TempDir in libhdfs++
> --
>
> Key: HDFS-16174
> URL: https://issues.apache.org/jira/browse/HDFS-16174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In C++, we generally put declarations in header files and the 
> corresponding implementations in .cc files. Here, the implementations of 
> TempFile and TempDir are in configuration_test.h itself. This offers no 
> benefit, and the TempFile and TempDir classes are recompiled for every 
> #include of the configuration_test.h header. Thus, we should move the 
> implementations into separate .cc files to avoid this.






[jira] [Work logged] (HDFS-16175) Improve the configurable value of Server #PURGE_INTERVAL_NANOS

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16175?focusedWorklogId=638739&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638739
 ]

ASF GitHub Bot logged work on HDFS-16175:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 16:31
Start Date: 17/Aug/21 16:31
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #3307:
URL: https://github.com/apache/hadoop/pull/3307#issuecomment-900448160


   Thanks @jojochuang for the comment.
   I have submitted some new code; could you please review it?
   Thank you very much.
   




Issue Time Tracking
---

Worklog Id: (was: 638739)
Time Spent: 1h 20m  (was: 1h 10m)

> Improve the configurable value of Server #PURGE_INTERVAL_NANOS
> --
>
> Key: HDFS-16175
> URL: https://issues.apache.org/jira/browse/HDFS-16175
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> In Server, PURGE_INTERVAL_NANOS is hard-coded to 15 minutes:
> private final static long PURGE_INTERVAL_NANOS = TimeUnit.NANOSECONDS.convert(
>   15, TimeUnit.MINUTES);
> Making Server#PURGE_INTERVAL_NANOS configurable would make RPC more flexible.
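A minimal sketch of how the hard-coded interval could become configurable. The config key name ("ipc.server.purge.interval.minutes") and the helper method are illustrative assumptions, not the names adopted by HDFS-16175:

```java
import java.util.concurrent.TimeUnit;

public class PurgeInterval {
    // Default keeps the current hard-coded behaviour of 15 minutes.
    static final long DEFAULT_PURGE_INTERVAL_MINUTES = 15;

    // Stand-in for conf.getLong("ipc.server.purge.interval.minutes", 15);
    // the key name is a hypothetical example.
    static long purgeIntervalNanos(long configuredMinutes) {
        long minutes = configuredMinutes > 0
            ? configuredMinutes
            : DEFAULT_PURGE_INTERVAL_MINUTES;
        return TimeUnit.NANOSECONDS.convert(minutes, TimeUnit.MINUTES);
    }

    public static void main(String[] args) {
        // 0 (unset) falls back to the current 15-minute default.
        System.out.println(purgeIntervalNanos(0));  // 900000000000
        System.out.println(purgeIntervalNanos(5));  // 300000000000
    }
}
```

Validating the configured value and falling back to the old constant preserves today's behaviour for clusters that never set the key.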






[jira] [Work logged] (HDFS-16157) Support configuring DNS record to get list of journal nodes.

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16157?focusedWorklogId=638718&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638718
 ]

ASF GitHub Bot logged work on HDFS-16157:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 16:02
Start Date: 17/Aug/21 16:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3284:
URL: https://github.com/apache/hadoop/pull/3284#issuecomment-900425151


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 41s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  20m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 10s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  22m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  21m  1s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 27s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   4m  0s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 14s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 364m  0s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3284/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 586m 45s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestDFSAdminWithHA |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3284/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3284 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml markdownlint |
   | uname | Linux 9ae600bda287 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 
16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 376eba3da73e2a8def032736312a444e0166f175 |
   | Default Java | Private 

[jira] [Work logged] (HDFS-16175) Improve the configurable value of Server #PURGE_INTERVAL_NANOS

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16175?focusedWorklogId=638700&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638700
 ]

ASF GitHub Bot logged work on HDFS-16175:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 15:13
Start Date: 17/Aug/21 15:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3307:
URL: https://github.com/apache/hadoop/pull/3307#issuecomment-900385813


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  22m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 59s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3307/3/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 1 new + 282 
unchanged - 0 fixed = 283 total (was 282)  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 56s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 40s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 191m 57s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3307/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3307 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 372f42a0f2de 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a818ea8f61df5c7ba46c52540905db2ca414303f |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3307/3/testReport/ |
   | Max. 

[jira] [Work logged] (HDFS-16175) Improve the configurable value of Server #PURGE_INTERVAL_NANOS

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16175?focusedWorklogId=638692&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638692
 ]

ASF GitHub Bot logged work on HDFS-16175:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 14:59
Start Date: 17/Aug/21 14:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3307:
URL: https://github.com/apache/hadoop/pull/3307#issuecomment-900374339


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  8s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3307/2/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 1 new + 282 
unchanged - 0 fixed = 283 total (was 282)  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 38s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 47s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 184m 54s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3307/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3307 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux eaf8029a3eb7 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7accd25129849534ebe17f85ce8117ce7f20b141 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3307/2/testReport/ |
   | Max. 

[jira] [Updated] (HDFS-15939) Solve the problem that DataXceiverServer#run() does not record SocketTimeout exception

2021-08-17 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JiangHua Zhu updated HDFS-15939:

Component/s: datanode

> Solve the problem that DataXceiverServer#run() does not record SocketTimeout 
> exception
> --
>
> Key: HDFS-15939
> URL: https://issues.apache.org/jira/browse/HDFS-15939
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In DataXceiverServer#run(), if a SocketTimeoutException occurs, no 
> information is recorded:
> try {
>  ..
> } catch (SocketTimeoutException ignored) {
>  // wake up to see if should continue to run
> }
> Nothing is logged here, which makes troubleshooting difficult.
> We should add warning-level logs.
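A minimal sketch of the proposed logging fix. The accept callback and java.util.logging logger are stand-ins for illustration; the real DataXceiverServer calls peerServer.accept() and uses its own LOG:

```java
import java.net.SocketTimeoutException;
import java.util.concurrent.Callable;
import java.util.logging.Logger;

public class AcceptLoop {
    private static final Logger LOG = Logger.getLogger(AcceptLoop.class.getName());

    // One iteration of an accept loop. 'accept' stands in for
    // peerServer.accept(); returns true if a connection was accepted.
    static boolean tryAccept(Callable<Void> accept) {
        try {
            accept.call();
            return true;
        } catch (SocketTimeoutException ste) {
            // Previously the exception was silently ignored; logging it
            // makes repeated timeouts visible to operators.
            LOG.warning("accept() timed out, re-checking shouldRun: " + ste.getMessage());
            return false;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(tryAccept(() -> null));  // true
        // Logs a warning instead of swallowing the timeout, then prints false.
        System.out.println(tryAccept(() -> { throw new SocketTimeoutException("timed out"); }));
    }
}
```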






[jira] [Work logged] (HDFS-15939) Solve the problem that DataXceiverServer#run() does not record SocketTimeout exception

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15939?focusedWorklogId=638645&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638645
 ]

ASF GitHub Bot logged work on HDFS-15939:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 13:16
Start Date: 17/Aug/21 13:16
Worklog Time Spent: 10m 
  Work Description: jianghuazhu edited a comment on pull request #2841:
URL: https://github.com/apache/hadoop/pull/2841#issuecomment-900280620


   Sorry, I overlooked this question for a while. I will continue the work.
   @liuml07, thanks for your comment.
   I very much agree with your suggestion, so I made two improvements:
   1. Renamed the ignored variable to ste.
   2. Closed the peer when this exception occurs, because this exception does not seem trivial.
   If there are other suggestions, please discuss them.
   Thank you very much.
   




Issue Time Tracking
---

Worklog Id: (was: 638645)
Time Spent: 1h 10m  (was: 1h)

> Solve the problem that DataXceiverServer#run() does not record SocketTimeout 
> exception
> --
>
> Key: HDFS-15939
> URL: https://issues.apache.org/jira/browse/HDFS-15939
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In DataXceiverServer#run(), if a SocketTimeoutException occurs, no 
> information is recorded:
> try {
>  ..
> } catch (SocketTimeoutException ignored) {
>  // wake up to see if should continue to run
> }
> Nothing is logged here, which makes troubleshooting difficult.
> We should add warning-level logs.






[jira] [Work logged] (HDFS-15939) Solve the problem that DataXceiverServer#run() does not record SocketTimeout exception

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15939?focusedWorklogId=638644&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638644
 ]

ASF GitHub Bot logged work on HDFS-15939:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 13:15
Start Date: 17/Aug/21 13:15
Worklog Time Spent: 10m 
  Work Description: jianghuazhu edited a comment on pull request #2841:
URL: https://github.com/apache/hadoop/pull/2841#issuecomment-900280620


   Sorry, I overlooked this question for a while. I will continue the work.
   @liuml07, thanks for your comment.
   I very much agree with your opinion, so I made two improvements:
   Renamed the ignored variable to ste;
   Closed the peer when this exception occurs, because this exception does not seem trivial.
   If there are other suggestions, please discuss them.
   Thank you very much.
   




Issue Time Tracking
---

Worklog Id: (was: 638644)
Time Spent: 1h  (was: 50m)

> Solve the problem that DataXceiverServer#run() does not record SocketTimeout 
> exception
> --
>
> Key: HDFS-15939
> URL: https://issues.apache.org/jira/browse/HDFS-15939
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In DataXceiverServer#run(), if a SocketTimeoutException occurs, no 
> information is recorded:
> try {
>  ..
> } catch (SocketTimeoutException ignored) {
>  // wake up to see if should continue to run
> }
> Nothing is logged here, which makes troubleshooting difficult.
> We should add warning-level logs.






[jira] [Work logged] (HDFS-15939) Solve the problem that DataXceiverServer#run() does not record SocketTimeout exception

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15939?focusedWorklogId=638642&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638642
 ]

ASF GitHub Bot logged work on HDFS-15939:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 13:05
Start Date: 17/Aug/21 13:05
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #2841:
URL: https://github.com/apache/hadoop/pull/2841#issuecomment-900280620


   Sorry, I overlooked this question for a while. I will continue the work.
   @liuml07, thanks for your comment.
   I very much agree with your opinion, so I made two improvements:
   1. Renamed the ignored variable to ste.
   2. Closed the peer when this exception occurs, because this exception does not seem trivial.
   If there are other suggestions, please discuss them.
   Thank you very much.




Issue Time Tracking
---

Worklog Id: (was: 638642)
Time Spent: 50m  (was: 40m)

> Solve the problem that DataXceiverServer#run() does not record SocketTimeout 
> exception
> --
>
> Key: HDFS-15939
> URL: https://issues.apache.org/jira/browse/HDFS-15939
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In DataXceiverServer#run(), if a SocketTimeoutException occurs, no 
> information is recorded:
> try {
>  ..
> } catch (SocketTimeoutException ignored) {
>  // wake up to see if should continue to run
> }
> Nothing is logged here, which makes troubleshooting difficult.
> We should add warning-level logs.






[jira] [Work logged] (HDFS-16158) Discover datanodes with unbalanced volume usage by the standard deviation

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16158?focusedWorklogId=638618&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638618
 ]

ASF GitHub Bot logged work on HDFS-16158:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 11:38
Start Date: 17/Aug/21 11:38
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3288:
URL: https://github.com/apache/hadoop/pull/3288#issuecomment-900221653


   Thanks @jojochuang for your review; I will fix these problems ASAP.




Issue Time Tracking
---

Worklog Id: (was: 638618)
Time Spent: 1.5h  (was: 1h 20m)

> Discover datanodes with unbalanced volume usage by the standard deviation 
> --
>
> Key: HDFS-16158
> URL: https://issues.apache.org/jira/browse/HDFS-16158
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-08-11-10-14-58-430.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Discover datanodes with unbalanced volume usage by the standard deviation.
> In some scenarios, datanode disk usage can become unbalanced:
> 1. A damaged disk is repaired and brought back online.
> 2. Disks are added to some datanodes.
> 3. Some disks are damaged, resulting in slow data writing.
> 4. Custom volume-choosing policies are used.
> With unbalanced disk usage, a sudden increase in datanode write traffic may 
> keep disk I/O busy even on volumes with low usage, decreasing throughput 
> across datanodes.
> In this case, we need to find these nodes in time to run the disk balancer 
> or take other action. From the volume usage of each datanode, we can 
> calculate the standard deviation of the volume usage: the more unbalanced 
> the volumes, the higher the standard deviation.
> To avoid making the namenode too busy, we can calculate the standard 
> deviation on the datanode side, transmit it to the namenode through 
> heartbeats, and display the result on the namenode web UI. We can then sort 
> on the web UI to find the nodes whose volume usage is unbalanced.
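The per-datanode metric described above is a plain standard deviation over volume usage percentages. A sketch follows; the empty-list guard is an addition the description does not state (it mirrors the reviewer's later request to avoid dividing by zero):

```java
import java.util.Arrays;
import java.util.List;

public class VolumeUsage {
    // Standard deviation of per-volume usage percentages for one datanode.
    static float usageStdDev(List<Float> usagePercents) {
        if (usagePercents == null || usagePercents.isEmpty()) {
            return 0f;  // guard against division by zero when no volumes report
        }
        float mean = 0f;
        for (float u : usagePercents) {
            mean += u;
        }
        mean /= usagePercents.size();
        float dev = 0f;
        for (float u : usagePercents) {
            dev += (u - mean) * (u - mean);
        }
        return (float) Math.sqrt(dev / usagePercents.size());
    }

    public static void main(String[] args) {
        // Balanced volumes give 0; the more skewed, the larger the value.
        System.out.println(usageStdDev(Arrays.asList(50f, 50f, 50f)));  // 0.0
        System.out.println(usageStdDev(Arrays.asList(10f, 30f)));       // 10.0
    }
}
```

Computing this on the datanode keeps the namenode's cost to storing and sorting one float per node.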






[jira] [Work logged] (HDFS-16158) Discover datanodes with unbalanced volume usage by the standard deviation

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16158?focusedWorklogId=638617&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638617
 ]

ASF GitHub Bot logged work on HDFS-16158:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 11:29
Start Date: 17/Aug/21 11:29
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3288:
URL: https://github.com/apache/hadoop/pull/3288#discussion_r690277334



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
##
@@ -3184,6 +3191,26 @@ public void shutdownBlockPool(String bpid) {
 return info;
   }
 
+  @Override
+  public float getVolumeUsageStdDev() {
+    Collection<VolumeInfo> volumeInfos = getVolumeInfo();
+    ArrayList<Float> usages = new ArrayList<>();
+    float totalDfsUsed = 0;
+    float dev = 0;
+    for (VolumeInfo v : volumeInfos) {
+      usages.add(v.volumeUsagePercent);
+      totalDfsUsed += v.volumeUsagePercent;
+    }
+
+    totalDfsUsed /= volumeInfos.size();
+    Collections.sort(usages);
+    for (Float usage : usages) {
+      dev += (usage - totalDfsUsed) * (usage - totalDfsUsed);
+    }
+    dev = (float) Math.sqrt(dev / usages.size());

Review comment:
   can we add a check to ensure usages.size() never returns 0?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
##
@@ -675,6 +675,10 @@ public long getDfsUsed() throws IOException {
 return volumes.getDfsUsed();
   }
 
+  public long setDfsUsed() throws IOException {

Review comment:
   this API doesn't appear to be used.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##
@@ -6515,15 +6518,16 @@ public String getLiveNodes() {
   .put("nonDfsUsedSpace", node.getNonDfsUsed())
   .put("capacity", node.getCapacity())
   .put("numBlocks", node.numBlocks())
-  .put("version", node.getSoftwareVersion())
+  .put("version", "")

Review comment:
   why is this removed?






Issue Time Tracking
---

Worklog Id: (was: 638617)
Time Spent: 1h 20m  (was: 1h 10m)

> Discover datanodes with unbalanced volume usage by the standard deviation 
> --
>
> Key: HDFS-16158
> URL: https://issues.apache.org/jira/browse/HDFS-16158
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-08-11-10-14-58-430.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Discover datanodes with unbalanced volume usage by the standard deviation
> In some scenarios, datanode disk usage can become unbalanced:
> 1. A damaged disk is repaired and brought back online.
> 2. Disks are added to some datanodes.
> 3. Some disks are damaged, resulting in slow data writing.
> 4. A custom volume choosing policy is in use.
> In the case of unbalanced disk usage, a sudden increase in datanode write 
> traffic may cause busy disk I/O on volumes with low usage, decreasing 
> throughput across datanodes.
> In this case, we need to find these nodes in time to run the disk balancer or 
> take other action. Based on the volume usage of each datanode, we can calculate 
> the standard deviation of the volume usage. The more unbalanced the volumes, 
> the higher the standard deviation.
> To prevent the namenode from becoming too busy, we can calculate the standard 
> deviation on the datanode side, transmit it to the namenode through the 
> heartbeat, and display it on the namenode web UI. We can then sort there to 
> find the nodes whose volume usages are unbalanced.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16175) Improve the configurable value of Server #PURGE_INTERVAL_NANOS

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16175?focusedWorklogId=638612=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638612
 ]

ASF GitHub Bot logged work on HDFS-16175:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 11:09
Start Date: 17/Aug/21 11:09
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #3307:
URL: https://github.com/apache/hadoop/pull/3307#issuecomment-900204198


   OK.
   Thanks @jojochuang  for the comment. I will continue to work.




Issue Time Tracking
---

Worklog Id: (was: 638612)
Time Spent: 50m  (was: 40m)

> Improve the configurable value of Server #PURGE_INTERVAL_NANOS
> --
>
> Key: HDFS-16175
> URL: https://issues.apache.org/jira/browse/HDFS-16175
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In Server, PURGE_INTERVAL_NANOS is hard-coded to 15 minutes.
> We can make Server#PURGE_INTERVAL_NANOS configurable, which will make RPC 
> more flexible.
> private final static long PURGE_INTERVAL_NANOS = TimeUnit.NANOSECONDS.convert(
>   15, TimeUnit.MINUTES);
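> The shape of the change can be sketched without the Hadoop Configuration
> plumbing (the key name and default constant here are illustrative, not the
> patch's actual identifiers):

```java
import java.util.concurrent.TimeUnit;

public class PurgeIntervalConfig {
  // Hypothetical key/default names; the real patch defines them in
  // CommonConfigurationKeysPublic and reads them via Configuration.
  static final String PURGE_INTERVAL_MINUTES_KEY = "ipc.server.purge.interval";
  static final int PURGE_INTERVAL_MINUTES_DEFAULT = 15;

  /** Resolve the purge interval in nanoseconds from a configured minute value. */
  static long purgeIntervalNanos(int configuredMinutes) {
    // Fall back to the old fixed 15-minute value for non-positive settings.
    int minutes = configuredMinutes > 0
        ? configuredMinutes : PURGE_INTERVAL_MINUTES_DEFAULT;
    return TimeUnit.NANOSECONDS.convert(minutes, TimeUnit.MINUTES);
  }
}
```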






[jira] [Work logged] (HDFS-16175) Improve the configurable value of Server #PURGE_INTERVAL_NANOS

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16175?focusedWorklogId=638600=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638600
 ]

ASF GitHub Bot logged work on HDFS-16175:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 10:17
Start Date: 17/Aug/21 10:17
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3307:
URL: https://github.com/apache/hadoop/pull/3307#discussion_r690232653



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
##
@@ -494,6 +494,10 @@
 "ipc.server.log.slow.rpc";
   public static final boolean IPC_SERVER_LOG_SLOW_RPC_DEFAULT = false;
 
+  public static final String IPC_SERVER_PURGE_INTERVAL_NANOS_MINUTES_KEY =

Review comment:
   can you also add this property to core-default.xml?
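   For reference, a core-default.xml entry might look like the following (the property name and description are guesses based on the constant's name; the actual key string is defined in the patch, not shown here):

   ```xml
   <property>
     <name>ipc.server.purge.interval.nanos.minutes</name>
     <value>15</value>
     <description>
       The interval, in minutes, at which the IPC server purges long-running
       calls. Previously hard-coded to 15 minutes.
     </description>
   </property>
   ```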






Issue Time Tracking
---

Worklog Id: (was: 638600)
Time Spent: 40m  (was: 0.5h)

> Improve the configurable value of Server #PURGE_INTERVAL_NANOS
> --
>
> Key: HDFS-16175
> URL: https://issues.apache.org/jira/browse/HDFS-16175
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In Server, Server #PURGE_INTERVAL_NANOS is a fixed value, 15.
> We can try to improve the configurable value of Server #PURGE_INTERVAL_NANOS, 
> which will make RPC more flexible.
> private final static long PURGE_INTERVAL_NANOS = TimeUnit.NANOSECONDS.convert(
>   15, TimeUnit.MINUTES);






[jira] [Resolved] (HDFS-16162) Improve DFSUtil#checkProtectedDescendants() related parameter comments

2021-08-17 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-16162.

Fix Version/s: 3.4.0
   Resolution: Fixed

> Improve DFSUtil#checkProtectedDescendants() related parameter comments
> --
>
> Key: HDFS-16162
> URL: https://issues.apache.org/jira/browse/HDFS-16162
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Some parameter comments related to DFSUtil#checkProtectedDescendants() are 
> missing, for example:
> /**
>  * If the given directory has any non-empty protected descendants, throw
>  * (Including itself).
>  *
>  * @param iip directory, to check its descendants.
>  * @throws AccessControlException if it is a non-empty protected 
> descendant
>  *found.
>  * @throws ParentNotDirectoryException
>  * @throws UnresolvedLinkException
>  */
> public static void checkProtectedDescendants(
> FSDirectory fsd, INodesInPath iip)
> throws AccessControlException, UnresolvedLinkException,
> ParentNotDirectoryException {
> The description of fsd is missing here.
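> A completed javadoc might read as follows (a sketch; the wording of the
> added @param and @throws descriptions is mine, not the committed patch's):
> 
> ```java
> /**
>  * If the given directory has any non-empty protected descendants, throw
>  * (including itself).
>  *
>  * @param fsd the FSDirectory used to resolve and inspect the descendants.
>  * @param iip directory, to check its descendants.
>  * @throws AccessControlException if a non-empty protected descendant is
>  *         found.
>  * @throws ParentNotDirectoryException if a path component is not a directory.
>  * @throws UnresolvedLinkException if a symlink cannot be resolved.
>  */
> public static void checkProtectedDescendants(
>     FSDirectory fsd, INodesInPath iip)
>     throws AccessControlException, UnresolvedLinkException,
>     ParentNotDirectoryException {
> ```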






[jira] [Work logged] (HDFS-16162) Improve DFSUtil#checkProtectedDescendants() related parameter comments

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16162?focusedWorklogId=638575=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638575
 ]

ASF GitHub Bot logged work on HDFS-16162:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 08:47
Start Date: 17/Aug/21 08:47
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on pull request #3295:
URL: https://github.com/apache/hadoop/pull/3295#issuecomment-900112457


   merged. thanks all!




Issue Time Tracking
---

Worklog Id: (was: 638575)
Time Spent: 1.5h  (was: 1h 20m)

> Improve DFSUtil#checkProtectedDescendants() related parameter comments
> --
>
> Key: HDFS-16162
> URL: https://issues.apache.org/jira/browse/HDFS-16162
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Some parameter comments related to DFSUtil#checkProtectedDescendants() are 
> missing, for example:
> /**
>  * If the given directory has any non-empty protected descendants, throw
>  * (Including itself).
>  *
>  * @param iip directory, to check its descendants.
>  * @throws AccessControlException if it is a non-empty protected 
> descendant
>  *found.
>  * @throws ParentNotDirectoryException
>  * @throws UnresolvedLinkException
>  */
> public static void checkProtectedDescendants(
> FSDirectory fsd, INodesInPath iip)
> throws AccessControlException, UnresolvedLinkException,
> ParentNotDirectoryException {
> The description of fsd is missing here.






[jira] [Work logged] (HDFS-16162) Improve DFSUtil#checkProtectedDescendants() related parameter comments

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16162?focusedWorklogId=638574=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638574
 ]

ASF GitHub Bot logged work on HDFS-16162:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 08:47
Start Date: 17/Aug/21 08:47
Worklog Time Spent: 10m 
  Work Description: jojochuang merged pull request #3295:
URL: https://github.com/apache/hadoop/pull/3295


   




Issue Time Tracking
---

Worklog Id: (was: 638574)
Time Spent: 1h 20m  (was: 1h 10m)

> Improve DFSUtil#checkProtectedDescendants() related parameter comments
> --
>
> Key: HDFS-16162
> URL: https://issues.apache.org/jira/browse/HDFS-16162
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Some parameter comments related to DFSUtil#checkProtectedDescendants() are 
> missing, for example:
> /**
>  * If the given directory has any non-empty protected descendants, throw
>  * (Including itself).
>  *
>  * @param iip directory, to check its descendants.
>  * @throws AccessControlException if it is a non-empty protected 
> descendant
>  *found.
>  * @throws ParentNotDirectoryException
>  * @throws UnresolvedLinkException
>  */
> public static void checkProtectedDescendants(
> FSDirectory fsd, INodesInPath iip)
> throws AccessControlException, UnresolvedLinkException,
> ParentNotDirectoryException {
> The description of fsd is missing here.






[jira] [Work logged] (HDFS-16173) Improve CopyCommands#Put#executor queue configurability

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16173?focusedWorklogId=638555=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638555
 ]

ASF GitHub Bot logged work on HDFS-16173:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 07:31
Start Date: 17/Aug/21 07:31
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #3302:
URL: https://github.com/apache/hadoop/pull/3302#issuecomment-900063826


   Thanks @virajjasani  for the comment.




Issue Time Tracking
---

Worklog Id: (was: 638555)
Time Spent: 1h  (was: 50m)

> Improve CopyCommands#Put#executor queue configurability
> ---
>
> Key: HDFS-16173
> URL: https://issues.apache.org/jira/browse/HDFS-16173
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> In CopyCommands#Put, the executor queue capacity is a fixed value, 1024.
> We should make it configurable, because deployment environments differ.
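> The gist of the change can be sketched as follows (a self-contained
> illustration; the class, method, and parameter names are mine, and the real
> CopyCommands#Put would read the capacity from a configuration key):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PutExecutorSketch {
  /** Build the upload executor with a configurable queue capacity instead of a fixed 1024. */
  static ThreadPoolExecutor newPutExecutor(int numThreads, int queueCapacity) {
    return new ThreadPoolExecutor(numThreads, numThreads,
        1, TimeUnit.SECONDS,
        // was effectively: new ArrayBlockingQueue<>(1024)
        new ArrayBlockingQueue<>(queueCapacity),
        // run in the caller when the queue is full, so puts block rather than fail
        new ThreadPoolExecutor.CallerRunsPolicy());
  }
}
```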






[jira] [Work logged] (HDFS-16162) Improve DFSUtil#checkProtectedDescendants() related parameter comments

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16162?focusedWorklogId=638551=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638551
 ]

ASF GitHub Bot logged work on HDFS-16162:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 07:27
Start Date: 17/Aug/21 07:27
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #3295:
URL: https://github.com/apache/hadoop/pull/3295#issuecomment-900061720


   Thanks @virajjasani  for the comment.




Issue Time Tracking
---

Worklog Id: (was: 638551)
Time Spent: 1h 10m  (was: 1h)

> Improve DFSUtil#checkProtectedDescendants() related parameter comments
> --
>
> Key: HDFS-16162
> URL: https://issues.apache.org/jira/browse/HDFS-16162
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Some parameter comments related to DFSUtil#checkProtectedDescendants() are 
> missing, for example:
> /**
>  * If the given directory has any non-empty protected descendants, throw
>  * (Including itself).
>  *
>  * @param iip directory, to check its descendants.
>  * @throws AccessControlException if it is a non-empty protected 
> descendant
>  *found.
>  * @throws ParentNotDirectoryException
>  * @throws UnresolvedLinkException
>  */
> public static void checkProtectedDescendants(
> FSDirectory fsd, INodesInPath iip)
> throws AccessControlException, UnresolvedLinkException,
> ParentNotDirectoryException {
> The description of fsd is missing here.






[jira] [Updated] (HDFS-16174) Refactor TempFile and TempDir in libhdfs++

2021-08-17 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra updated HDFS-16174:
--
Summary: Refactor TempFile and TempDir in libhdfs++  (was: Refactor 
TempFile and TempDir)

> Refactor TempFile and TempDir in libhdfs++
> --
>
> Key: HDFS-16174
> URL: https://issues.apache.org/jira/browse/HDFS-16174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> In C++, we generally put declarations in header files and the 
> corresponding implementations in .cc files. Here, however, the 
> implementations of TempFile and TempDir live in configuration_test.h 
> itself. This offers no benefit, and the compilation of the TempFile and 
> TempDir classes is duplicated for every #include of the configuration_test.h 
> header. Thus, we should move them into separate .cc files to avoid this.






[jira] [Work logged] (HDFS-16162) Improve DFSUtil#checkProtectedDescendants() related parameter comments

2021-08-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16162?focusedWorklogId=638534=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-638534
 ]

ASF GitHub Bot logged work on HDFS-16162:
-

Author: ASF GitHub Bot
Created on: 17/Aug/21 06:20
Start Date: 17/Aug/21 06:20
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #3295:
URL: https://github.com/apache/hadoop/pull/3295#issuecomment-900025678


   @ayushtkn , thanks for your comment.
   




Issue Time Tracking
---

Worklog Id: (was: 638534)
Time Spent: 1h  (was: 50m)

> Improve DFSUtil#checkProtectedDescendants() related parameter comments
> --
>
> Key: HDFS-16162
> URL: https://issues.apache.org/jira/browse/HDFS-16162
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Some parameter comments related to DFSUtil#checkProtectedDescendants() are 
> missing, for example:
> /**
>  * If the given directory has any non-empty protected descendants, throw
>  * (Including itself).
>  *
>  * @param iip directory, to check its descendants.
>  * @throws AccessControlException if it is a non-empty protected 
> descendant
>  *found.
>  * @throws ParentNotDirectoryException
>  * @throws UnresolvedLinkException
>  */
> public static void checkProtectedDescendants(
> FSDirectory fsd, INodesInPath iip)
> throws AccessControlException, UnresolvedLinkException,
> ParentNotDirectoryException {
> The description of fsd is missing here.






[jira] [Updated] (HDFS-16173) Improve CopyCommands#Put#executor queue configurability

2021-08-17 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JiangHua Zhu updated HDFS-16173:

Component/s: (was: hdfs)
 fs

> Improve CopyCommands#Put#executor queue configurability
> ---
>
> Key: HDFS-16173
> URL: https://issues.apache.org/jira/browse/HDFS-16173
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In CopyCommands#Put, the executor queue capacity is a fixed value, 1024.
> We should make it configurable, because deployment environments differ.


