[jira] [Work logged] (HDFS-15633) Avoid redundant RPC calls for getDiskStatus

2020-10-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15633?focusedWorklogId=501405&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-501405
 ]

ASF GitHub Bot logged work on HDFS-15633:
-

Author: ASF GitHub Bot
Created on: 16/Oct/20 05:07
Start Date: 16/Oct/20 05:07
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #2386:
URL: https://github.com/apache/hadoop/pull/2386#issuecomment-709783162


   Thanx @ferhui for the review!!!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 501405)
Time Spent: 1h 50m  (was: 1h 40m)

> Avoid redundant RPC calls for getDiskStatus
> ---
>
> Key: HDFS-15633
> URL: https://issues.apache.org/jira/browse/HDFS-15633
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> There are 3 RPC calls to fetch the same values:
> {code:java}
>   public FsStatus getDiskStatus() throws IOException {
>     return new FsStatus(getStateByIndex(0),
>         getStateByIndex(1), getStateByIndex(2));
>   }
> {code}
> {{getStateByIndex()}} is called thrice, and each call is a {{getStats}} RPC to
> the namenode. The same could be achieved with a single call.
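
For illustration, a minimal sketch of the single-call approach. It assumes a
DFSClient-style helper {{callGetStats()}} that returns the namenode stats
array; the {{getStateAtIndex()}} helper name is illustrative, not necessarily
the merged change:

{code:java}
  // One getStats RPC fetches the whole stats array; the three FsStatus
  // fields are then read locally instead of issuing three separate RPCs.
  public FsStatus getDiskStatus() throws IOException {
    long[] states = callGetStats();  // single namenode.getStats() RPC
    return new FsStatus(getStateAtIndex(states, 0),   // capacity
        getStateAtIndex(states, 1),                   // used
        getStateAtIndex(states, 2));                  // remaining
  }

  private static long getStateAtIndex(long[] states, int index) {
    return states.length > index ? states[index] : -1;
  }
{code}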



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15633) Avoid redundant RPC calls for getDiskStatus

2020-10-15 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15633:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged to trunk!!!

> Avoid redundant RPC calls for getDiskStatus
> ---
>
> Key: HDFS-15633
> URL: https://issues.apache.org/jira/browse/HDFS-15633
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> There are 3 RPC calls to fetch the same values:
> {code:java}
>   public FsStatus getDiskStatus() throws IOException {
>     return new FsStatus(getStateByIndex(0),
>         getStateByIndex(1), getStateByIndex(2));
>   }
> {code}
> {{getStateByIndex()}} is called thrice, and each call is a {{getStats}} RPC to
> the namenode. The same could be achieved with a single call.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15633) Avoid redundant RPC calls for getDiskStatus

2020-10-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15633?focusedWorklogId=501404&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-501404
 ]

ASF GitHub Bot logged work on HDFS-15633:
-

Author: ASF GitHub Bot
Created on: 16/Oct/20 05:06
Start Date: 16/Oct/20 05:06
Worklog Time Spent: 10m 
  Work Description: ayushtkn merged pull request #2386:
URL: https://github.com/apache/hadoop/pull/2386


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 501404)
Time Spent: 1h 40m  (was: 1.5h)

> Avoid redundant RPC calls for getDiskStatus
> ---
>
> Key: HDFS-15633
> URL: https://issues.apache.org/jira/browse/HDFS-15633
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> There are 3 RPC calls to fetch the same values:
> {code:java}
>   public FsStatus getDiskStatus() throws IOException {
>     return new FsStatus(getStateByIndex(0),
>         getStateByIndex(1), getStateByIndex(2));
>   }
> {code}
> {{getStateByIndex()}} is called thrice, and each call is a {{getStats}} RPC to
> the namenode. The same could be achieved with a single call.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15630) RBF: Fix wrong client IP info in CallerContext when requests mount points with multi-destinations.

2020-10-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17215177#comment-17215177
 ] 

Hadoop QA commented on HDFS-15630:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
59s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch does not contain any @author tags. 
{color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch appears to include 2 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
31s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 26s{color} |  | {color:green} branch has no errors when building and 
testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
13s{color} |  | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} |  | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} blanks {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch has no blanks issues. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | 
[/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt|https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/236/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt]
 | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 34s{color} |  | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} |  | {color:green} the patch 

[jira] [Commented] (HDFS-15618) Improve datanode shutdown latency

2020-10-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17215153#comment-17215153
 ] 

Hadoop QA commented on HDFS-15618:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
19s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch does not contain any @author tags. 
{color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch appears to include 1 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 4s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 25s{color} |  | {color:green} branch has no errors when building and 
testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
9s{color} |  | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
6s{color} |  | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} blanks {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch has no blanks issues. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} |  | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 21s{color} |  | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} |  | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:red}-1{color} | {color:red} unit {color} | 

[jira] [Commented] (HDFS-15630) RBF: Fix wrong client IP info in CallerContext when requests mount points with multi-destinations.

2020-10-15 Thread Hui Fei (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17215137#comment-17215137
 ] 

Hui Fei commented on HDFS-15630:


[~smarthan] Thanks for your patch.
Some comments:

{code:java}
+if (clientIp == null || clientIp.length() == 0) {
+  return;
+}
{code}
Will the client IP ever actually be null or empty here?

{code:java}
+if (origContext != null && origContext.contains(clientIp)) {
+  return;
+}
{code}
Does the context contain clientIp:x.x.x.x because a CallerContext that already 
had the clientIp appended gets reused? If the param CallerContext is the 
original one the remote client sent, can we use that directly so it would not 
be a problem?


{code:java}
-// Current callerContext is null
-assertNull(CallerContext.getCurrent());
-
 // Set client context
 CallerContext.setCurrent(
 new CallerContext.Builder("clientContext").build());
 
+// Assert the initial caller context as expected
+assertEquals("clientContext", CallerContext.getCurrent().getContext());
+
{code}
Why remove the assert? It feels strange that the context is not null before we set it.


{code:java}
+String expectContext = "callerContext=clientContext,clientIp:"
++ InetAddress.getLocalHost().getHostAddress();
{code}
Not sure whether this is the expected address when a machine has multiple 
network interfaces; pending Jenkins.


> RBF: Fix wrong client IP info in CallerContext when requests mount points 
> with multi-destinations.
> --
>
> Key: HDFS-15630
> URL: https://issues.apache.org/jira/browse/HDFS-15630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Chengwei Wang
>Assignee: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15630.001.patch, HDFS-15630.002.patch, 
> HDFS-15630.003.patch
>
>
> There are two issues with the client IP info in CallerContext when we 
> request mount points with multi-destinations.
>  # the clientIp is duplicated in CallerContext when using 
> RouterRpcClient#invokeSequential.
>  # the clientIp is missing from CallerContext when using 
> RouterRpcClient#invokeConcurrent. 
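
For illustration, a minimal sketch of an idempotent append that would cover
both cases, written against the public {{org.apache.hadoop.ipc.CallerContext}}
API; the method name and the {{clientIp:}} tag format follow the snippets
reviewed above and are assumptions, not the patch itself:

{code:java}
  // Tag the current CallerContext with the client IP exactly once, so
  // invokeSequential cannot duplicate it and invokeConcurrent cannot drop it.
  private static void appendClientIpToCallerContext(String clientIp) {
    if (clientIp == null || clientIp.isEmpty()) {
      return;                       // nothing to tag
    }
    CallerContext current = CallerContext.getCurrent();
    String origContext = current == null ? null : current.getContext();
    String tag = "clientIp:" + clientIp;
    if (origContext != null && origContext.contains(tag)) {
      return;                       // already tagged, avoid duplication
    }
    String newContext = origContext == null ? tag : origContext + "," + tag;
    CallerContext.setCurrent(new CallerContext.Builder(newContext).build());
  }
{code}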



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15630) RBF: Fix wrong client IP info in CallerContext when requests mount points with multi-destinations.

2020-10-15 Thread Chengwei Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17215132#comment-17215132
 ] 

Chengwei Wang commented on HDFS-15630:
--

Submitted patch v003 to fix the javadoc and the UT.

[~elgoiri] [~ferhui], could you help to take a look again?

> RBF: Fix wrong client IP info in CallerContext when requests mount points 
> with multi-destinations.
> --
>
> Key: HDFS-15630
> URL: https://issues.apache.org/jira/browse/HDFS-15630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Chengwei Wang
>Assignee: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15630.001.patch, HDFS-15630.002.patch, 
> HDFS-15630.003.patch
>
>
> There are two issues with the client IP info in CallerContext when we 
> request mount points with multi-destinations.
>  # the clientIp is duplicated in CallerContext when using 
> RouterRpcClient#invokeSequential.
>  # the clientIp is missing from CallerContext when using 
> RouterRpcClient#invokeConcurrent. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14383) Compute datanode load based on StoragePolicy

2020-10-15 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HDFS-14383:
---

Assignee: Ayush Saxena

> Compute datanode load based on StoragePolicy
> 
>
> Key: HDFS-14383
> URL: https://issues.apache.org/jira/browse/HDFS-14383
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.7.3, 3.1.2
>Reporter: Karthik Palanisamy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14383-01.patch, HDFS-14383-02.patch
>
>
> Datanode load check logic needs to be changed because the existing 
> computation does not consider StoragePolicy.
> DatanodeManager#getInServiceXceiverAverage
> {code}
> public double getInServiceXceiverAverage() {
>   double avgLoad = 0;
>   final int nodes = getNumDatanodesInService();
>   if (nodes != 0) {
>     final int xceivers = heartbeatManager
>         .getInServiceXceiverCount();
>     avgLoad = (double) xceivers / nodes;
>   }
>   return avgLoad;
> }
> {code}
>  
> For example: with 10 nodes (HOT) averaging 50 xceivers and 90 nodes (COLD) 
> averaging 10 xceivers, the threshold calculated by the NN is 28 (((500 + 
> 900)/100)*2), which means those 10 nodes (the whole HOT tier) become 
> unavailable while the COLD tier nodes are barely in use. Turning this check 
> off helps to mitigate the issue; however, dfs.namenode.replication.considerLoad 
> helps to "balance" the load across the DNs, so turning it off can lead to 
> situations where specific DNs are "overloaded".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15630) RBF: Fix wrong client IP info in CallerContext when requests mount points with multi-destinations.

2020-10-15 Thread Chengwei Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengwei Wang updated HDFS-15630:
-
Attachment: HDFS-15630.003.patch

> RBF: Fix wrong client IP info in CallerContext when requests mount points 
> with multi-destinations.
> --
>
> Key: HDFS-15630
> URL: https://issues.apache.org/jira/browse/HDFS-15630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Chengwei Wang
>Assignee: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15630.001.patch, HDFS-15630.002.patch, 
> HDFS-15630.003.patch
>
>
> There are two issues with the client IP info in CallerContext when we 
> request mount points with multi-destinations.
>  # the clientIp is duplicated in CallerContext when using 
> RouterRpcClient#invokeSequential.
>  # the clientIp is missing from CallerContext when using 
> RouterRpcClient#invokeConcurrent. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15633) Avoid redundant RPC calls for getDiskStatus

2020-10-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15633?focusedWorklogId=501366&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-501366
 ]

ASF GitHub Bot logged work on HDFS-15633:
-

Author: ASF GitHub Bot
Created on: 16/Oct/20 02:18
Start Date: 16/Oct/20 02:18
Worklog Time Spent: 10m 
  Work Description: ferhui commented on pull request #2386:
URL: https://github.com/apache/hadoop/pull/2386#issuecomment-709689366


   +1



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 501366)
Time Spent: 1.5h  (was: 1h 20m)

> Avoid redundant RPC calls for getDiskStatus
> ---
>
> Key: HDFS-15633
> URL: https://issues.apache.org/jira/browse/HDFS-15633
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> There are 3 RPC calls to fetch the same values:
> {code:java}
>   public FsStatus getDiskStatus() throws IOException {
>     return new FsStatus(getStateByIndex(0),
>         getStateByIndex(1), getStateByIndex(2));
>   }
> {code}
> {{getStateByIndex()}} is called thrice, and each call is a {{getStats}} RPC to
> the namenode. The same could be achieved with a single call.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15634) Invalidate block on decommissioning DataNode after replication

2020-10-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15634?focusedWorklogId=501363&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-501363
 ]

ASF GitHub Bot logged work on HDFS-15634:
-

Author: ASF GitHub Bot
Created on: 16/Oct/20 02:13
Start Date: 16/Oct/20 02:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2388:
URL: https://github.com/apache/hadoop/pull/2388#issuecomment-709687903


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  28m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  29m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  2s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  0s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 58s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  95m 11s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2388/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 207m 14s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
   |   | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestStripedFileAppend |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2388/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2388 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2bbb452b3bd4 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | 

[jira] [Work logged] (HDFS-15633) Avoid redundant RPC calls for getDiskStatus

2020-10-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15633?focusedWorklogId=501360&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-501360
 ]

ASF GitHub Bot logged work on HDFS-15633:
-

Author: ASF GitHub Bot
Created on: 16/Oct/20 01:56
Start Date: 16/Oct/20 01:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2386:
URL: https://github.com/apache/hadoop/pull/2386#issuecomment-709300808


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   3m  3s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/6/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 24s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/6/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  hadoop-hdfs-client in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   4m 27s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/6/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  hadoop-hdfs-client in trunk failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   0m 55s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/6/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-client in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |  21m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |  23m  6s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   0m 42s | 
[/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/6/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-client in trunk failed.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 41s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/6/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-client in the patch failed.  |
   | -1 :x: |  compile  |   0m 47s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/6/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  hadoop-hdfs-client in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |   0m 47s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/6/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  hadoop-hdfs-client in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 41s | 

[jira] [Work logged] (HDFS-15633) Avoid redundant RPC calls for getDiskStatus

2020-10-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15633?focusedWorklogId=501359&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-501359
 ]

ASF GitHub Bot logged work on HDFS-15633:
-

Author: ASF GitHub Bot
Created on: 16/Oct/20 01:56
Start Date: 16/Oct/20 01:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2386:
URL: https://github.com/apache/hadoop/pull/2386#issuecomment-70939


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 11s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  4s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 59s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | -1 :x: |  shadedclient  |  25m  6s |  |  patch has errors when building 
and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 27s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/4/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  hadoop-hdfs-client in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 27s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/4/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  hadoop-hdfs-client in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  findbugs  |   0m 26s | 
[/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/4/artifact/out/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-client in the patch failed.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   0m 28s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-client in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 28s |  |  ASF License check generated no 
output?  |
   |  |   |  96m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/4/artifact/out/Dockerfile
 |
   | GITHUB PR | 

[jira] [Work logged] (HDFS-15633) Avoid redundant RPC calls for getDiskStatus

2020-10-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15633?focusedWorklogId=501358&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-501358
 ]

ASF GitHub Bot logged work on HDFS-15633:
-

Author: ASF GitHub Bot
Created on: 16/Oct/20 01:55
Start Date: 16/Oct/20 01:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2386:
URL: https://github.com/apache/hadoop/pull/2386#issuecomment-709047739


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  44m 27s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |  27m 59s |  |  branch has errors when building 
and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |  33m  9s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   3m 21s | 
[/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/2/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-client in trunk failed.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 11s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-client in the patch failed.  |
   | -1 :x: |  compile  |   0m 18s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  hadoop-hdfs-client in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |   0m 18s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  hadoop-hdfs-client in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 11s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  hadoop-hdfs-client in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |   0m 11s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  hadoop-hdfs-client in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m  6s | 
[/buildtool-patch-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/2/artifact/out/buildtool-patch-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  The patch fails to run checkstyle in hadoop-hdfs-client  |
   | -1 :x: |  mvnsite  |   0m  9s | 

[jira] [Updated] (HDFS-15618) Improve datanode shutdown latency

2020-10-15 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HDFS-15618:
-
Attachment: HDFS-15618.003.patch

> Improve datanode shutdown latency
> -
>
> Key: HDFS-15618
> URL: https://issues.apache.org/jira/browse/HDFS-15618
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HDFS-15618.001.patch, HDFS-15618.002.patch, 
> HDFS-15618.003.patch
>
>
> Datanode shutdown has very high latency: the block scanner waits up to 5 
> minutes to join each VolumeScanner thread.
> Since the scanners are daemon threads and do not alter the block content, it 
> is safe to skip this wait on Datanode shutdown.
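
As a rough sketch of the idea, assuming the datanode's
{{BlockScanner}}/{{VolumeScanner}} structure (the method name, the
{{scanners}} map, and the timeout constant are illustrative), shutdown could
interrupt every scanner first and then join each with a short bound instead
of waiting minutes per thread:

{code:java}
  // Illustrative bound; a real patch would likely make this configurable.
  private static final long JOIN_TIMEOUT_MSECS = 5_000L;

  public synchronized void removeAllVolumeScanners()
      throws InterruptedException {
    // Signal all scanners before joining any, so the waits overlap.
    for (VolumeScanner scanner : scanners.values()) {
      scanner.shutdown();           // sets the stopping flag and interrupts
    }
    for (VolumeScanner scanner : scanners.values()) {
      // Bounded wait: daemon scanners never modify block data, so it is
      // safe to proceed with shutdown even if one has not exited yet.
      scanner.join(JOIN_TIMEOUT_MSECS);
    }
    scanners.clear();
  }
{code}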



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15634) Invalidate block on decommissioning DataNode after replication

2020-10-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15634?focusedWorklogId=501332&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-501332
 ]

ASF GitHub Bot logged work on HDFS-15634:
-

Author: ASF GitHub Bot
Created on: 15/Oct/20 23:51
Start Date: 15/Oct/20 23:51
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #2388:
URL: https://github.com/apache/hadoop/pull/2388#discussion_r505926806



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
##
@@ -3512,7 +3512,11 @@ private Block addStoredBlock(final BlockInfo block,
 int numUsableReplicas = num.liveReplicas() +
 num.decommissioning() + num.liveEnteringMaintenanceReplicas();
 
-if(storedBlock.getBlockUCState() == BlockUCState.COMMITTED &&
+
+// if block is still under construction, then done for now
+if (!storedBlock.isCompleteOrCommitted()) {

Review comment:
   Why do we move this block here?
   BTW we can leave it as a single if with a return.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
##
@@ -3559,9 +3558,26 @@ private Block addStoredBlock(final BlockInfo block,
 if ((corruptReplicasCount > 0) && (numLiveReplicas >= fileRedundancy)) {
   invalidateCorruptReplicas(storedBlock, reportedBlock, num);
 }
+if (shouldInvalidateDecommissionedRedundancy(num, fileRedundancy)) {
+  for (DatanodeStorageInfo storage : blocksMap.getStorages(block)) {
+final DatanodeDescriptor datanode = storage.getDatanodeDescriptor();
+if (datanode.isDecommissioned()
+|| datanode.isDecommissionInProgress()) {
+  addToInvalidates(storedBlock, datanode);
+}
+  }
+}
 return storedBlock;
   }
 
+  // If there are enough live replicas, start invalidating
+  // decommissioned + decommissioning replicas
+  private boolean shouldInvalidateDecommissionedRedundancy(NumberReplicas num,

Review comment:
   It makes sense. Maybe we should bring some of the JIRA description into 
this method's comment to explain what we are doing at a high level.
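
For context, a hedged sketch of what such a predicate could look like, going
by the proposal's wording (only touch decommissioned/decommissioning replicas
once live replicas alone satisfy redundancy); this illustrates the intent and
is not necessarily the body of the method in the PR:

{code:java}
  // Start invalidating replicas on (de)commissioning nodes only once live
  // replicas alone already meet the file's target redundancy.
  private boolean shouldInvalidateDecommissionedRedundancy(
      NumberReplicas num, int fileRedundancy) {
    return num.liveReplicas() >= fileRedundancy
        && num.decommissioned() + num.decommissioning() > 0;
  }
{code}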





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 501332)
Time Spent: 20m  (was: 10m)

> Invalidate block on decommissioning DataNode after replication
> --
>
> Key: HDFS-15634
> URL: https://issues.apache.org/jira/browse/HDFS-15634
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Right now, when a DataNode starts decommissioning, the Namenode marks it as 
> decommissioning and its blocks are replicated over to different DataNodes; 
> the node is then marked as decommissioned. These blocks are not touched, 
> since they are not counted as live replicas.
> Proposal: Invalidate these blocks once they are replicated and there are 
> enough live replicas in the cluster.
> Reason: A recent shutdown of decommissioned datanodes to finish the flow 
> caused a Namenode latency spike, since the namenode needs to remove all of 
> the blocks from its memory, and this step requires holding the write lock. 
> If we had gradually invalidated these blocks, the deletion would be much 
> easier and faster.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15634) Invalidate block on decommissioning DataNode after replication

2020-10-15 Thread Fengnan Li (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17215060#comment-17215060
 ] 

Fengnan Li commented on HDFS-15634:
---

[~elgoiri] Thanks for the quick feedback. I have included a WIP PR.

> Invalidate block on decommissioning DataNode after replication
> --
>
> Key: HDFS-15634
> URL: https://issues.apache.org/jira/browse/HDFS-15634
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now, when a DataNode starts decommissioning, the Namenode marks it as 
> decommissioning and its blocks are replicated over to different DataNodes; 
> the node is then marked as decommissioned. These blocks are not touched, 
> since they are not counted as live replicas.
> Proposal: Invalidate these blocks once they are replicated and there are 
> enough live replicas in the cluster.
> Reason: A recent shutdown of decommissioned datanodes to finish the flow 
> caused a Namenode latency spike, since the namenode needs to remove all of 
> the blocks from its memory, and this step requires holding the write lock. 
> If we had gradually invalidated these blocks, the deletion would be much 
> easier and faster.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15634) Invalidate block on decommissioning DataNode after replication

2020-10-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15634:
--
Labels: pull-request-available  (was: )

> Invalidate block on decommissioning DataNode after replication
> --
>
> Key: HDFS-15634
> URL: https://issues.apache.org/jira/browse/HDFS-15634
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now, when a DataNode starts decommissioning, the Namenode marks it as 
> decommissioning and its blocks are replicated over to different DataNodes; 
> the node is then marked as decommissioned. These blocks are not touched, 
> since they are not counted as live replicas.
> Proposal: Invalidate these blocks once they are replicated and there are 
> enough live replicas in the cluster.
> Reason: A recent shutdown of decommissioned datanodes to finish the flow 
> caused a Namenode latency spike, since the namenode needs to remove all of 
> the blocks from its memory, and this step requires holding the write lock. 
> If we had gradually invalidated these blocks, the deletion would be much 
> easier and faster.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15634) Invalidate block on decommissioning DataNode after replication

2020-10-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15634?focusedWorklogId=501325&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-501325
 ]

ASF GitHub Bot logged work on HDFS-15634:
-

Author: ASF GitHub Bot
Created on: 15/Oct/20 22:45
Start Date: 15/Oct/20 22:45
Worklog Time Spent: 10m 
  Work Description: fengnanli opened a new pull request #2388:
URL: https://github.com/apache/hadoop/pull/2388


   … Datanodes
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 501325)
Remaining Estimate: 0h
Time Spent: 10m

> Invalidate block on decommissioning DataNode after replication
> --
>
> Key: HDFS-15634
> URL: https://issues.apache.org/jira/browse/HDFS-15634
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now, when a DataNode starts decommissioning, the Namenode marks it as 
> decommissioning and its blocks are replicated over to different DataNodes; 
> the node is then marked as decommissioned. These blocks are not touched, 
> since they are not counted as live replicas.
> Proposal: Invalidate these blocks once they are replicated and there are 
> enough live replicas in the cluster.
> Reason: A recent shutdown of decommissioned datanodes to finish the flow 
> caused a Namenode latency spike, since the namenode needs to remove all of 
> the blocks from its memory, and this step requires holding the write lock. 
> If we had gradually invalidated these blocks, the deletion would be much 
> easier and faster.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15634) Invalidate block on decommissioning DataNode after replication

2020-10-15 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17214965#comment-17214965
 ] 

Íñigo Goiri commented on HDFS-15634:


The proposal makes sense.
The only issue is that, at the scale where this would matter, it may have some 
weird side effects.
Do you have a WIP patch to see what this would look like?


> Invalidate block on decommissioning DataNode after replication
> --
>
> Key: HDFS-15634
> URL: https://issues.apache.org/jira/browse/HDFS-15634
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>
> Right now, when a DataNode starts decommissioning, the Namenode marks it as 
> decommissioning and its blocks are replicated over to different DataNodes; 
> the node is then marked as decommissioned. These blocks are not touched, 
> since they are not counted as live replicas.
> Proposal: Invalidate these blocks once they are replicated and there are 
> enough live replicas in the cluster.
> Reason: A recent shutdown of decommissioned datanodes to finish the flow 
> caused a Namenode latency spike, since the namenode needs to remove all of 
> the blocks from its memory, and this step requires holding the write lock. 
> If we had gradually invalidated these blocks, the deletion would be much 
> easier and faster.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15634) Invalidate block on decommissioning DataNode after replication

2020-10-15 Thread Fengnan Li (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17214923#comment-17214923
 ] 

Fengnan Li commented on HDFS-15634:
---

[~ayushtkn] [~inigoiri] [~weichiu] Can you share your thoughts?

> Invalidate block on decommissioning DataNode after replication
> --
>
> Key: HDFS-15634
> URL: https://issues.apache.org/jira/browse/HDFS-15634
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
>
> Right now, when a DataNode starts decommissioning, the Namenode marks it as 
> decommissioning and its blocks are replicated over to different DataNodes; 
> the node is then marked as decommissioned. These blocks are not touched, 
> since they are not counted as live replicas.
> Proposal: Invalidate these blocks once they are replicated and there are 
> enough live replicas in the cluster.
> Reason: A recent shutdown of decommissioned datanodes to finish the flow 
> caused a Namenode latency spike, since the namenode needs to remove all of 
> the blocks from its memory, and this step requires holding the write lock. 
> If we had gradually invalidated these blocks, the deletion would be much 
> easier and faster.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15634) Invalidate block on decommissioning DataNode after replication

2020-10-15 Thread Fengnan Li (Jira)
Fengnan Li created HDFS-15634:
-

 Summary: Invalidate block on decommissioning DataNode after 
replication
 Key: HDFS-15634
 URL: https://issues.apache.org/jira/browse/HDFS-15634
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: Fengnan Li
Assignee: Fengnan Li


Right now, when a DataNode starts decommissioning, the Namenode will mark it as 
decommissioning and its blocks will be replicated over to different DataNodes; 
the node is then marked as decommissioned. These blocks are not touched since 
they are not counted as live replicas.

Proposal: Invalidate these blocks once they are replicated and there are enough 
live replicas in the cluster.

Reason: A recent shutdown of decommissioned datanodes to finish the flow caused 
a Namenode latency spike, since the namenode needs to remove all of the blocks 
from its memory and this step requires holding the write lock. If we had 
gradually invalidated these blocks, the deletion would be much easier and 
faster.
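
For illustration, a minimal sketch of the proposed flow, with illustrative 
types and names (blockId, invalidationQueue) rather than real BlockManager 
APIs:

{code:java}
import java.util.List;

// Sketch only: once a block on a decommissioning node has enough live
// replicas elsewhere, queue its local replica for deletion right away,
// instead of removing everything when the node is finally shut down.
class GradualInvalidation {
  static void maybeInvalidate(String blockId, int liveReplicas,
                              int replicationFactor,
                              List<String> invalidationQueue) {
    if (liveReplicas >= replicationFactor) {
      // Gradual invalidation spreads the deletion work out, so the
      // namenode never has to drop all blocks under one long write lock.
      invalidationQueue.add(blockId);
    }
  }
}
{code}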



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15630) RBF: Fix wrong client IP info in CallerContext when requests mount points with multi-destinations.

2020-10-15 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214816#comment-17214816
 ] 

Íñigo Goiri commented on HDFS-15630:


[~smarthan], something like that.
Before, the caller context was null and now it has a value; I just want to add 
an additional check so we don't leave the behavior unspecified.

> RBF: Fix wrong client IP info in CallerContext when requests mount points 
> with multi-destinations.
> --
>
> Key: HDFS-15630
> URL: https://issues.apache.org/jira/browse/HDFS-15630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Chengwei Wang
>Assignee: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15630.001.patch, HDFS-15630.002.patch
>
>
> There are two issues about client IP info in CallerContext when we try to 
> request mount points with multi-destinations.
>  # the clientIp would duplicate in CallerContext when 
> RouterRpcClient#invokeSequential.
>  # the clientIp would miss in CallerContext when 
> RouterRpcClient#invokeConcurrent. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15459) TestBlockTokenWithDFSStriped fails intermittently

2020-10-15 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214798#comment-17214798
 ] 

Íñigo Goiri commented on HDFS-15459:


I'm only slightly familiar with EC, so I cannot tell whether this is critical.
For the fix itself, the other option would be for isBlockTokenExpired() to 
handle the null properly too, as in the sketch below.
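
A minimal sketch of that alternative, assuming the helper is a fragment of the 
test class and keeps delegating to the existing SecurityTestUtil check (the 
surrounding imports are not shown):

{code:java}
// Sketch only: tolerate a null block or token instead of raising the NPE.
protected boolean isBlockTokenExpired(LocatedBlock lb) throws IOException {
  if (lb == null || lb.getBlockToken() == null) {
    return false; // nothing to check yet; avoids the NullPointerException
  }
  return SecurityTestUtil.isBlockTokenExpired(lb.getBlockToken());
}
{code}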

> TestBlockTokenWithDFSStriped fails intermittently
> -
>
> Key: HDFS-15459
> URL: https://issues.apache.org/jira/browse/HDFS-15459
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: test
> Attachments: HDFS-15459.001.patch, 
> TestBlockTokenWithDFSStriped.testRead.log
>
>
> {{TestBlockTokenWithDFSStriped}} fails intermittently on trunk with a NPE. I 
> have intuition that this failure is caused by another Unit tests timing out.
> {code:bash}
> [ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 94.448 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
> [ERROR] 
> testRead(org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped)
>   Time elapsed: 9.455 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS.isBlockTokenExpired(TestBlockTokenWithDFS.java:633)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped.isBlockTokenExpired(TestBlockTokenWithDFSStriped.java:139)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS.doTestRead(TestBlockTokenWithDFS.java:508)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped.testRead(TestBlockTokenWithDFSStriped.java:92)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15459) TestBlockTokenWithDFSStriped fails intermittently

2020-10-15 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214789#comment-17214789
 ] 

Ahmed Hussein commented on HDFS-15459:
--

[~weichiu], [~inigoiri] Can you please take a look at that small patch to fix 
the broken test?
The failed units are not related to the patch.
If there is something wrong in the way SDFSStriped is parsed, then we should 
file a different Jira. 

> TestBlockTokenWithDFSStriped fails intermittently
> -
>
> Key: HDFS-15459
> URL: https://issues.apache.org/jira/browse/HDFS-15459
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: test
> Attachments: HDFS-15459.001.patch, 
> TestBlockTokenWithDFSStriped.testRead.log
>
>
> {{TestBlockTokenWithDFSStriped}} fails intermittently on trunk with a NPE. I 
> have intuition that this failure is caused by another Unit tests timing out.
> {code:bash}
> [ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 94.448 s <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
> [ERROR] 
> testRead(org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped)
>   Time elapsed: 9.455 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS.isBlockTokenExpired(TestBlockTokenWithDFS.java:633)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped.isBlockTokenExpired(TestBlockTokenWithDFSStriped.java:139)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS.doTestRead(TestBlockTokenWithDFS.java:508)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped.testRead(TestBlockTokenWithDFSStriped.java:92)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15633) Avoid redundant RPC calls for getDiskStatus

2020-10-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15633?focusedWorklogId=501117=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-501117
 ]

ASF GitHub Bot logged work on HDFS-15633:
-

Author: ASF GitHub Bot
Created on: 15/Oct/20 14:56
Start Date: 15/Oct/20 14:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2386:
URL: https://github.com/apache/hadoop/pull/2386#issuecomment-709382121


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 33s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 31s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 36s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 12s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  86m  3s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2386 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2d68d6897453 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e45407128d4 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/7/testReport/ |
   | Max. process+thread count | 307 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 

[jira] [Commented] (HDFS-15618) Improve datanode shutdown latency

2020-10-15 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214754#comment-17214754
 ] 

Ahmed Hussein commented on HDFS-15618:
--

I will try to set it to a smaller value. The last thing I tried was 500ms, and 
that caused some JUnits to fail.
Since the production configuration can be set to a small value (5 seconds), I 
will try to investigate all the affected JUnits to see if something can be done 
to fix that.

> Improve datanode shutdown latency
> -
>
> Key: HDFS-15618
> URL: https://issues.apache.org/jira/browse/HDFS-15618
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HDFS-15618.001.patch, HDFS-15618.002.patch
>
>
> The shutdown of a Datanode has very long latency: the block scanner waits up 
> to 5 minutes to join each VolumeScanner thread.
> Since the scanners are daemon threads and do not alter the block content, it 
> is safe to skip waiting for them on shutdown of the Datanode.
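
A minimal sketch of the idea, not the actual BlockScanner code; the class and 
method names and the timeout constant are illustrative assumptions:

{code:java}
// Sketch only: daemon scanner threads cannot keep the JVM alive, so a
// short, bounded join on shutdown is enough.
class ScannerShutdown {
  private static final long JOIN_TIMEOUT_MS = 1000;

  static void shutdownScanners(Iterable<Thread> volumeScanners)
      throws InterruptedException {
    for (Thread scanner : volumeScanners) {
      scanner.interrupt();           // ask each scanner to stop
    }
    for (Thread scanner : volumeScanners) {
      scanner.join(JOIN_TIMEOUT_MS); // wait briefly, then move on
    }
  }
}
{code}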



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15624) Fix the SetQuotaByStorageTypeOp problem after updating hadoop

2020-10-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15624?focusedWorklogId=501093=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-501093
 ]

ASF GitHub Bot logged work on HDFS-15624:
-

Author: ASF GitHub Bot
Created on: 15/Oct/20 13:25
Start Date: 15/Oct/20 13:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#issuecomment-709322999


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   2m  3s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   6m 21s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  26m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  23m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  27m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 32s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   8m 25s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 39s |  |  the patch passed  |
   | -1 :x: |  compile  |   9m 23s | 
[/patch-compile-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/6/artifact/out/patch-compile-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |   9m 23s | 
[/patch-compile-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/6/artifact/out/patch-compile-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 10s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/6/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |   0m 10s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/6/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  checkstyle  |   3m 35s |  |  root: The patch generated 
0 new + 734 unchanged - 1 fixed = 734 total (was 735)  |
   | +1 :green_heart: |  mvnsite  |   3m 14s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/6/artifact/out/whitespace-eol.txt)
 |  The patch has 1 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  15m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   6m 39s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 55s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 128m 45s | 

[jira] [Commented] (HDFS-15631) RBF: dfsadmin -report multiple capacity and used info

2020-10-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214683#comment-17214683
 ] 

Hadoop QA commented on HDFS-15631:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
17s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:red}-1{color} | {color:red} @author {color} | {color:red}  0m  
0s{color} | 
[/author-tags.txt|https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/234/artifact/out/author-tags.txt]
 | {color:red} The patch appears to contain 1 @author tags which the community 
has agreed to not allow in code contributions. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch appears to include 2 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
17s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 43s{color} |  | {color:green} branch has no errors when building and 
testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
36s{color} |  | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} |  | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} blanks {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch has no blanks issues. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | 
[/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt|https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/234/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt]
 | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 6 
new + 5 unchanged - 0 fixed = 11 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 25s{color} |  | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} |  | {color:green} the patch passed with JDK Private 

[jira] [Work logged] (HDFS-15633) Avoid redundant RPC calls for getDiskStatus

2020-10-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15633?focusedWorklogId=501088=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-501088
 ]

ASF GitHub Bot logged work on HDFS-15633:
-

Author: ASF GitHub Bot
Created on: 15/Oct/20 12:50
Start Date: 15/Oct/20 12:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2386:
URL: https://github.com/apache/hadoop/pull/2386#issuecomment-709300808


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   3m  3s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 23s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/6/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 24s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/6/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  hadoop-hdfs-client in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   4m 27s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/6/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  hadoop-hdfs-client in trunk failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   0m 55s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/6/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-client in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |  21m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |  23m  6s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   0m 42s | 
[/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/6/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-client in trunk failed.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 41s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/6/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-client in the patch failed.  |
   | -1 :x: |  compile  |   0m 47s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/6/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  hadoop-hdfs-client in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |   0m 47s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/6/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  hadoop-hdfs-client in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 41s | 

[jira] [Updated] (HDFS-15631) RBF: dfsadmin -report multiple capacity and used info

2020-10-15 Thread Shen Yinjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HDFS-15631:
---
Status: Patch Available  (was: Open)

> RBF: dfsadmin -report  multiple capacity and used info
> --
>
> Key: HDFS-15631
> URL: https://issues.apache.org/jira/browse/HDFS-15631
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.0.1
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: HDFS-15631_1.patch
>
>
> When RBF is enabled and we execute `hdfs dfsadmin -report` against an RBF ns, 
> the returned capacity is a multiple of the number of nameservices. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15631) RBF: dfsadmin -report multiple capacity and used info

2020-10-15 Thread Shen Yinjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HDFS-15631:
---
Attachment: HDFS-15631_1.patch

> RBF: dfsadmin -report  multiple capacity and used info
> --
>
> Key: HDFS-15631
> URL: https://issues.apache.org/jira/browse/HDFS-15631
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.0.1
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Attachments: HDFS-15631_1.patch
>
>
> When RBF is enabled and we execute `hdfs dfsadmin -report` against an RBF ns, 
> the returned capacity is a multiple of the number of nameservices. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15289) Allow viewfs mounts with HDFS/HCFS scheme and centralized mount table

2020-10-15 Thread Junfan Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214628#comment-17214628
 ] 

Junfan Zhang edited comment on HDFS-15289 at 10/15/20, 11:50 AM:
-

Hi [~umamaheswararao], thanks for posting this. We have also implemented a 
similar file system and have already applied it online to solve the problems of 
hybrid-cloud architecture and cluster data migration. What I want to know is: 
Does {{ViewFSOverloadScheme}} support the mounting of different cluster paths?


was (Author: zuston):
Hi [~umamaheswararao], thanks for your post. We have also implemented a similar 
file system and have already applied it online to solve the problems of 
hybrid-cloud architecture and cluster data migration. What I want to know is: 
Does {{ViewFSOverloadScheme}} support the mounting of different cluster paths?

> Allow viewfs mounts with HDFS/HCFS scheme and centralized mount table
> -
>
> Key: HDFS-15289
> URL: https://issues.apache.org/jira/browse/HDFS-15289
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Attachments: ViewFSOverloadScheme - V1.0.pdf, ViewFSOverloadScheme.png
>
>
> ViewFS provides flexibility to mount different filesystem types with a mount 
> points configuration table. This approach solves the scalability problems, 
> but users need to reconfigure the filesystem to ViewFS and to its scheme. 
> This will be problematic in the case of paths persisted in meta stores, ex: 
> Hive. Systems like Hive store URIs in the meta store, so changing the file 
> system scheme creates a burden to upgrade/recreate meta stores. In our 
> experience many users are not ready to change that.
> Router based federation is another implementation to provide coordinated 
> mount points for HDFS federation clusters. Even though this provides 
> flexibility to handle mount points easily, it does not allow other 
> (non-HDFS) file systems to be mounted. So, it does not serve the purpose 
> when users want to mount external (non-HDFS) filesystems.
> So, the problem here is: even though many users want to adopt the scalable 
> fs options available, the technical challenges of changing schemes (ex: in 
> meta stores) in deployments are obstructing them. 
> So, we propose to allow the hdfs scheme in a ViewFS-like client-side mount 
> system and let users create mount links without changing URI paths. 
> I will upload a detailed design doc shortly.
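
For context, a hedged illustration of the existing ViewFS mount-table mechanism 
the proposal builds on; the table name "clusterX" and the target URIs are made 
up for this example:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ViewFsMountExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Each link maps a client-visible path to a target filesystem URI.
    conf.set("fs.viewfs.mounttable.clusterX.link./data",
        "hdfs://nn1:8020/data");
    conf.set("fs.viewfs.mounttable.clusterX.link./backup",
        "s3a://bucket/backup");
    // The proposal would let clients keep their hdfs:// URIs while such
    // links are resolved centrally, instead of switching to viewfs://.
  }
}
{code}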



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15289) Allow viewfs mounts with HDFS/HCFS scheme and centralized mount table

2020-10-15 Thread Junfan Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214628#comment-17214628
 ] 

Junfan Zhang edited comment on HDFS-15289 at 10/15/20, 11:47 AM:
-

Hi [~umamaheswararao], thanks for your post. We have also implemented a similar 
file system and have already applied it online to solve the problems of 
hybrid-cloud architecture and cluster data migration. What I want to know is: 
Does {{ViewFSOverloadScheme}} support the mounting of different cluster paths?


was (Author: zuston):
Hi [~umamaheswararao], thanks for your post. We have also implemented a similar 
file system and have already applied it online to solve the problems of 
multi-cloud architecture and cluster migration. What I want to know is: Does 
{{ViewFSOverloadScheme}} support the mounting of different cluster paths?

> Allow viewfs mounts with HDFS/HCFS scheme and centralized mount table
> -
>
> Key: HDFS-15289
> URL: https://issues.apache.org/jira/browse/HDFS-15289
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Attachments: ViewFSOverloadScheme - V1.0.pdf, ViewFSOverloadScheme.png
>
>
> ViewFS provides flexibility to mount different filesystem types with a mount 
> points configuration table. This approach solves the scalability problems, 
> but users need to reconfigure the filesystem to ViewFS and to its scheme. 
> This will be problematic in the case of paths persisted in meta stores, ex: 
> Hive. Systems like Hive store URIs in the meta store, so changing the file 
> system scheme creates a burden to upgrade/recreate meta stores. In our 
> experience many users are not ready to change that.
> Router based federation is another implementation to provide coordinated 
> mount points for HDFS federation clusters. Even though this provides 
> flexibility to handle mount points easily, it does not allow other 
> (non-HDFS) file systems to be mounted. So, it does not serve the purpose 
> when users want to mount external (non-HDFS) filesystems.
> So, the problem here is: even though many users want to adopt the scalable 
> fs options available, the technical challenges of changing schemes (ex: in 
> meta stores) in deployments are obstructing them. 
> So, we propose to allow the hdfs scheme in a ViewFS-like client-side mount 
> system and let users create mount links without changing URI paths. 
> I will upload a detailed design doc shortly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15289) Allow viewfs mounts with HDFS/HCFS scheme and centralized mount table

2020-10-15 Thread Junfan Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214628#comment-17214628
 ] 

Junfan Zhang commented on HDFS-15289:
-

Hi [~umamaheswararao], thanks for your post. We have also implemented a similar 
file system and have already applied it online to solve the problems of 
multi-cloud architecture and cluster migration. What I want to know is: Does 
{{ViewFSOverloadScheme}} support the mounting of different cluster paths?

> Allow viewfs mounts with HDFS/HCFS scheme and centralized mount table
> -
>
> Key: HDFS-15289
> URL: https://issues.apache.org/jira/browse/HDFS-15289
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Attachments: ViewFSOverloadScheme - V1.0.pdf, ViewFSOverloadScheme.png
>
>
> ViewFS provides flexibility to mount different filesystem types with a mount 
> points configuration table. This approach solves the scalability problems, 
> but users need to reconfigure the filesystem to ViewFS and to its scheme. 
> This will be problematic in the case of paths persisted in meta stores, ex: 
> Hive. Systems like Hive store URIs in the meta store, so changing the file 
> system scheme creates a burden to upgrade/recreate meta stores. In our 
> experience many users are not ready to change that.
> Router based federation is another implementation to provide coordinated 
> mount points for HDFS federation clusters. Even though this provides 
> flexibility to handle mount points easily, it does not allow other 
> (non-HDFS) file systems to be mounted. So, it does not serve the purpose 
> when users want to mount external (non-HDFS) filesystems.
> So, the problem here is: even though many users want to adopt the scalable 
> fs options available, the technical challenges of changing schemes (ex: in 
> meta stores) in deployments are obstructing them. 
> So, we propose to allow the hdfs scheme in a ViewFS-like client-side mount 
> system and let users create mount links without changing URI paths. 
> I will upload a detailed design doc shortly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15633) Avoid redundant RPC calls for getDiskStatus

2020-10-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15633?focusedWorklogId=501073=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-501073
 ]

ASF GitHub Bot logged work on HDFS-15633:
-

Author: ASF GitHub Bot
Created on: 15/Oct/20 11:31
Start Date: 15/Oct/20 11:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2386:
URL: https://github.com/apache/hadoop/pull/2386#issuecomment-70939


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 11s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  4s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 59s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | -1 :x: |  shadedclient  |  25m  6s |  |  patch has errors when building 
and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 27s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/4/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  hadoop-hdfs-client in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 27s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/4/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  hadoop-hdfs-client in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  findbugs  |   0m 26s | 
[/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/4/artifact/out/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-client in the patch failed.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   0m 28s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-client in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 28s |  |  ASF License check generated no 
output?  |
   |  |   |  96m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/4/artifact/out/Dockerfile
 |
   | GITHUB PR | 

[jira] [Commented] (HDFS-14383) Compute datanode load based on StoragePolicy

2020-10-15 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214617#comment-17214617
 ] 

Ayush Saxena commented on HDFS-14383:
-

Thanx [~elgoiri] for the review.
Have handled the comments in v2.
The checkstyle warning is not from new code; it was already there. The test 
failures are not related, they are due to {{OOM}}.

> Compute datanode load based on StoragePolicy
> 
>
> Key: HDFS-14383
> URL: https://issues.apache.org/jira/browse/HDFS-14383
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.7.3, 3.1.2
>Reporter: Karthik Palanisamy
>Priority: Major
> Attachments: HDFS-14383-01.patch, HDFS-14383-02.patch
>
>
> Datanode load check logic needs to be changed because the existing 
> computation does not consider StoragePolicy.
> DatanodeManager#getInServiceXceiverAverage
> {code}
> public double getInServiceXceiverAverage() {
>   double avgLoad = 0;
>   final int nodes = getNumDatanodesInService();
>   if (nodes != 0) {
>     final int xceivers = heartbeatManager
>         .getInServiceXceiverCount();
>     avgLoad = (double) xceivers / nodes;
>   }
>   return avgLoad;
> }
> {code}
>  
> For example: with 10 nodes (HOT) averaging 50 xceivers and 90 nodes (COLD) 
> averaging 10 xceivers, the threshold calculated by the NN is 28 (((500 + 
> 900)/100)*2), which means those 10 nodes (the whole HOT tier) become 
> unavailable while the COLD tier nodes are barely in use. Turning this check 
> off helps to mitigate the issue; however, since 
> dfs.namenode.replication.considerLoad helps to "balance" the load of the DNs, 
> turning it off can lead to situations where specific DNs become "overloaded".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14383) Compute datanode load based on StoragePolicy

2020-10-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214612#comment-17214612
 ] 

Hadoop QA commented on HDFS-14383:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
35s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch does not contain any @author tags. 
{color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch appears to include 2 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 34m 
14s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  8s{color} |  | {color:green} branch has no errors when building and 
testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m  
4s{color} |  | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
2s{color} |  | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} |  | {color:green} the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} blanks {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch has no blanks issues. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | 
[/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt|https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/233/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt]
 | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 
576 unchanged - 1 fixed = 577 total (was 577) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} |  | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} |  | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  4s{color} |  | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} |  | {color:green} the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} |  | {color:green} the patch passed with JDK Private 

[jira] [Commented] (HDFS-15630) RBF: Fix wrong client IP info in CallerContext when requests mount points with multi-destinations.

2020-10-15 Thread Chengwei Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214577#comment-17214577
 ] 

Chengwei Wang commented on HDFS-15630:
--

[~elgoiri]  Thanks for the review. 
{quote}To avoid churn, let's keep the old method signature at the beginning and 
add the new one afterwards.
 Let's also have a javadoc for both; in the old one just add a comment saying 
we take the context from the server.
{quote}
I will add a javadoc to the old method and adjust its position. 
{quote}For the test in TestRouterRpc, can we actually check that is not null 
and preferably something more specific?
{quote}
Did you mean that we should check the specific value of the CallerContext in 
TestRouterRpc#testMkdirsWithCallerContext(), like this:
{code:java}
String expectContext = "callerContext=clientContext,clientIp:"
    + InetAddress.getLocalHost().getHostAddress();
// Assert the caller context is correct at the client side
assertEquals(expectContext, CallerContext.getCurrent().getContext());

// Assert the caller context is transferred to the server side correctly
for (String line : auditlog.getOutput().split("\n")) {
  if (line.contains("src=" + dirPath)) {
    assertTrue(line.trim().endsWith(expectContext));
  }
}
{code}

> RBF: Fix wrong client IP info in CallerContext when requests mount points 
> with multi-destinations.
> --
>
> Key: HDFS-15630
> URL: https://issues.apache.org/jira/browse/HDFS-15630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Chengwei Wang
>Assignee: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15630.001.patch, HDFS-15630.002.patch
>
>
> There are two issues about client IP info in CallerContext when we try to 
> request mount points with multi-destinations.
>  # the clientIp would duplicate in CallerContext when 
> RouterRpcClient#invokeSequential.
>  # the clientIp would miss in CallerContext when 
> RouterRpcClient#invokeConcurrent. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15633) Avoid redundant RPC calls for getDiskStatus

2020-10-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15633?focusedWorklogId=501040=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-501040
 ]

ASF GitHub Bot logged work on HDFS-15633:
-

Author: ASF GitHub Bot
Created on: 15/Oct/20 09:42
Start Date: 15/Oct/20 09:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2386:
URL: https://github.com/apache/hadoop/pull/2386#issuecomment-709047739


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  44m 27s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |  27m 59s |  |  branch has errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |  33m  9s |  |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   3m 21s | [/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/2/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt) |  hadoop-hdfs-client in trunk failed.  |
   |||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 11s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt) |  hadoop-hdfs-client in the patch failed.  |
   | -1 :x: |  compile  |   0m 18s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt) |  hadoop-hdfs-client in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |   0m 18s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt) |  hadoop-hdfs-client in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 11s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt) |  hadoop-hdfs-client in the patch failed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |   0m 11s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt) |  hadoop-hdfs-client in the patch failed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m  6s | [/buildtool-patch-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2386/2/artifact/out/buildtool-patch-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt) |  The patch fails to run checkstyle in hadoop-hdfs-client  |
   | -1 :x: |  mvnsite  |   0m  9s | 

[jira] [Updated] (HDFS-15191) EOF when reading legacy buffer in BlockTokenIdentifier

2020-10-15 Thread Pierre Villard (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Villard updated HDFS-15191:
--
Fix Version/s: (was: 3.3.1)
   3.3.0

> EOF when reading legacy buffer in BlockTokenIdentifier
> --
>
> Key: HDFS-15191
> URL: https://issues.apache.org/jira/browse/HDFS-15191
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: Steven Rand
>Assignee: Steven Rand
>Priority: Major
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HDFS-15191-001.patch, HDFS-15191-002.patch, 
> HDFS-15191.003.patch, HDFS-15191.004.patch
>
>
> We have an HDFS client application which recently upgraded from 3.2.0 to 
> 3.2.1. After this upgrade (but not before), we sometimes see these errors 
> when this application is used with clusters still running Hadoop 2.x (more 
> specifically CDH 5.12.1):
> {code}
> WARN  [2020-02-24T00:54:32.856Z] 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory: I/O error constructing 
> remote block reader. (_sampled: true)
> java.io.EOFException:
> at java.io.DataInputStream.readByte(DataInputStream.java:272)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFieldsLegacy(BlockTokenIdentifier.java:240)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFields(BlockTokenIdentifier.java:221)
> at 
> org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:200)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:530)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:342)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:276)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:227)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:170)
> at 
> org.apache.hadoop.hdfs.DFSUtilClient.peerFromSocketAndKey(DFSUtilClient.java:730)
> at 
> org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2942)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:822)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:747)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:380)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:575)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:757)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:829)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2314)
> at org.apache.commons.io.IOUtils.copy(IOUtils.java:2270)
> at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2291)
> at org.apache.commons.io.IOUtils.copy(IOUtils.java:2246)
> at org.apache.commons.io.IOUtils.toByteArray(IOUtils.java:765)
> {code}
> We get this warning for all DataNodes with a copy of the block, so the read 
> fails.
> I haven't been able to figure out what changed between 3.2.0 and 3.2.1 to 
> cause this, but HDFS-13617 and HDFS-14611 seem related, so tagging 
> [~vagarychen] in case you have any ideas.
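>
> For reference, the receiving side has to cope with both token layouts; a
> rough sketch of the try-one-format-then-fall-back pattern (simplified, the
> shipped code may differ in ordering and error handling) looks like:
> {code:java}
> @Override
> public void readFields(DataInput in) throws IOException {
>   final DataInputStream dis = (DataInputStream) in;
>   // Remember the buffer start so we can rewind if the first parse fails.
>   dis.mark(dis.available());
>   try {
>     readFieldsLegacy(dis);      // old Writable layout
>   } catch (IOException e) {
>     dis.reset();                // rewind and retry
>     readFieldsProtobuf(dis);    // newer protobuf layout
>   }
> }
> {code}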



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15191) EOF when reading legacy buffer in BlockTokenIdentifier

2020-10-15 Thread Pierre Villard (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214526#comment-17214526
 ] 

Pierre Villard commented on HDFS-15191:
---

Updated the fix versions as this is in 3.3.0:

https://github.com/apache/hadoop/commit/f531a4a487c9133bce20d08e09da4d4a35bff13d

> EOF when reading legacy buffer in BlockTokenIdentifier
> --
>
> Key: HDFS-15191
> URL: https://issues.apache.org/jira/browse/HDFS-15191
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: Steven Rand
>Assignee: Steven Rand
>Priority: Major
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HDFS-15191-001.patch, HDFS-15191-002.patch, 
> HDFS-15191.003.patch, HDFS-15191.004.patch
>
>
> We have an HDFS client application which recently upgraded from 3.2.0 to 
> 3.2.1. After this upgrade (but not before), we sometimes see these errors 
> when this application is used with clusters still running Hadoop 2.x (more 
> specifically CDH 5.12.1):
> {code}
> WARN  [2020-02-24T00:54:32.856Z] 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory: I/O error constructing 
> remote block reader. (_sampled: true)
> java.io.EOFException:
> at java.io.DataInputStream.readByte(DataInputStream.java:272)
> at 
> org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
> at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFieldsLegacy(BlockTokenIdentifier.java:240)
> at 
> org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFields(BlockTokenIdentifier.java:221)
> at 
> org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:200)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:530)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:342)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:276)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:227)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:170)
> at 
> org.apache.hadoop.hdfs.DFSUtilClient.peerFromSocketAndKey(DFSUtilClient.java:730)
> at 
> org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2942)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:822)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:747)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:380)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:575)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:757)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:829)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2314)
> at org.apache.commons.io.IOUtils.copy(IOUtils.java:2270)
> at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2291)
> at org.apache.commons.io.IOUtils.copy(IOUtils.java:2246)
> at org.apache.commons.io.IOUtils.toByteArray(IOUtils.java:765)
> {code}
> We get this warning for all DataNodes with a copy of the block, so the read 
> fails.
> I haven't been able to figure out what changed between 3.2.0 and 3.2.1 to 
> cause this, but HDFS-13617 and HDFS-14611 seem related, so tagging 
> [~vagarychen] in case you have any ideas.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14383) Compute datanode load based on StoragePolicy

2020-10-15 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14383:

Attachment: HDFS-14383-02.patch

> Compute datanode load based on StoragePolicy
> 
>
> Key: HDFS-14383
> URL: https://issues.apache.org/jira/browse/HDFS-14383
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.7.3, 3.1.2
>Reporter: Karthik Palanisamy
>Priority: Major
> Attachments: HDFS-14383-01.patch, HDFS-14383-02.patch
>
>
> The datanode load check logic needs to be changed because the existing
> computation does not consider StoragePolicy.
> DatanodeManager#getInServiceXceiverAverage
> {code}
> public double getInServiceXceiverAverage() {
>   double avgLoad = 0;
>   final int nodes = getNumDatanodesInService();
>   if (nodes != 0) {
>     final int xceivers = heartbeatManager
>         .getInServiceXceiverCount();
>     avgLoad = (double) xceivers / nodes;
>   }
>   return avgLoad;
> }
> {code}
>  
> For example: with 10 nodes (HOT) averaging 50 xceivers and 90 nodes (COLD)
> averaging 10 xceivers, the threshold calculated by the NN is 28 (((500 +
> 900)/100)*2), which means those 10 nodes (the whole HOT tier) become
> unavailable while the COLD tier nodes are barely in use. Turning this check
> off helps to mitigate the issue; however, dfs.namenode.replication.considerLoad
> helps to "balance" the load of the DNs, so turning it off can lead to
> situations where specific DNs are "overloaded".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15633) Avoid redundant RPC calls for getDiskStatus

2020-10-15 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15633:

Status: Patch Available  (was: Open)

> Avoid redundant RPC calls for getDiskStatus
> ---
>
> Key: HDFS-15633
> URL: https://issues.apache.org/jira/browse/HDFS-15633
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are 3 RPC calls to fetch the same values :
> {code:java}
>   public FsStatus getDiskStatus() throws IOException {
>     return new FsStatus(getStateByIndex(0),
>         getStateByIndex(1), getStateByIndex(2));
>   }
> {code}
> {{getStateByIndex()}} is called thrice, and each call is actually a
> {{getStats}} RPC to the namenode. The same could have been achieved with just
> one call.
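>
> A minimal sketch of the one-call variant (simplified; the committed patch may
> differ): fetch the stats array with a single {{getStats}} RPC and build the
> {{FsStatus}} from it. By convention the first three entries of the returned
> array are capacity, used, and remaining.
> {code:java}
> public FsStatus getDiskStatus() throws IOException {
>   long[] states = namenode.getStats();  // one RPC instead of three
>   return new FsStatus(states[0], states[1], states[2]);
> }
> {code}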



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15631) RBF: dfsadmin -report multiple capacity and used info

2020-10-15 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214478#comment-17214478
 ] 

Ayush Saxena commented on HDFS-15631:
-

Go ahead, feel free to shoot a patch. Please include tests for both cases:
where DNs are shared and where they aren't.

> RBF: dfsadmin -report  multiple capacity and used info
> --
>
> Key: HDFS-15631
> URL: https://issues.apache.org/jira/browse/HDFS-15631
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.0.1
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
>
> When RBF is enabled and we execute `hdfs dfsadmin -report` against an RBF
> nameservice, the returned capacity is a multiple of the number of
> nameservices.
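>
> One plausible direction (illustration only, not the committed fix) is to
> de-duplicate the datanode reports gathered from all nameservices by datanode
> UUID before aggregating, so DNs shared between nameservices are counted once:
> {code:java}
> // Hypothetical aggregation helper; names are assumptions.
> private long aggregateCapacity(List<DatanodeInfo> reportsFromAllNss) {
>   Map<String, DatanodeInfo> uniqueDns = new HashMap<>();
>   for (DatanodeInfo dn : reportsFromAllNss) {
>     uniqueDns.putIfAbsent(dn.getDatanodeUuid(), dn);
>   }
>   long capacity = 0;
>   for (DatanodeInfo dn : uniqueDns.values()) {
>     capacity += dn.getCapacity();
>   }
>   return capacity;
> }
> {code}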



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15631) RBF: dfsadmin -report multiple capacity and used info

2020-10-15 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15631:

Summary: RBF: dfsadmin -report  multiple capacity and used info  (was: 
dfsadmin -report with RBF returns multiple capacity and used info)

> RBF: dfsadmin -report  multiple capacity and used info
> --
>
> Key: HDFS-15631
> URL: https://issues.apache.org/jira/browse/HDFS-15631
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.0.1
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
>
> When RBF is enabled and we execute `hdfs dfsadmin -report` against an RBF
> nameservice, the returned capacity is a multiple of the number of
> nameservices.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14383) Compute datanode load based on StoragePolicy

2020-10-15 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214464#comment-17214464
 ] 

Íñigo Goiri commented on HDFS-14383:


Minor comments:
* When initializing the {{considerLoadByStorageType}} parameter, we should
keep {{conf.getBoolean(}} on the first line (see the sketch below). We should
also fix the comma.
* We should put the code to get {{inServiceXceiverCount}} in a function with a
comment explaining the principle.
* Javadoc for {{getInServiceXceiverAverageByStorageType()}}.
* Javadoc for {{getStorageTypeStats()}}.

Overall the unit test is pretty good at describing the behavior too.
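
For the first bullet, the intended formatting is roughly this (the constant
names below are placeholders, not necessarily the real config keys):
{code:java}
this.considerLoadByStorageType = conf.getBoolean(
    DFS_NAMENODE_REDUNDANCY_CONSIDERLOAD_BY_STORAGETYPE_KEY,
    DFS_NAMENODE_REDUNDANCY_CONSIDERLOAD_BY_STORAGETYPE_DEFAULT);
{code}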

> Compute datanode load based on StoragePolicy
> 
>
> Key: HDFS-14383
> URL: https://issues.apache.org/jira/browse/HDFS-14383
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.7.3, 3.1.2
>Reporter: Karthik Palanisamy
>Priority: Major
> Attachments: HDFS-14383-01.patch
>
>
> The datanode load check logic needs to be changed because the existing
> computation does not consider StoragePolicy.
> DatanodeManager#getInServiceXceiverAverage
> {code}
> public double getInServiceXceiverAverage() {
>   double avgLoad = 0;
>   final int nodes = getNumDatanodesInService();
>   if (nodes != 0) {
>     final int xceivers = heartbeatManager
>         .getInServiceXceiverCount();
>     avgLoad = (double) xceivers / nodes;
>   }
>   return avgLoad;
> }
> {code}
>  
> For example: with 10 nodes (HOT) averaging 50 xceivers and 90 nodes (COLD)
> averaging 10 xceivers, the threshold calculated by the NN is 28 (((500 +
> 900)/100)*2), which means those 10 nodes (the whole HOT tier) become
> unavailable while the COLD tier nodes are barely in use. Turning this check
> off helps to mitigate the issue; however, dfs.namenode.replication.considerLoad
> helps to "balance" the load of the DNs, so turning it off can lead to
> situations where specific DNs are "overloaded".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15630) RBF: Fix wrong client IP info in CallerContext when requests mount points with multi-destinations.

2020-10-15 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214462#comment-17214462
 ] 

Íñigo Goiri commented on HDFS-15630:


To avoid churn, let's keep the old method signature at the beginning and add
the new one afterwards.
Let's also have a javadoc for both; in the old one just add a comment saying we
take the context from the server.

For the test in TestRouterRpc, can we actually check that it is not null and
preferably something more specific?

> RBF: Fix wrong client IP info in CallerContext when requests mount points 
> with multi-destinations.
> --
>
> Key: HDFS-15630
> URL: https://issues.apache.org/jira/browse/HDFS-15630
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Chengwei Wang
>Assignee: Chengwei Wang
>Priority: Major
> Attachments: HDFS-15630.001.patch, HDFS-15630.002.patch
>
>
> There are two issues with the client IP info in the CallerContext when we
> request mount points with multiple destinations:
>  # the clientIp is duplicated in the CallerContext when
> RouterRpcClient#invokeSequential is used.
>  # the clientIp is missing from the CallerContext when
> RouterRpcClient#invokeConcurrent is used.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org