[jira] [Updated] (HDFS-13165) [SPS]: Collects successfully moved block details via IBR

2018-04-28 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-13165:

Attachment: HDFS-13165-HDFS-10285-10.patch

> [SPS]: Collects successfully moved block details via IBR
> 
>
> Key: HDFS-13165
> URL: https://issues.apache.org/jira/browse/HDFS-13165
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Major
> Fix For: HDFS-10285
>
> Attachments: HDFS-13165-HDFS-10285-00.patch, 
> HDFS-13165-HDFS-10285-01.patch, HDFS-13165-HDFS-10285-02.patch, 
> HDFS-13165-HDFS-10285-03.patch, HDFS-13165-HDFS-10285-04.patch, 
> HDFS-13165-HDFS-10285-05.patch, HDFS-13165-HDFS-10285-06.patch, 
> HDFS-13165-HDFS-10285-07.patch, HDFS-13165-HDFS-10285-08.patch, 
> HDFS-13165-HDFS-10285-09.patch, HDFS-13165-HDFS-10285-10.patch
>
>
> This task is to make use of the existing IBR to get moved-block details and 
> remove the unwanted future-tracking logic that exists in the 
> BlockStorageMovementTracker code; it is no longer needed, since file-level 
> tracking is maintained at the NN itself.
> Following comments taken from HDFS-10285, 
> [here|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16347472&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16347472]
> Comment-3)
> {quote}BPServiceActor
> Is it actually sending back the moved blocks? Aren’t IBRs sufficient?{quote}
> Comment-21)
> {quote}
> BlockStorageMovementTracker
> Many data structures are riddled with non-threadsafe race conditions and risk 
> of CMEs.
> Ex. The moverTaskFutures map. Adding new blocks and/or adding to a block's 
> list of futures is synchronized. However the run loop does an unsynchronized 
> block get, unsynchronized future remove, unsynchronized isEmpty, possibly 
> another unsynchronized get, only then does it do a synchronized remove of the 
> block. The whole chunk of code should be synchronized.
> Is the problematic moverTaskFutures even needed? It's aggregating futures 
> per-block for seemingly no reason. Why track all the futures at all instead 
> of just relying on the completion service? As best I can tell:
> It's only used to determine if a future from the completion service should be 
> ignored during shutdown. Shutdown sets the running boolean to false and 
> clears the entire datastructure so why not use the running boolean like a 
> check just a little further down?
> It uses synchronization to sleep up to 2 seconds before performing a blocking 
> moverCompletionService.take, but only when it thinks there are no active 
> futures. I'll ignore the missed notify race that the bounded wait masks, but 
> the real question is why not just do the blocking take?
> Why all the complexity? Am I missing something?
> BlocksMovementsStatusHandler
> Suffers the same type of thread-safety issues as StoragePolicySatisfyWorker. Ex. 
> blockIdVsMovementStatus is inconsistently synchronized. It does synchronize to 
> return an unmodifiable list which sadly does nothing to protect the caller 
> from CME.
> handle is iterating over a non-thread safe list.
> {quote}
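The refactor suggested in Comment-21 (drop the shared per-block future map; rely on the completion service's blocking take() and a volatile running flag) can be sketched roughly as below. This is a simplified, hypothetical illustration, not the actual HDFS-13165 patch; the class and method names are invented:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

// Hypothetical simplified tracker: no shared moverTaskFutures map, so there
// is no unsynchronized get/remove/isEmpty sequence to race on. Completed
// moves are consumed via a single blocking take(); shutdown is signalled
// with a volatile flag checked after the take() returns.
class MoverResultTracker {
  private final CompletionService<Long> moverCompletionService;
  private volatile boolean running = true;

  MoverResultTracker(ExecutorService pool) {
    this.moverCompletionService = new ExecutorCompletionService<>(pool);
  }

  /** Submit a block-move task that returns the moved block id. */
  public void submit(Callable<Long> moveTask) {
    moverCompletionService.submit(moveTask);
  }

  /** Signal shutdown; late completions are simply ignored. */
  public void stop() {
    running = false;
  }

  /**
   * Block until the next move finishes; return its block id, or null on
   * shutdown or task failure. No timed wait, no shared map lookups.
   */
  public Long awaitNextMoved() throws InterruptedException {
    Future<Long> done = moverCompletionService.take(); // blocking take
    if (!running) {
      return null; // shutting down: discard the late result
    }
    try {
      return done.get();
    } catch (ExecutionException e) {
      return null; // individual move failed; caller may log and continue
    }
  }
}
```

With file-level tracking at the NN (and moved blocks reported via IBR), the worker only needs this kind of fire-and-consume loop rather than per-block future bookkeeping.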



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13165) [SPS]: Collects successfully moved block details via IBR

2018-04-28 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-13165:

  Resolution: Fixed
   Fix Version/s: HDFS-10285
Target Version/s:   (was: HDFS-10285)
  Status: Resolved  (was: Patch Available)

Thank you [~surendrasingh] and [~daryn] for the review help. As I mentioned 
earlier, please use the HDFS-13491 jira for the datanode protocol related changes.

Committed the changes to the HDFS-10285 branch. Attaching the committed patch, 
which contains the minor checkstyle warning fix.

> [SPS]: Collects successfully moved block details via IBR
> 
>
> Key: HDFS-13165
> URL: https://issues.apache.org/jira/browse/HDFS-13165
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
>Priority: Major
> Fix For: HDFS-10285
>
> Attachments: HDFS-13165-HDFS-10285-00.patch, 
> HDFS-13165-HDFS-10285-01.patch, HDFS-13165-HDFS-10285-02.patch, 
> HDFS-13165-HDFS-10285-03.patch, HDFS-13165-HDFS-10285-04.patch, 
> HDFS-13165-HDFS-10285-05.patch, HDFS-13165-HDFS-10285-06.patch, 
> HDFS-13165-HDFS-10285-07.patch, HDFS-13165-HDFS-10285-08.patch, 
> HDFS-13165-HDFS-10285-09.patch
>
>
> This task is to make use of the existing IBR to get moved-block details and 
> remove the unwanted future-tracking logic that exists in the 
> BlockStorageMovementTracker code; it is no longer needed, since file-level 
> tracking is maintained at the NN itself.
> Following comments taken from HDFS-10285, 
> [here|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16347472&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16347472]
> Comment-3)
> {quote}BPServiceActor
> Is it actually sending back the moved blocks? Aren’t IBRs sufficient?{quote}
> Comment-21)
> {quote}
> BlockStorageMovementTracker
> Many data structures are riddled with non-threadsafe race conditions and risk 
> of CMEs.
> Ex. The moverTaskFutures map. Adding new blocks and/or adding to a block's 
> list of futures is synchronized. However the run loop does an unsynchronized 
> block get, unsynchronized future remove, unsynchronized isEmpty, possibly 
> another unsynchronized get, only then does it do a synchronized remove of the 
> block. The whole chunk of code should be synchronized.
> Is the problematic moverTaskFutures even needed? It's aggregating futures 
> per-block for seemingly no reason. Why track all the futures at all instead 
> of just relying on the completion service? As best I can tell:
> It's only used to determine if a future from the completion service should be 
> ignored during shutdown. Shutdown sets the running boolean to false and 
> clears the entire datastructure so why not use the running boolean like a 
> check just a little further down?
> It uses synchronization to sleep up to 2 seconds before performing a blocking 
> moverCompletionService.take, but only when it thinks there are no active 
> futures. I'll ignore the missed notify race that the bounded wait masks, but 
> the real question is why not just do the blocking take?
> Why all the complexity? Am I missing something?
> BlocksMovementsStatusHandler
> Suffers the same type of thread-safety issues as StoragePolicySatisfyWorker. Ex. 
> blockIdVsMovementStatus is inconsistently synchronized. It does synchronize to 
> return an unmodifiable list which sadly does nothing to protect the caller 
> from CME.
> handle is iterating over a non-thread safe list.
> {quote}






[jira] [Commented] (HDFS-13515) NetUtils#connect should log remote address for NoRouteToHostException

2018-04-28 Thread lqjack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457881#comment-16457881
 ] 

lqjack commented on HDFS-13515:
---

https://github.com/apache/hadoop/pull/371

> NetUtils#connect should log remote address for NoRouteToHostException
> -
>
> Key: HDFS-13515
> URL: https://issues.apache.org/jira/browse/HDFS-13515
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> hdfs.BlockReaderFactory: I/O error constructing remote block reader.
> java.net.NoRouteToHostException: No route to host
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2884)
> {code}
> In the above stack trace, the remote host was not logged.
> This makes troubleshooting a bit hard.
> NetUtils#connect should log remote address for NoRouteToHostException .
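The proposed improvement amounts to wrapping the exception with the remote endpoint before rethrowing, so the failing host appears in the stack trace. A minimal sketch; the helper class and method names below are invented for illustration, not the actual NetUtils code:

```java
import java.net.InetSocketAddress;
import java.net.NoRouteToHostException;

// Illustrative sketch of the requested fix: when connect() catches a
// NoRouteToHostException, rethrow it with the remote endpoint in the
// message. Helper name is hypothetical.
final class ConnectErrorWrapper {
  private ConnectErrorWrapper() {}

  static NoRouteToHostException withEndpoint(
      NoRouteToHostException cause, InetSocketAddress endpoint) {
    // Include host:port so "No route to host" identifies the peer.
    NoRouteToHostException wrapped = new NoRouteToHostException(
        "No route to host: " + endpoint.getHostString() + ":" + endpoint.getPort());
    wrapped.initCause(cause); // preserve the original stack trace
    return wrapped;
  }
}
```

With this kind of wrapping, the log line would carry the unreachable address instead of only the bare "No route to host" message.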






[jira] [Commented] (HDFS-13515) NetUtils#connect should log remote address for NoRouteToHostException

2018-04-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457880#comment-16457880
 ] 

ASF GitHub Bot commented on HDFS-13515:
---

GitHub user lqjack opened a pull request:

https://github.com/apache/hadoop/pull/371

HDFS-13515

provide the info when connect

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lqjack/hadoop HDFS-13515

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/371.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #371


commit 3a5575ac6d06e1f02b26cede0939f67335d36c8b
Author: lqjaclee 
Date:   2018-04-29T01:37:14Z

HDFS-13515

provide the info when connect




> NetUtils#connect should log remote address for NoRouteToHostException
> -
>
> Key: HDFS-13515
> URL: https://issues.apache.org/jira/browse/HDFS-13515
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> hdfs.BlockReaderFactory: I/O error constructing remote block reader.
> java.net.NoRouteToHostException: No route to host
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2884)
> {code}
> In the above stack trace, the remote host was not logged.
> This makes troubleshooting a bit hard.
> NetUtils#connect should log remote address for NoRouteToHostException .






[jira] [Commented] (HDFS-13488) RBF: Reject requests when a Router is overloaded

2018-04-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457869#comment-16457869
 ] 

genericqa commented on HDFS-13488:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs-rbf generated 0 new 
+ 0 unchanged - 4 fixed = 0 total (was 4) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
58s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13488 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921164/HDFS-13488.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 83c940fe60e8 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / eb7fe1d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24106/testReport/ |
| Max. process+thread count | 970 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24106/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT |

[jira] [Commented] (HDFS-13488) RBF: Reject requests when a Router is overloaded

2018-04-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457855#comment-16457855
 ] 

Íñigo Goiri commented on HDFS-13488:


Thanks [~linyiqun] for the comments.
Tackled most of them in  [^HDFS-13488.004.patch].
The one missing would be the one for testOverloadControl; I am not sure how to 
handle that, since only submitting 4 requests seems like skipping the test.
I increased the time between requests a little.

> RBF: Reject requests when a Router is overloaded
> 
>
> Key: HDFS-13488
> URL: https://issues.apache.org/jira/browse/HDFS-13488
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13488.000.patch, HDFS-13488.001.patch, 
> HDFS-13488.002.patch, HDFS-13488.003.patch, HDFS-13488.004.patch
>
>
> A Router might be overloaded when handling special cases (e.g. a slow 
> subcluster). The Router could reject the requests and the client could try 
> with another Router. We should leverage the Standby mechanism for this. 
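The rejection can be pictured as an in-flight request counter that refuses work above a threshold; in Hadoop the rejection would surface as a StandbyException so the existing failover retry logic sends the client to another Router. A hypothetical sketch (a plain exception stands in for Hadoop's StandbyException, and the class name is invented):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of overload rejection for a Router. Each RPC calls enter()
// before processing and exit() after; above maxInFlight, enter() throws and
// the client is expected to fail over to another Router.
class OverloadGuard {
  static class RouterOverloadedException extends Exception {
    RouterOverloadedException(String msg) { super(msg); }
  }

  private final int maxInFlight;
  private final AtomicInteger inFlight = new AtomicInteger();

  OverloadGuard(int maxInFlight) {
    this.maxInFlight = maxInFlight;
  }

  /** Admit a request, or reject it when the Router is overloaded. */
  public void enter() throws RouterOverloadedException {
    if (inFlight.incrementAndGet() > maxInFlight) {
      inFlight.decrementAndGet(); // roll back the optimistic increment
      throw new RouterOverloadedException(
          "Router overloaded (" + maxInFlight
              + " requests in flight); retry against another Router");
    }
  }

  /** Release a slot when the request completes. */
  public void exit() {
    inFlight.decrementAndGet();
  }
}
```

Surfacing this as a StandbyException is what lets unmodified clients reuse the NN failover retry path instead of needing new client-side logic.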






[jira] [Updated] (HDFS-13488) RBF: Reject requests when a Router is overloaded

2018-04-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13488:
---
Attachment: HDFS-13488.004.patch

> RBF: Reject requests when a Router is overloaded
> 
>
> Key: HDFS-13488
> URL: https://issues.apache.org/jira/browse/HDFS-13488
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13488.000.patch, HDFS-13488.001.patch, 
> HDFS-13488.002.patch, HDFS-13488.003.patch, HDFS-13488.004.patch
>
>
> A Router might be overloaded when handling special cases (e.g. a slow 
> subcluster). The Router could reject the requests and the client could try 
> with another Router. We should leverage the Standby mechanism for this. 






[jira] [Commented] (HDFS-13486) Backport HDFS-11817 to branch-2.7

2018-04-28 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457841#comment-16457841
 ] 

Wei-Chiu Chuang commented on HDFS-13486:


TestFSImage#testCompression failure is fixable by HDFS-12156.

TestLazyPersistFile#testLazyPersistBlocksAreSaved failure is fixable by 
HDFS-9067.

 

 

> Backport HDFS-11817 to branch-2.7
> -
>
> Key: HDFS-13486
> URL: https://issues.apache.org/jira/browse/HDFS-13486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-11817.branch-2.7.001.patch
>
>
> HDFS-11817 is a good fix to have in branch-2.7.
> I'm taking a stab at it now.






[jira] [Commented] (HDFS-12156) TestFSImage fails without -Pnative

2018-04-28 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457831#comment-16457831
 ] 

Wei-Chiu Chuang commented on HDFS-12156:


This bug affects branch-2.7 as well.

> TestFSImage fails without -Pnative
> --
>
> Key: HDFS-12156
> URL: https://issues.apache.org/jira/browse/HDFS-12156
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 3.0.3
>
> Attachments: HDFS-12156.01.patch, HDFS-12156.02.patch
>
>
> TestFSImage#testCompression tests LZ4 codec and it fails when native library 
> is not available.






[jira] [Commented] (HDFS-13486) Backport HDFS-11817 to branch-2.7

2018-04-28 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457825#comment-16457825
 ] 

Wei-Chiu Chuang commented on HDFS-13486:


Looks like there are two tests that fail consistently, *even before* this patch 
 in branch-2.7.

I'll double check and file jiras accordingly.

> Backport HDFS-11817 to branch-2.7
> -
>
> Key: HDFS-13486
> URL: https://issues.apache.org/jira/browse/HDFS-13486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-11817.branch-2.7.001.patch
>
>
> HDFS-11817 is a good fix to have in branch-2.7.
> I'm taking a stab at it now.






[jira] [Updated] (HDFS-13486) Backport HDFS-11817 to branch-2.7

2018-04-28 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13486:
---
Target Version/s: 2.7.7

> Backport HDFS-11817 to branch-2.7
> -
>
> Key: HDFS-13486
> URL: https://issues.apache.org/jira/browse/HDFS-13486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HDFS-11817.branch-2.7.001.patch
>
>
> HDFS-11817 is a good fix to have in branch-2.7.
> I'm taking a stab at it now.






[jira] [Updated] (HDFS-13434) RBF: Fix dead links in RBF document

2018-04-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13434:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks [~chetna] for the fix and [~ajisakaa] for reporting.
Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.

> RBF: Fix dead links in RBF document
> ---
>
> Key: HDFS-13434
> URL: https://issues.apache.org/jira/browse/HDFS-13434
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Chetna Chaudhari
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-13434.patch
>
>
> There are many dead links in 
> [http://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html].
>  Let's fix them.






[jira] [Created] (HDFS-13515) NetUtils#connect should log remote address for NoRouteToHostException

2018-04-28 Thread Ted Yu (JIRA)
Ted Yu created HDFS-13515:
-

 Summary: NetUtils#connect should log remote address for 
NoRouteToHostException
 Key: HDFS-13515
 URL: https://issues.apache.org/jira/browse/HDFS-13515
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ted Yu


{code}
hdfs.BlockReaderFactory: I/O error constructing remote block reader.
java.net.NoRouteToHostException: No route to host
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2884)
{code}
In the above stack trace, the remote host was not logged.
This makes troubleshooting a bit hard.

NetUtils#connect should log remote address for NoRouteToHostException .






[jira] [Commented] (HDFS-13501) Secure Datanode stop/start from cli does not throw a valid error if HDFS_DATANODE_SECURE_USER is not set

2018-04-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457763#comment-16457763
 ] 

Allen Wittenauer commented on HDFS-13501:
-

Some important background:

One of my key goals with the rewrite was to reduce the amount of stuff that 
printed to the screen. With a few exceptions, output broke down into three 
buckets:

* stdout: vitally important information that the user either requested or can't 
act on but needs to know
* stderr: vitally important information that the user has an action they must 
take
* --debug: non-vital information that is only interesting when debugging

As a result, there are lots of places where branch-2 has output that 3.x+ does 
not.  There's not a whole lot in the bash code where 'stdout' is appropriate. 
On the flip side, there is a lot more 'stderr' output because of significantly 
better error handling. 

That said...

The missing pid file is the case that caused me the most trouble.  It's an 
error in a logical-program sense, but what is the user 
action?  If the daemon is still running, but the pid file is missing, then 
something likely catastrophic happened, including a very screwed up directory 
structure/config or multiple invocations of the --daemon flag.  Both of those 
are things that are really beyond the bash code to fix. Then there is the 
opposite situation:

{code}
$ hdfs --daemon stop namenode
$ hdfs --daemon stop namenode
{code}

The daemon isn't running, and so the pid file should be gone.  Is that an error 
worth disturbing the user?  Also, how common is that?  (Morgan Freeman voice: 
It is very common.)  Then there is the old ops habit of running ps even after 
issuing stop commands because no one trusts the system...

By comparison, branch-2 does

{code}
 echo no $command to stop
{code}

... which is mostly useless but does confirm the thinking that a missing pid 
file is primarily interpreted as "daemon is already down; no action required."

OK, fine. All of that was a bit of a dead end.  So then I thought about it from 
"what is the pid file anyway?".  Ultimately it's a file system lock for the 
bash code.  Nothing else that ships with Hadoop cares about it.  And with the 
introduction of '--daemon status,' there isn't much of a reason for anything 
else to be looking at them either. That mostly makes them private.

In the end, I opted to not print a message at all because I couldn't answer the 
"action" question.  There isn't anything for a user to do when the pid file is 
missing.  

FWIW: this also highlights the problem of what to do with the exit status.  
IIRC, it currently exits with 0 when the pid file isn't found because again, it 
is assumed that the daemon was stopped successfully the same as branch-2.  In 
one sense that feels wrong, but I felt it was better to stay compatible in this 
instance.

> Secure Datanode stop/start from cli does not throw a valid error if 
> HDFS_DATANODE_SECURE_USER is not set
> 
>
> Key: HDFS-13501
> URL: https://issues.apache.org/jira/browse/HDFS-13501
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> Secure Datanode start/stop from the CLI does not throw a valid error if 
> HADOOP_SECURE_DN_USER/HDFS_DATANODE_SECURE_USER is not set. If 
> HDFS_DATANODE_SECURE_USER and JSVC_HOME are not set, start/stop is expected to 
> fail (when privileged ports are used), but it should show some valid message.






[jira] [Commented] (HDFS-13489) Get base snapshotable path if exists for a given path

2018-04-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457686#comment-16457686
 ] 

genericqa commented on HDFS-13489:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
35s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m  
7s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}219m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Commented] (HDFS-13509) Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix TestFileAppend failures on Windows

2018-04-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457683#comment-16457683
 ] 

Hudson commented on HDFS-13509:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14090 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14090/])
HDFS-13509. Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, 
(inigoiri: rev eb7fe1d588de903be2ff6e20384c25c184881532)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/LocalReplica.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java


> Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix 
> TestFileAppend failures on Windows
> 
>
> Key: HDFS-13509
> URL: https://issues.apache.org/jira/browse/HDFS-13509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13509-branch-2.000.patch, HDFS-13509.000.patch, 
> HDFS-13509.001.patch, HDFS-13509.002.patch
>
>
> breakHardlinks() of ReplicaInfo (branch-2) / LocalReplica (trunk) replaces the 
> file while the source is still open as an input stream, which fails and throws 
> an exception on Windows. This is the cause of the 
> org.apache.hadoop.hdfs.TestFileAppend#testBreakHardlinksIfNeeded unit test 
> failure on Windows.
> Other test cases of TestFileAppend fail randomly on Windows because they share 
> the same test folder; the fix is to use a randomized base dir for 
> MiniDFSCluster via HDFS-13408
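The safe pattern behind this fix can be illustrated with a minimal, self-contained Java sketch. This is not the actual LocalReplica code; the class and helper names are hypothetical. The key point is that the source stream must be closed before the file is replaced, because Windows refuses to replace a file that still has an open handle:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class BreakHardlinkSketch {

    // Copy the replica's contents to a temp file and make sure the source
    // stream is closed *before* the original file is replaced. Windows
    // refuses to replace a file that still has an open handle, which is
    // why replacing it while the InputStream is open fails there.
    static void breakHardlink(Path file) throws IOException {
        Path tmp = file.resolveSibling(file.getFileName() + ".tmp");
        try (InputStream in = Files.newInputStream(file)) {
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
        } // the handle is closed here, so the move below also works on Windows
        Files.move(tmp, file, StandardCopyOption.REPLACE_EXISTING);
    }

    // Demonstration helper: write content, break the hardlink, read it back.
    static String roundTrip(String content) {
        try {
            Path f = Files.createTempFile("replica", ".dat");
            Files.writeString(f, content);
            breakHardlink(f);
            String back = Files.readString(f);
            Files.delete(f);
            return back;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("block-data")); // prints "block-data"
    }
}
```

The try-with-resources block is what makes the ordering explicit; the content survives the replace unchanged.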



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13509) Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix TestFileAppend failures on Windows

2018-04-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13509:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~surmountian] for the patch.
Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.

> Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix 
> TestFileAppend failures on Windows
> 
>
> Key: HDFS-13509
> URL: https://issues.apache.org/jira/browse/HDFS-13509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13509-branch-2.000.patch, HDFS-13509.000.patch, 
> HDFS-13509.001.patch, HDFS-13509.002.patch
>
>
> breakHardlinks() of ReplicaInfo (branch-2) / LocalReplica (trunk) replaces the 
> file while the source is still open as an input stream, which fails and throws 
> an exception on Windows. This is the cause of the 
> org.apache.hadoop.hdfs.TestFileAppend#testBreakHardlinksIfNeeded unit test 
> failure on Windows.
> Other test cases of TestFileAppend fail randomly on Windows because they share 
> the same test folder; the fix is to use a randomized base dir for 
> MiniDFSCluster via HDFS-13408






[jira] [Commented] (HDFS-13509) Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix TestFileAppend failures on Windows

2018-04-28 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457673#comment-16457673
 ] 

Íñigo Goiri commented on HDFS-13509:


The failed unit tests are unrelated.
+1 on  [^HDFS-13509.002.patch] and  [^HDFS-13509-branch-2.000.patch].
Committing these two.

> Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix 
> TestFileAppend failures on Windows
> 
>
> Key: HDFS-13509
> URL: https://issues.apache.org/jira/browse/HDFS-13509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4
>
> Attachments: HDFS-13509-branch-2.000.patch, HDFS-13509.000.patch, 
> HDFS-13509.001.patch, HDFS-13509.002.patch
>
>
> breakHardlinks() of ReplicaInfo (branch-2) / LocalReplica (trunk) replaces the 
> file while the source is still open as an input stream, which fails and throws 
> an exception on Windows. This is the cause of the 
> org.apache.hadoop.hdfs.TestFileAppend#testBreakHardlinksIfNeeded unit test 
> failure on Windows.
> Other test cases of TestFileAppend fail randomly on Windows because they share 
> the same test folder; the fix is to use a randomized base dir for 
> MiniDFSCluster via HDFS-13408






[jira] [Comment Edited] (HDFS-13512) WebHdfs getHdfsFileStatus/getFileStatus doesn't return ecPolicy name wired in json

2018-04-28 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457625#comment-16457625
 ] 

Arpit Agarwal edited comment on HDFS-13512 at 4/28/18 2:25 PM:
---

Thanks for the patch [~ajayydv]. Could you please add a couple of test cases 
just for HdfsFileStatus to verify that:
# If policy is present but ecPolicyName is null, then non-null name is returned.
# If policy is absent, and ecPolicyName is present, then non-null name is 
returned.
# If both are null, then null is returned.



> WebHdfs getHdfsFileStatus/getFileStatus doesn't return ecPolicy name wired in 
> json
> --
>
> Key: HDFS-13512
> URL: https://issues.apache.org/jira/browse/HDFS-13512
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13512.00.patch
>
>
> Currently the LISTSTATUS call to WebHdfs returns JSON, and the jsonArray 
> elements do carry the ecPolicy name.
> But when WebHdfsFileSystem converts the JSON back into a FileStatus object, 
> the ecPolicy is not added, because the JSON contains only the ecPolicy name 
> and the name alone is not sufficient to decode it back into an 
> ErasureCodingPolicy object.
> While converting the JSON back to HdfsFileStatus we should set ecPolicyName 
> whenever it is set for the given file/dir.






[jira] [Commented] (HDFS-13512) WebHdfs getHdfsFileStatus/getFileStatus doesn't return ecPolicy name wired in json

2018-04-28 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457625#comment-16457625
 ] 

Arpit Agarwal commented on HDFS-13512:
--

Thanks for the patch [~ajayydv]. Could you please add a couple of test cases 
just for HdfsFileStatus to verify that:
# If policy is present but ecPolicyName is null, then non-null name is returned.
# If policy is absent, and ecPolicyName is present, then non-null name is 
returned.
# If both are null, then null is returned.
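The three cases above boil down to a simple precedence rule. Here is a minimal sketch of that rule with a hypothetical resolveName helper (not the actual HdfsFileStatus API): prefer the full policy's name, fall back to the bare ecPolicyName carried in the JSON, and return null only when both are absent.

```java
public class EcPolicyNameSketch {

    // Hypothetical stand-in for the resolution logic being tested:
    // case 1: the policy object is present -> use its name;
    // case 2: only the serialized ecPolicyName is present -> use it;
    // case 3: both absent -> null.
    static String resolveName(String policyName, String ecPolicyName) {
        if (policyName != null) {
            return policyName;   // case 1
        }
        return ecPolicyName;     // cases 2 and 3
    }

    public static void main(String[] args) {
        System.out.println(resolveName("RS-6-3-1024k", null));  // RS-6-3-1024k
        System.out.println(resolveName(null, "XOR-2-1-1024k")); // XOR-2-1-1024k
        System.out.println(resolveName(null, null));            // null
    }
}
```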

> WebHdfs getHdfsFileStatus/getFileStatus doesn't return ecPolicy name wired in 
> json
> --
>
> Key: HDFS-13512
> URL: https://issues.apache.org/jira/browse/HDFS-13512
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-13512.00.patch
>
>
> Currently the LISTSTATUS call to WebHdfs returns JSON, and the jsonArray 
> elements do carry the ecPolicy name.
> But when WebHdfsFileSystem converts the JSON back into a FileStatus object, 
> the ecPolicy is not added, because the JSON contains only the ecPolicy name 
> and the name alone is not sufficient to decode it back into an 
> ErasureCodingPolicy object.
> While converting the JSON back to HdfsFileStatus we should set ecPolicyName 
> whenever it is set for the given file/dir.






[jira] [Commented] (HDFS-13431) Ozone: Ozone Shell should use RestClient and RpcClient

2018-04-28 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457617#comment-16457617
 ] 

Nanda kumar commented on HDFS-13431:


Thanks [~ljain] for working on this. Please find my review comments below.


CreateVolumeHandler.java
Line 85 - 87: {{quota}} is an optional field; if it's not passed on the 
command line, we should not set it in {{volumeArgs}} (line 94).

GetKeyHandler.java
Line 42 & 43: Length greater than 80 characters
Line 117: If {{dataFilePath}} is {{null}} we have to print some error message 
or throw an exception.

Handler.java
Line 28: Unused import
Line 90 & 91: We don't need both {{scheme.equals("")}} and 
{{scheme.isEmpty()}}; one is enough, as both perform the same check.

KeySpaceManager.java
Line 828 - 830: This change is not required. We can get KSM's hostname through 
{{getServiceList}} call.

OzoneClientFactory.java
Line 295 - 300: Why do we need this change?

OzoneVolume.java
Line 242: The method argument can be named as {{startBucket}} instead of 
{{prevBucket}}, and in the javadoc it can be explicitly mentioned that the 
{{startBucket}} will be excluded in the result iterator.

OzoneBucket.java
Line 310: For {{prevKey}}, we can do the same thing that is suggested for 
{{prevBucket}} in {{OzoneVolume}}.

TestOzoneShell.java
Line 166 - 175: Use {{KeySpaceManager#getServiceList}} for getting hostname and 
related ports.

Can we move the methods {{OzoneVolume#asVolumeInfo}}, 
{{OzoneBucket#asBucketInfo}} and {{OzoneKey#asKeyInfo}} from the respective 
classes into a common utility class? Since this functionality is used only by 
{{ozShell}}, we don't have to expose it through 
OzoneVolume/OzoneBucket/OzoneKey.

Test cases related to oz-shell in {{acceptance-test}} (Robotframework) are 
failing; can you please take a look at that too?
We also have to add more test-cases to {{acceptance-test}} (Robotframework) to 
cover all the shell commands.

> Ozone: Ozone Shell should use RestClient and RpcClient
> --
>
> Key: HDFS-13431
> URL: https://issues.apache.org/jira/browse/HDFS-13431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDFS-13431-HDFS-7240.001.patch, 
> HDFS-13431-HDFS-7240.002.patch, HDFS-13431-HDFS-7240.003.patch, 
> HDFS-13431.001.patch, HDFS-13431.002.patch
>
>
> Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and 
> RpcClient instead of OzoneRestClient.






[jira] [Updated] (HDFS-13489) Get base snapshotable path if exists for a given path

2018-04-28 Thread Harkrishn Patro (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harkrishn Patro updated HDFS-13489:
---
Attachment: HDFS-13489.005.patch

> Get base snapshotable path if exists for a given path
> -
>
> Key: HDFS-13489
> URL: https://issues.apache.org/jira/browse/HDFS-13489
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs
>Reporter: Harkrishn Patro
>Assignee: Harkrishn Patro
>Priority: Major
> Attachments: HDFS-13489.001.patch, HDFS-13489.002.patch, 
> HDFS-13489.003.patch, HDFS-13489.004.patch, HDFS-13489.005.patch
>
>
> Currently, HDFS only lists the snapshottable paths in the filesystem. This 
> feature adds the ability to determine whether a given path is snapshottable; 
> if it is, it returns the base snapshottable path.






[jira] [Comment Edited] (HDFS-13488) RBF: Reject requests when a Router is overloaded

2018-04-28 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457493#comment-16457493
 ] 

Yiqun Lin edited comment on HDFS-13488 at 4/28/18 9:55 AM:
---

Only some comments for the UT:
 * line54. Use {{{@link RBFConfigKeys#DFS_ROUTER_CLIENT_REJECT_OVERLOAD}}} to 
replace {{DFS_ROUTER_CLIENT_REJECT_OVERLOAD}}.

 * line59. The LOG instance is obtained incorrectly.
 * line130. The test sometimes fails at this line. Stack trace from my local 
run:
{noformat}
java.lang.AssertionError: expected:<0> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloaded(TestRouterClientRejectOverload.java:231)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloaded(TestRouterClientRejectOverload.java:168)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloaded(TestRouterClientRejectOverload.java:160)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloadControl(TestRouterClientRejectOverload.java:130)
{noformat}
Even though we don't simulate a slow subcluster, there is still a chance that 
the minicluster is busy and the Router becomes overloaded. Maybe we should use 
the client thread number {{4}} as the request count: {{testOverloaded(0, 0, 
address, clientConf, 4);}}. At least then we can ensure the Router won't be 
overloaded when there are enough handler threads.

 * line140: This assertion can also fail. The failure:
{noformat}
java.lang.AssertionError: Expected <=4 but was 6
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloaded(TestRouterClientRejectOverload.java:237)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloaded(TestRouterClientRejectOverload.java:169)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloadControl(TestRouterClientRejectOverload.java:140)
{noformat}
If the minicluster runs very slowly, none of the subsequent requests (10-4=6) 
can be handled, so {{testOverloaded(4, 6);}} would be better here.

 * line172: {{submitting 10 requests}} should be updated.
 * line236: {{expOverloadMin}} should be {{expOverloadMax}}.
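The permit arithmetic behind the suggested {{testOverloaded}} parameters can be sketched as a toy model (this is not the actual Router code; the rejection here stands in for the overload/standby exception a real Router would throw):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class OverloadSketch {

    // Toy model of a Router with a fixed handler pool: a request that
    // arrives while all handler permits are taken is rejected, which in
    // the real Router would surface as an overload (standby) exception.
    static int rejected(int handlers, int concurrentRequests) {
        Semaphore pool = new Semaphore(handlers);
        AtomicInteger rejects = new AtomicInteger();
        for (int i = 0; i < concurrentRequests; i++) {
            // Permits are never released, modeling all requests in flight
            // at the same instant.
            if (!pool.tryAcquire()) {
                rejects.incrementAndGet();
            }
        }
        return rejects.get();
    }

    public static void main(String[] args) {
        // With 4 handler threads, 4 simultaneous requests are never rejected...
        System.out.println(rejected(4, 4));  // 0
        // ...while 10 simultaneous requests leave up to 10-4=6 rejected.
        System.out.println(rejected(4, 10)); // 6
    }
}
```

This is why matching the request count to the handler thread count avoids spurious overloads, and why 6 is the safe upper bound to assert for 10 requests.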



[jira] [Comment Edited] (HDFS-13488) RBF: Reject requests when a Router is overloaded

2018-04-28 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457493#comment-16457493
 ] 

Yiqun Lin edited comment on HDFS-13488 at 4/28/18 9:54 AM:
---

Only some comments for the UT:
 * line54. Use {{@link RBFConfigKeys#DFS_ROUTER_CLIENT_REJECT_OVERLOAD}} to 
replace {{DFS_ROUTER_CLIENT_REJECT_OVERLOAD}}.

 * line59. The LOG instance is obtained incorrectly.
 * line130. The test sometimes fails at this line. Stack trace from my local 
run:
{noformat}
java.lang.AssertionError: expected:<0> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloaded(TestRouterClientRejectOverload.java:231)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloaded(TestRouterClientRejectOverload.java:168)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloaded(TestRouterClientRejectOverload.java:160)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloadControl(TestRouterClientRejectOverload.java:130)
{noformat}
Even though we don't simulate a slow subcluster, there is still a chance that 
the minicluster is busy and the Router becomes overloaded. Maybe we should use 
the client thread number {{4}} as the request count: {{testOverloaded(0, 0, 
address, clientConf, 4);}}. At least then we can ensure the Router won't be 
overloaded when there are enough handler threads.

 * line140: This assertion can also fail. The failure:
{noformat}
java.lang.AssertionError: Expected <=4 but was 6
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloaded(TestRouterClientRejectOverload.java:237)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloaded(TestRouterClientRejectOverload.java:169)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloadControl(TestRouterClientRejectOverload.java:140)
{noformat}
If the minicluster runs very slowly, none of the subsequent requests (10-4=6) 
can be handled, so {{testOverloaded(4, 6);}} would be better here.

 * line172: {{submitting 10 requests}} should be updated.
* line236: {{expOverloadMin}} should be {{expOverloadMax}}.



[jira] [Commented] (HDFS-13488) RBF: Reject requests when a Router is overloaded

2018-04-28 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457493#comment-16457493
 ] 

Yiqun Lin commented on HDFS-13488:
--

Only some comments for the UT:
 * line54. Use {{{@link RBFConfigKeys#DFS_ROUTER_CLIENT_REJECT_OVERLOAD}}} to 
replace {{DFS_ROUTER_CLIENT_REJECT_OVERLOAD}}.

 * line59. The LOG instance is obtained incorrectly.
 * line130. The test sometimes fails at this line. Stack trace from my local 
run:
{noformat}
java.lang.AssertionError: expected:<0> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloaded(TestRouterClientRejectOverload.java:231)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloaded(TestRouterClientRejectOverload.java:168)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloaded(TestRouterClientRejectOverload.java:160)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloadControl(TestRouterClientRejectOverload.java:130)
{noformat}
Even though we don't simulate a slow subcluster, there is still a chance that 
the minicluster is busy and the Router becomes overloaded. Maybe we should use 
the client thread number {{4}} as the request count: {{testOverloaded(0, 0, 
address, clientConf, 4);}}. At least then we can ensure the Router won't be 
overloaded when there are enough handler threads.

 * line140: This assertion can also fail. The failure:
{noformat}
java.lang.AssertionError: Expected <=4 but was 6
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloaded(TestRouterClientRejectOverload.java:237)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloaded(TestRouterClientRejectOverload.java:169)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload.testOverloadControl(TestRouterClientRejectOverload.java:140)
{noformat}
If the minicluster runs very slowly, none of the subsequent requests (10-4=6) 
can be handled, so {{testOverloaded(4, 6);}} would be better here.

 * line172: {{submitting 10 requests}} should be updated.

> RBF: Reject requests when a Router is overloaded
> 
>
> Key: HDFS-13488
> URL: https://issues.apache.org/jira/browse/HDFS-13488
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13488.000.patch, HDFS-13488.001.patch, 
> HDFS-13488.002.patch, HDFS-13488.003.patch
>
>
> A Router might be overloaded when handling special cases (e.g. a slow 
> subcluster). The Router could reject the requests and the client could try 
> with another Router. We should leverage the Standby mechanism for this. 






[jira] [Commented] (HDFS-12136) BlockSender performance regression due to volume scanner edge case

2018-04-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457443#comment-16457443
 ] 

genericqa commented on HDFS-12136:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-12136 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12136 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877142/HDFS-12136.trunk.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24104/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BlockSender performance regression due to volume scanner edge case
> --
>
> Key: HDFS-12136
> URL: https://issues.apache.org/jira/browse/HDFS-12136
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-12136.branch-2.patch, HDFS-12136.trunk.patch
>
>
> HDFS-11160 attempted to fix a volume scan race for a file appended mid-scan 
> by reading the last checksum of finalized blocks within the {{BlockSender}} 
> ctor.  Unfortunately it holds the exclusive dataset lock while opening and 
> reading the metafile multiple times, so block sender instantiation becomes 
> serialized.
> Performance completely collapses under heavy disk i/o utilization or high 
> xceiver activity.  Ex. lost node replication, balancing, or decommissioning.  
> The xceiver threads congest creating block senders and impair the heartbeat 
> processing that is contending for the same lock.  Combined with other lock 
> contention issues, pipelines break and nodes sporadically go dead.
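The locking pattern being criticized, and the obvious alternative, can be sketched as follows. This is not the actual BlockSender code; the method names and the checksum value are hypothetical stand-ins for the metafile read.

```java
import java.util.concurrent.locks.ReentrantLock;

public class DatasetLockSketch {

    static final ReentrantLock datasetLock = new ReentrantLock();

    // Pattern described in the report: the slow metafile read happens while
    // the exclusive dataset lock is held, so every BlockSender constructor
    // queues behind it and instantiation becomes serialized.
    static long lastChecksumUnderLock() {
        datasetLock.lock();
        try {
            return readMetaFile(); // slow I/O inside the critical section
        } finally {
            datasetLock.unlock();
        }
    }

    // Alternative: take only a cheap snapshot under the lock and perform the
    // slow read outside it, so other xceiver threads (and heartbeat
    // processing) are not blocked for the duration of the I/O.
    static long lastChecksumOutsideLock() {
        datasetLock.lock();
        try {
            // cheap lookup only, e.g. resolving the metafile path
        } finally {
            datasetLock.unlock();
        }
        return readMetaFile(); // slow I/O no longer holds the dataset lock
    }

    // Stand-in for opening and reading the replica's metafile.
    static long readMetaFile() {
        return 42L; // hypothetical last-checksum value
    }

    public static void main(String[] args) {
        System.out.println(lastChecksumUnderLock());   // 42
        System.out.println(lastChecksumOutsideLock()); // 42
    }
}
```

Both variants return the same result; the difference is only how long the shared lock is held, which is exactly the contention the report describes.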






[jira] [Commented] (HDFS-12136) BlockSender performance regression due to volume scanner edge case

2018-04-28 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457439#comment-16457439
 ] 

Junping Du commented on HDFS-12136:
---

Thanks [~jojochuang] for the comments. I agree with you that the previous lock 
issue was resolved in HDFS-11187. [~daryn], I will go ahead and resolve this as 
a duplicate of HDFS-11187 if you also agree.

> BlockSender performance regression due to volume scanner edge case
> --
>
> Key: HDFS-12136
> URL: https://issues.apache.org/jira/browse/HDFS-12136
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-12136.branch-2.patch, HDFS-12136.trunk.patch
>
>
> HDFS-11160 attempted to fix a volume scan race for a file appended mid-scan 
> by reading the last checksum of finalized blocks within the {{BlockSender}} 
> ctor.  Unfortunately it holds the exclusive dataset lock while opening and 
> reading the metafile multiple times, so block sender instantiation becomes 
> serialized.
> Performance completely collapses under heavy disk i/o utilization or high 
> xceiver activity.  Ex. lost node replication, balancing, or decommissioning.  
> The xceiver threads congest creating block senders and impair the heartbeat 
> processing that is contending for the same lock.  Combined with other lock 
> contention issues, pipelines break and nodes sporadically go dead.






[jira] [Commented] (HDFS-13509) Bug fix for breakHardlinks() of ReplicaInfo/LocalReplica, and fix TestFileAppend failures on Windows

2018-04-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16457384#comment-16457384
 ] 

genericqa commented on HDFS-13509:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 23s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.TestListCorruptFileBlocks |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.checker.TestThrottledAsyncCheckerTimeout |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13509 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921095/HDFS-13509.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 32903422201a 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4844406 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24103/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test