[jira] [Commented] (HDFS-9950) TestDecommissioningStatus fails intermittently in trunk

2016-03-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15192184#comment-15192184
 ] 

Hadoop QA commented on HDFS-9950:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 10s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 39s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 146m 25s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.TestFileAppend |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12793182/HDFS-9950.001.patch |
| JIRA Issue | HDFS-9950 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b4fa6036c547 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 

[jira] [Commented] (HDFS-9405) When starting a file, NameNode should generate EDEK in a separate thread

2016-03-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15192175#comment-15192175
 ] 

Xiao Chen commented on HDFS-9405:
-

Failed tests look unrelated. May I get another review? Thanks a lot.

> When starting a file, NameNode should generate EDEK in a separate thread
> 
>
> Key: HDFS-9405
> URL: https://issues.apache.org/jira/browse/HDFS-9405
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption, namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Xiao Chen
> Attachments: HDFS-9405.01.patch, HDFS-9405.02.patch, 
> HDFS-9405.03.patch, HDFS-9405.04.patch, HDFS-9405.05.patch, HDFS-9405.06.patch
>
>
> {{generateEncryptedDataEncryptionKey}} involves a non-trivial I/O operation 
> to the key provider, which could be slow or cause timeout. It should be done 
> as a separate thread so as to return a proper error message to the RPC caller.
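The proposed change, generating the EDEK off the RPC handler thread so a slow key provider yields a timely error instead of an open-ended hang, can be sketched with plain {{java.util.concurrent}} primitives. This is an illustrative sketch only, not the actual NameNode code; {{fetchEdek}} is a hypothetical stand-in for the real key-provider I/O:

```java
import java.util.concurrent.*;

public class EdekOffloadSketch {
    // Hypothetical stand-in for the slow key-provider call.
    public static String fetchEdek(String keyName) {
        try {
            Thread.sleep(50); // simulate provider I/O latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "edek-for-" + keyName;
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> pending = pool.submit(() -> fetchEdek("myKey"));
        try {
            // Bound the wait so the caller gets a timely error instead of
            // blocking indefinitely on provider I/O.
            System.out.println(pending.get(2, TimeUnit.SECONDS));
        } catch (TimeoutException | InterruptedException | ExecutionException e) {
            pending.cancel(true);
            System.out.println("EDEK generation failed: " + e);
        } finally {
            pool.shutdown();
        }
    }
}
```

Bounding the {{Future.get}} call is what lets the RPC caller receive a proper error message when the provider is slow, which is the behavior the issue asks for.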



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9949) Testcase for catching DN UUID regeneration regression

2016-03-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15192161#comment-15192161
 ] 

Hadoop QA commented on HDFS-9949:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 39s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 102m 56s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 23s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
30s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 230m 7s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.TestMissingBlocksAlert |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.TestDataTransferKeepalive |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMXBean |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode |
|   | 

[jira] [Updated] (HDFS-9950) TestDecommissioningStatus fails intermittently in trunk

2016-03-12 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-9950:

Attachment: HDFS-9950.001.patch

Attached an initial patch; kindly review.

> TestDecommissioningStatus fails intermittently in trunk
> ---
>
> Key: HDFS-9950
> URL: https://issues.apache.org/jira/browse/HDFS-9950
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-9950.001.patch
>
>
> I have often seen the testcase {{TestDecommissioningStatus}} fail 
> intermittently. Looking at the failure reports, they always show this error:
> {code}
> testDecommissionStatus(org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus)
>   Time elapsed: 0.462 sec  <<< FAILURE!
> java.lang.AssertionError: Unexpected num under-replicated blocks expected:<3> 
> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.checkDecommissionStatus(TestDecommissioningStatus.java:196)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.testDecommissionStatus(TestDecommissioningStatus.java:291)
> {code}
> The cause is that the under-replicated block count checked in 
> checkDecommissionStatus from 
> {{TestDecommissioningStatus#testDecommissionStatus}} is not what the test 
> expects. 
> In this testcase, each datanode should hold 4 blocks (2 for decommission.dat 
> and 2 for decommission.dat1). The expected count of 3 on the first node is 
> because the last block of the under-construction blockCollection cannot be 
> re-replicated while its live-replica count is at least the blockManager's 
> minReplication (1 in this case), and before the second datanode is 
> decommissioned, that node still holds one live replica of the 
> under-construction blockCollection's last block. 
> So when the first node's under-replicated count changes to 4 in the failed 
> case, it indicates that the last block's live-replica count was already 0 
> before the second datanode was decommissioned. I see two possibilities that 
> could lead to this: 
> * The second datanode was decommissioned before the first one.
> * Creating the file decommission.dat1 failed, so the second datanode never 
> held that block.
> Reading the code, the test already checks the decommission-in-progress nodes 
> here:
> {code}
> if (iteration == 0) {
>   assertEquals(decommissioningNodes.size(), 1);
>   DatanodeDescriptor decommNode = decommissioningNodes.get(0);
>   checkDecommissionStatus(decommNode, 3, 0, 1);
>   checkDFSAdminDecommissionStatus(decommissioningNodes.subList(0, 1),
>       fileSys, admin);
> }
> {code}
> So the second possibility seems the more likely cause. In addition, the test 
> does not verify the block count after the file is created, so we could add a 
> check there and retry until the block count matches what is expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9950) TestDecommissioningStatus fails intermittently in trunk

2016-03-12 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-9950:

Status: Patch Available  (was: Open)

> TestDecommissioningStatus fails intermittently in trunk
> ---
>
> Key: HDFS-9950
> URL: https://issues.apache.org/jira/browse/HDFS-9950
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
>
> I have often seen the testcase {{TestDecommissioningStatus}} fail 
> intermittently. Looking at the failure reports, they always show this error:
> {code}
> testDecommissionStatus(org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus)
>   Time elapsed: 0.462 sec  <<< FAILURE!
> java.lang.AssertionError: Unexpected num under-replicated blocks expected:<3> 
> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.checkDecommissionStatus(TestDecommissioningStatus.java:196)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.testDecommissionStatus(TestDecommissioningStatus.java:291)
> {code}
> The cause is that the under-replicated block count checked in 
> checkDecommissionStatus from 
> {{TestDecommissioningStatus#testDecommissionStatus}} is not what the test 
> expects. 
> In this testcase, each datanode should hold 4 blocks (2 for decommission.dat 
> and 2 for decommission.dat1). The expected count of 3 on the first node is 
> because the last block of the under-construction blockCollection cannot be 
> re-replicated while its live-replica count is at least the blockManager's 
> minReplication (1 in this case), and before the second datanode is 
> decommissioned, that node still holds one live replica of the 
> under-construction blockCollection's last block. 
> So when the first node's under-replicated count changes to 4 in the failed 
> case, it indicates that the last block's live-replica count was already 0 
> before the second datanode was decommissioned. I see two possibilities that 
> could lead to this: 
> * The second datanode was decommissioned before the first one.
> * Creating the file decommission.dat1 failed, so the second datanode never 
> held that block.
> Reading the code, the test already checks the decommission-in-progress nodes 
> here:
> {code}
> if (iteration == 0) {
>   assertEquals(decommissioningNodes.size(), 1);
>   DatanodeDescriptor decommNode = decommissioningNodes.get(0);
>   checkDecommissionStatus(decommNode, 3, 0, 1);
>   checkDFSAdminDecommissionStatus(decommissioningNodes.subList(0, 1),
>       fileSys, admin);
> }
> {code}
> So the second possibility seems the more likely cause. In addition, the test 
> does not verify the block count after the file is created, so we could add a 
> check there and retry until the block count matches what is expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9950) TestDecommissioningStatus fails intermittently in trunk

2016-03-12 Thread Lin Yiqun (JIRA)
Lin Yiqun created HDFS-9950:
---

 Summary: TestDecommissioningStatus fails intermittently in trunk
 Key: HDFS-9950
 URL: https://issues.apache.org/jira/browse/HDFS-9950
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Lin Yiqun
Assignee: Lin Yiqun


I have often seen the testcase {{TestDecommissioningStatus}} fail 
intermittently. Looking at the failure reports, they always show this error:
{code}
testDecommissionStatus(org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus)
  Time elapsed: 0.462 sec  <<< FAILURE!
java.lang.AssertionError: Unexpected num under-replicated blocks expected:<3> 
but was:<4>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.checkDecommissionStatus(TestDecommissioningStatus.java:196)
at 
org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus.testDecommissionStatus(TestDecommissioningStatus.java:291)
{code}
The cause is that the under-replicated block count checked in 
checkDecommissionStatus from 
{{TestDecommissioningStatus#testDecommissionStatus}} is not what the test 
expects.

In this testcase, each datanode should hold 4 blocks (2 for decommission.dat 
and 2 for decommission.dat1). The expected count of 3 on the first node is 
because the last block of the under-construction blockCollection cannot be 
re-replicated while its live-replica count is at least the blockManager's 
minReplication (1 in this case), and before the second datanode is 
decommissioned, that node still holds one live replica of the 
under-construction blockCollection's last block.

So when the first node's under-replicated count changes to 4 in the failed 
case, it indicates that the last block's live-replica count was already 0 
before the second datanode was decommissioned. I see two possibilities that 
could lead to this:

* The second datanode was decommissioned before the first one.
* Creating the file decommission.dat1 failed, so the second datanode never 
held that block.

Reading the code, the test already checks the decommission-in-progress nodes 
here:
{code}
if (iteration == 0) {
  assertEquals(decommissioningNodes.size(), 1);
  DatanodeDescriptor decommNode = decommissioningNodes.get(0);
  checkDecommissionStatus(decommNode, 3, 0, 1);
  checkDFSAdminDecommissionStatus(decommissioningNodes.subList(0, 1),
      fileSys, admin);
}
{code}
So the second possibility seems the more likely cause. In addition, the test 
does not verify the block count after the file is created, so we could add a 
check there and retry until the block count matches what is expected.
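The check-and-retry idea amounts to a poll-until-expected helper: after creating the file, wait until the observed block count matches the expectation before asserting. This is a Hadoop-free illustrative sketch (the class, names, and timings are invented for the example), not the actual test code:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.IntSupplier;

public class WaitForBlocksSketch {
    // Poll until the observed count matches what the test expects, instead
    // of asserting immediately and failing intermittently.
    public static boolean waitForCount(IntSupplier observed, int expected,
                                       long timeoutMs, long intervalMs) {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
        while (observed.getAsInt() != expected) {
            if (System.nanoTime() >= deadline) {
                return false; // timed out without reaching the expected count
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Simulated counter that reaches the expected value after a few polls,
        // standing in for "number of block replicas reported for the file".
        final int[] calls = {0};
        boolean ok = waitForCount(() -> ++calls[0] >= 3 ? 2 : 1, 2, 1000, 10);
        System.out.println(ok ? "reached expected block count" : "timed out");
    }
}
```

In the real test, the supplier would query the NameNode for the replica count of decommission.dat1, so the decommission assertions only run once the cluster state the test assumes has actually been reached.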



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9937) Update dfsadmin command line help

2016-03-12 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-9937:
-
Attachment: HDFS-9937.02.patch

> Update dfsadmin command line help
> -
>
> Key: HDFS-9937
> URL: https://issues.apache.org/jira/browse/HDFS-9937
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: commandline, supportability
> Attachments: HDFS-9937.01.patch, HDFS-9937.02.patch
>
>
> The dfsadmin command-line top-level help menu is not consistent with the 
> detailed help menu:
> * -safemode is missing the options -wait and -forceExit 
> * -restoreFailedStorage options are not described consistently 
> (true/false/check, or Set/Unset/Check?)
> * -setSpaceQuota optionally takes a -storageType parameter, but it's not 
> clear what the available options are. (They seem to be SSD, DISK, and 
> ARCHIVE, from HdfsQuotaAdminGuide.html.)
> * -reconfig seems to also take namenode as a parameter



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9005) Provide support for upgrade domain script

2016-03-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15192126#comment-15192126
 ] 

Hadoop QA commented on HDFS-9005:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 34s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 32s 
{color} | {color:red} hadoop-hdfs-project: patch generated 5 new + 437 
unchanged - 9 fixed = 442 total (was 446) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 39s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 23s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 42s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 31s 
{color} | {color:red} Patch generated 1 ASF License 

[jira] [Commented] (HDFS-9947) Block#toString should not output information from derived classes

2016-03-12 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15192091#comment-15192091
 ] 

Brahma Reddy Battula commented on HDFS-9947:


HDFS-9948 is a dupe of this issue.

> Block#toString should not output information from derived classes
> -
>
> Key: HDFS-9947
> URL: https://issues.apache.org/jira/browse/HDFS-9947
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
>
> {{Block#toString}} should not output information from derived classes. 
> Thanks to [~cnauroth] for spotting this bug.
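The pitfall behind this issue can be illustrated generically: when a base class's {{toString}} includes subclass state, any log line that prints a base-type reference leaks detail that callers did not ask for. A minimal sketch of keeping {{toString}} limited to base-class fields, using invented class names rather than the actual HDFS types:

```java
public class ToStringSketch {
    static class Block {
        final long id;
        Block(long id) { this.id = id; }
        // Using only the base class's own fields keeps the output stable
        // regardless of which subclass the instance happens to be.
        @Override
        public final String toString() { return "blk_" + id; }
    }

    static class BlockInfo extends Block {
        final int replication;
        BlockInfo(long id, int replication) {
            super(id);
            this.replication = replication;
        }
        // Subclass-specific detail goes in its own method, not toString().
        String toDetailString() { return toString() + " repl=" + replication; }
    }

    public static void main(String[] args) {
        Block b = new BlockInfo(1073741825L, 3);
        System.out.println(b); // prints "blk_1073741825": no derived fields leak
        System.out.println(((BlockInfo) b).toDetailString());
    }
}
```

Making {{toString}} final (or at least not dependent on overridable methods) is one way to enforce this; the actual fix chosen in HDFS may differ.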



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9948) Block#toString should not output information from derived classes

2016-03-12 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15192090#comment-15192090
 ] 

Brahma Reddy Battula commented on HDFS-9948:


Dupe of HDFS-9947?

> Block#toString should not output information from derived classes
> -
>
> Key: HDFS-9948
> URL: https://issues.apache.org/jira/browse/HDFS-9948
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
>
> {{Block#toString}} should not output information from derived classes. 
> Thanks to [~cnauroth] for spotting this bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9947) Block#toString should not output information from derived classes

2016-03-12 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15192089#comment-15192089
 ] 

Brahma Reddy Battula commented on HDFS-9947:


Dupe of HDFS-9947?

> Block#toString should not output information from derived classes
> -
>
> Key: HDFS-9947
> URL: https://issues.apache.org/jira/browse/HDFS-9947
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
>
> {{Block#toString}} should not output information from derived classes. 
> Thanks to [~cnauroth] for spotting this bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9949) Testcase for catching DN UUID regeneration regression

2016-03-12 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-9949:
--
Target Version/s: 3.0.0, 2.8.0, 2.9.0
  Status: Patch Available  (was: Open)

> Testcase for catching DN UUID regeneration regression
> -
>
> Key: HDFS-9949
> URL: https://issues.apache.org/jira/browse/HDFS-9949
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Attachments: HDFS-9949.000.branch-2.7.not-for-commit.patch, 
> HDFS-9949.000.patch
>
>
> In the following scenario, in releases without HDFS-8211, the DN may 
> regenerate its UUIDs unintentionally.
> 0. Consider a DN with two disks {{/data1/dfs/dn,/data2/dfs/dn}}
> 1. Stop DN
> 2. Unmount the second disk, {{/data2/dfs/dn}}
> 3. Create (in the scenario, this was an accident) /data2/dfs/dn on the root 
> path
> 4. Start DN
> 5. The DN now considers /data2/dfs/dn empty, so it formats it; during the 
> format it uses {{datanode.getDatanodeUuid()}}, which is null until 
> register() is called.
> 6. As a result, after the directory loading, {{datanode.checkDatanodeUuid()}} 
> finds its condition satisfied and generates a new UUID, which is written to 
> all disks: {{/data1/dfs/dn/current/VERSION}} and 
> {{/data2/dfs/dn/current/VERSION}}.
> 7. Stop the DN (in the scenario, this was when the mistake of the unmounted 
> disk was realised)
> 8. Mount the second disk back again at {{/data2/dfs/dn}}, causing the 
> {{VERSION}} file on it to be the original one again (mounting masks the root 
> path we last generated upon).
> 9. The DN fails to start up because it finds mismatched UUIDs between the 
> two disks.
> The DN should not generate a new UUID if one of the storage disks already 
> has the older one.
> HDFS-8211 unintentionally fixes this by changing the 
> {{datanode.getDatanodeUuid()}} function to rely on the {{DataStorage}} 
> representation of the UUID instead of the {{DatanodeID}} object, which only 
> becomes available (non-null) _after_ registration.
> It would still be good to add a direct test case for the above scenario that 
> passes on trunk and branch-2 but fails on branch-2.7 and lower, so we can 
> catch a regression around this in the future.
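The guard the scenario calls for, never regenerating a UUID when some storage directory already records one, can be sketched independently of the DataNode code. The file layout below mirrors the scenario's {{current/VERSION}} files, but the helper and property handling are illustrative, not the actual {{DataStorage}} implementation:

```java
import java.io.*;
import java.nio.file.*;
import java.util.Properties;
import java.util.UUID;

public class UuidCheckSketch {
    // Reuse a UUID found in any existing VERSION file; generate a fresh one
    // only when no storage directory has recorded an identity yet.
    public static String resolveDatanodeUuid(Path... storageDirs) {
        for (Path dir : storageDirs) {
            Path version = dir.resolve("current").resolve("VERSION");
            if (Files.exists(version)) {
                Properties p = new Properties();
                try (InputStream in = Files.newInputStream(version)) {
                    p.load(in);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
                String uuid = p.getProperty("datanodeUuid");
                if (uuid != null) {
                    return uuid; // never regenerate over an existing identity
                }
            }
        }
        return UUID.randomUUID().toString();
    }

    public static void main(String[] args) throws IOException {
        Path data1 = Files.createTempDirectory("dn1");
        Path data2 = Files.createTempDirectory("dn2"); // "fresh": no VERSION yet
        Files.createDirectories(data1.resolve("current"));
        Files.write(data1.resolve("current").resolve("VERSION"),
                "datanodeUuid=existing-uuid\n".getBytes());
        // Even though data2 looks freshly formatted, the identity recorded on
        // data1 wins, which is the behavior step 6 of the scenario violated.
        System.out.println(resolveDatanodeUuid(data1, data2)); // existing-uuid
    }
}
```

A regression test along these lines would assert that after restarting a DN with one blank directory, the surviving directory's UUID is the one written everywhere.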



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9949) Testcase for catching DN UUID regeneration regression

2016-03-12 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-9949:
--
Attachment: (was: HDFS-9949.000.patch)

> Testcase for catching DN UUID regeneration regression
> -
>
> Key: HDFS-9949
> URL: https://issues.apache.org/jira/browse/HDFS-9949
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Attachments: HDFS-9949.000.branch-2.7.not-for-commit.patch, 
> HDFS-9949.000.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9949) Testcase for catching DN UUID regeneration regression

2016-03-12 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-9949:
--
Attachment: HDFS-9949.000.patch

> Testcase for catching DN UUID regeneration regression
> -
>
> Key: HDFS-9949
> URL: https://issues.apache.org/jira/browse/HDFS-9949
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Attachments: HDFS-9949.000.branch-2.7.not-for-commit.patch, 
> HDFS-9949.000.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9949) Testcase for catching DN UUID regeneration regression

2016-03-12 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-9949:
--
Attachment: HDFS-9949.000.branch-2.7.not-for-commit.patch
HDFS-9949.000.patch

> Testcase for catching DN UUID regeneration regression
> -
>
> Key: HDFS-9949
> URL: https://issues.apache.org/jira/browse/HDFS-9949
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Attachments: HDFS-9949.000.branch-2.7.not-for-commit.patch, 
> HDFS-9949.000.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9949) Testcase for catching DN UUID regeneration regression

2016-03-12 Thread Harsh J (JIRA)
Harsh J created HDFS-9949:
-

 Summary: Testcase for catching DN UUID regeneration regression
 Key: HDFS-9949
 URL: https://issues.apache.org/jira/browse/HDFS-9949
 Project: Hadoop HDFS
  Issue Type: Test
Affects Versions: 2.6.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9005) Provide support for upgrade domain script

2016-03-12 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9005:
--
Attachment: HDFS-9005-3.patch

The failed unit tests aren't related. Here is a new patch that addresses the 
findbugs, checkstyle, whitespace, and javadoc issues.

> Provide support for upgrade domain script
> -
>
> Key: HDFS-9005
> URL: https://issues.apache.org/jira/browse/HDFS-9005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9005-2.patch, HDFS-9005-3.patch, HDFS-9005.patch
>
>
> As part of the upgrade domain feature, we need to provide a mechanism to 
> specify the upgrade domain for each datanode. One way to accomplish that is 
> to allow admins to specify an upgrade domain script that takes a DN IP or 
> hostname as input and returns the upgrade domain. The namenode will then use 
> it at run time to set {{DatanodeInfo}}'s upgrade domain string. The 
> configuration can be something like:
> {noformat}
> <property>
>   <name>dfs.namenode.upgrade.domain.script.file.name</name>
>   <value>/etc/hadoop/conf/upgrade-domain.sh</value>
> </property>
> {noformat}
> just like the topology script.
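A rough sketch of the script-invocation mechanism described above (not the attached patch; the class name and the exit-code/one-line-of-stdout contract are assumptions for illustration): the namenode-side resolver shells out to the configured script with a DN host as its argument and reads the upgrade domain from stdout.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

// Hypothetical resolver that calls an admin-supplied upgrade-domain script.
public class UpgradeDomainScriptResolver {
    private final String scriptPath;

    public UpgradeDomainScriptResolver(String scriptPath) {
        this.scriptPath = scriptPath;
    }

    // Run the script for one datanode and return the first line it prints,
    // or null if the script fails or prints nothing.
    public String resolve(String hostOrIp)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(scriptPath, hostOrIp).start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream(),
                    StandardCharsets.UTF_8))) {
            String line = r.readLine();
            // Treat a clean exit plus one non-empty line as a valid domain.
            return (p.waitFor() == 0 && line != null && !line.trim().isEmpty())
                ? line.trim() : null;
        }
    }
}
```

In this sketch a script such as {{/etc/hadoop/conf/upgrade-domain.sh}} would print, say, the rack or failure-group name for the given host, and the resolver's result would feed {{DatanodeInfo}}'s upgrade domain string.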



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client

2016-03-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191996#comment-15191996
 ] 

stack commented on HDFS-3702:
-

bq. I am also curious about the answer to Devaraj's question. HDFS-2576 was 
added specifically for HBase. Can it address your use case? This avoids any 
changes to HDFS.

[~arpiagariu]
On [~devaraj]'s question from nearly three years ago about why not HDFS-2576: 
the 'favored nodes' feature was never fully plumbed into HBase, so to my 
knowledge no one ever used it. While there are rumors that our brothers and 
sisters at Y! are in the process of reviving it, the original implementors of 
'favored nodes', FB, now consider it a 'mistake' [1] and state they'll 
"...have a party when FB no longer has this operational nightmare." Given this 
report, the HBase community would be wary of going the 'favored nodes' route.

IIUC, to make use of it in this case, the 'client' would have to have 
NN-like awareness of cluster members and pick placements as the NN would, 
excluding localhost? That seems like a lot to ask of the client/user of 
DFSClient.



1. 
https://issues.apache.org/jira/browse/HBASE-6721?focusedCommentId=14720273=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14720273

> Add an option for NOT writing the blocks locally if there is a datanode on 
> the same box as the client
> -
>
> Key: HDFS-3702
> URL: https://issues.apache.org/jira/browse/HDFS-3702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.5.1
>Reporter: Nicolas Liochon
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, 
> HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch, 
> HDFS-3702.005.patch, HDFS-3702.006.patch, HDFS-3702.007.patch, 
> HDFS-3702.008.patch, HDFS-3702_Design.pdf
>
>
> This is useful for Write-Ahead-Logs: these files are written for recovery 
> only, and are not read when there are no failures.
> Taking HBase as an example, these files will be read only if the process that 
> wrote them (the 'HBase regionserver') dies. That will likely be caused by a 
> hardware failure, in which case the corresponding datanode will be dead as 
> well. So we're writing 3 replicas, but in reality only 2 of them are really 
> useful.
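The requested option can be illustrated with a toy placement model (names are illustrative, not the actual BlockPlacementPolicy): when the "no local write" flag is set, the datanode co-located with the client is dropped from the candidate list before replica targets are chosen, so all of a WAL's replicas land off-box.

```java
import java.util.*;
import java.util.stream.Collectors;

// Toy model of the placement option discussed in this issue.
public class NoLocalWritePlacement {

    // Choose up to `replication` targets; with noLocalWrite set, skip the
    // node on the client's host, falling back to it only if nothing else
    // exists (degenerate single-node cluster).
    static List<String> chooseTargets(List<String> liveNodes,
                                      String clientHost,
                                      int replication,
                                      boolean noLocalWrite) {
        List<String> candidates = noLocalWrite
            ? liveNodes.stream()
                       .filter(n -> !n.equals(clientHost))
                       .collect(Collectors.toList())
            : new ArrayList<>(liveNodes);
        if (candidates.isEmpty()) {
            candidates = new ArrayList<>(liveNodes);
        }
        return candidates.subList(0, Math.min(replication, candidates.size()));
    }
}
```

In the WAL case above, excluding the client host means that if the regionserver's machine dies (taking its datanode with it), all written replicas remain readable for recovery.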



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9405) When starting a file, NameNode should generate EDEK in a separate thread

2016-03-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191984#comment-15191984
 ] 

Hadoop QA commented on HDFS-9405:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 0s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 31s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
14s {color} | {color:green} root: patch generated 0 new + 199 unchanged - 1 
fixed = 199 total (was 200) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 28s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 34s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 2s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 43s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 246m 58s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 

[jira] [Commented] (HDFS-9918) Erasure Coding: Sort located striped blocks based on decommissioned states

2016-03-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191884#comment-15191884
 ] 

Hadoop QA commented on HDFS-9918:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 10s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 40s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 193m 39s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.TestFileAppend |
|   | hadoop.hdfs.TestPersistBlocks |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs |
\\
\\
|| Subsystem || Report/Notes ||
| Docker 

[jira] [Commented] (HDFS-9941) Do not log StandbyException on NN, other minor logging fixes

2016-03-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15191847#comment-15191847
 ] 

Hadoop QA commented on HDFS-9941:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 1 new + 
83 unchanged - 1 fixed = 84 total (was 84) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 45s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 52s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 186m 36s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.TestDFSClientRetries |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
\\
\\
||