[jira] [Commented] (HDFS-11025) TestDiskspaceQuotaUpdate fails in trunk due to Bind exception

2016-10-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590879#comment-15590879
 ] 

Hudson commented on HDFS-11025:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10641 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10641/])
HDFS-11025. TestDiskspaceQuotaUpdate fails in trunk due to Bind exception (brahma: rev 
73504b1bdc4b93c64741de5eb9d022817fdfa22f)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDiskspaceQuotaUpdate.java


> TestDiskspaceQuotaUpdate fails in trunk due to Bind exception
> -
>
> Key: HDFS-11025
> URL: https://issues.apache.org/jira/browse/HDFS-11025
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11025.001.patch
>
>
> The test {{TestDiskspaceQuotaUpdate}} sometimes fails after HDFS-10843; the 
> failing run is at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/17200/testReport/. The 
> stack trace:
> {code} 
> java.net.BindException: Problem binding to [localhost:49195] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
> {code} 
> I found the bind exception happened in the new test method 
> {{TestDiskspaceQuotaUpdate.testQuotaIssuesWhileCommitting}}. The relevant 
> code:
> {code}
>   public void testQuotaIssuesWhileCommitting() throws Exception {
>     ...
>     try {
>       for (int i = REPLICATION - 1; i > 0; i--) {
>         dnprops.add(cluster.stopDataNode(i));
>       }
>       ...
>     } finally {
>       for (MiniDFSCluster.DataNodeProperties dnprop : dnprops) {
>         cluster.restartDataNode(dnprop, true);
>       }
>       cluster.waitActive();
>     }
>   }
> {code}
> I think we can make a simple fix to {{cluster.restartDataNode(dnprop, 
> true);}}. The tests in {{TestDiskspaceQuotaUpdate}} only care that the 
> cluster is up and running, so I think this change will not affect the 
> current logic.
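For illustration, here is a minimal sketch of the proposed change, assuming the 
{{MiniDFSCluster}} overload in which the second argument is {{keepPort}} (the 
parameter referenced later in this thread for HDFS-10730):
{code}
} finally {
  for (MiniDFSCluster.DataNodeProperties dnprop : dnprops) {
    // Without keepPort=true, the restarted DataNode binds a fresh
    // ephemeral port instead of re-binding its old one, avoiding the
    // BindException when another job has grabbed that port meanwhile.
    cluster.restartDataNode(dnprop);
  }
  cluster.waitActive();
}
{code}
Since the tests only assert against a running cluster, they do not depend on the 
DataNodes keeping their original ports.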



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11033) Add documents for native raw erasure coder in XOR codes

2016-10-19 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590875#comment-15590875
 ] 

SammiChen commented on HDFS-11033:
--

Thanks so much Wei-Chiu for reviewing the patch! I have uploaded a new patch 
addressing the issue. 

> Add documents for native raw erasure coder in XOR codes
> ---
>
> Key: HDFS-11033
> URL: https://issues.apache.org/jira/browse/HDFS-11033
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: SammiChen
>Assignee: SammiChen
> Attachments: HDFS-11033-v1.patch, HDFS-11033-v2.patch
>
>
> Add document for native raw erasure coder in XOR codes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11033) Add documents for native raw erasure coder in XOR codes

2016-10-19 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-11033:
-
Attachment: HDFS-11033-v2.patch

> Add documents for native raw erasure coder in XOR codes
> ---
>
> Key: HDFS-11033
> URL: https://issues.apache.org/jira/browse/HDFS-11033
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: SammiChen
>Assignee: SammiChen
> Attachments: HDFS-11033-v1.patch, HDFS-11033-v2.patch
>
>
> Add document for native raw erasure coder in XOR codes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9482) Replace DatanodeInfo constructors with a builder pattern

2016-10-19 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590872#comment-15590872
 ] 

Arpit Agarwal commented on HDFS-9482:
-

Thank you for contributing this improvement [~brahmareddy]. A few comments:
# Fields {{from}} and {{nodeID}} in DatanodeInfoBuilder should be removed. They 
seem to be unused.
# DatanodeInfo has a number of unused constructors now. We should probably 
remove them.
# DatanodeInfoBuilder can be a static nested class of DatanodeInfo.
# Generally when we use a builder pattern, the constructor is made private to 
enforce construction via the builder. Is it possible to do that here?

Looks good otherwise.
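For reference, a minimal sketch of the suggested shape, with a static nested 
builder and a private outer constructor; the fields here are illustrative, not 
the full DatanodeInfo field set:
{code}
public class DatanodeInfo {
  private final String ipAddr;
  private final String hostName;

  // Private constructor: instances can only be created via the builder.
  private DatanodeInfo(DatanodeInfoBuilder b) {
    this.ipAddr = b.ipAddr;
    this.hostName = b.hostName;
  }

  public static class DatanodeInfoBuilder {
    private String ipAddr;
    private String hostName;

    public DatanodeInfoBuilder setIpAddr(String ipAddr) {
      this.ipAddr = ipAddr;
      return this;
    }

    public DatanodeInfoBuilder setHostName(String hostName) {
      this.hostName = hostName;
      return this;
    }

    public DatanodeInfo build() {
      return new DatanodeInfo(this);
    }
  }
}
{code}
Callers would then write {{new DatanodeInfo.DatanodeInfoBuilder().setIpAddr("127.0.0.1").setHostName("host1").build()}}, 
and the unused constructors can be deleted.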

> Replace DatanodeInfo constructors with a builder pattern
> 
>
> Key: HDFS-9482
> URL: https://issues.apache.org/jira/browse/HDFS-9482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9482-002.patch, HDFS-9482.patch
>
>
> As per [~arpitagarwal]'s comment 
> [here|https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761], 
> replace the DatanodeInfo constructors with a builder pattern. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10730) Fix some failed tests due to BindException

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590870#comment-15590870
 ] 

Hadoop QA commented on HDFS-10730:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 57m 
50s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10730 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834316/HDFS-10730.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b4a4b6580373 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8650cc8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17233/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17233/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix some failed tests due to BindException
> --
>
> Key: HDFS-10730
> URL: https://issues.apache.org/jira/browse/HDFS-10730
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10730.001.patch, HDFS-10730.002.patch
>
>
> In HDFS-10723, [~kihwal] suggested that 
> {quote}
> it is not a good idea to hard-code or reuse the same port number in unit 
> tests. Because the jenkins slave can run multiple jobs at the same time.
> {quote}
> Then I collected some tests 

[jira] [Commented] (HDFS-10699) Log object instance get incorrectly in TestDFSAdmin

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590847#comment-15590847
 ] 

Hadoop QA commented on HDFS-10699:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10699 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820627/HDFS-10699.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0faa681be3ef 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8650cc8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17232/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17232/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17232/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Log object instance get incorrectly in TestDFSAdmin
> ---
>
> Key: HDFS-10699
> URL: https://issues.apache.org/jira/browse/HDFS-10699
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: 

[jira] [Updated] (HDFS-8410) Add computation time metrics to datanode for ECWorker

2016-10-19 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-8410:

Attachment: HDFS-8410-005.patch

1. add "final" to metric variable
2. the previous checkstyle issue is about "the variable should has get/set 
function". Leave it alone. 

> Add computation time metrics to datanode for ECWorker
> -
>
> Key: HDFS-8410
> URL: https://issues.apache.org/jira/browse/HDFS-8410
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: SammiChen
> Attachments: HDFS-8410-001.patch, HDFS-8410-002.patch, 
> HDFS-8410-003.patch, HDFS-8410-004.patch, HDFS-8410-005.patch
>
>
> This is a sub task of HDFS-7674. It adds time metric for ec decode work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11025) TestDiskspaceQuotaUpdate fails in trunk due to Bind exception

2016-10-19 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-11025:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2, and branch-2.8. [~linyiqun], thanks for your 
contribution, and thanks to [~ebadger] and [~xkrogen] for the additional review.

> TestDiskspaceQuotaUpdate fails in trunk due to Bind exception
> -
>
> Key: HDFS-11025
> URL: https://issues.apache.org/jira/browse/HDFS-11025
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11025.001.patch
>
>
> The test {{TestDiskspaceQuotaUpdate}} sometimes fails after HDFS-10843; the 
> failing run is at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/17200/testReport/. The 
> stack trace:
> {code} 
> java.net.BindException: Problem binding to [localhost:49195] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
> {code} 
> I found the bind exception happened in the new test method 
> {{TestDiskspaceQuotaUpdate.testQuotaIssuesWhileCommitting}}. The relevant 
> code:
> {code}
>   public void testQuotaIssuesWhileCommitting() throws Exception {
>     ...
>     try {
>       for (int i = REPLICATION - 1; i > 0; i--) {
>         dnprops.add(cluster.stopDataNode(i));
>       }
>       ...
>     } finally {
>       for (MiniDFSCluster.DataNodeProperties dnprop : dnprops) {
>         cluster.restartDataNode(dnprop, true);
>       }
>       cluster.waitActive();
>     }
>   }
> {code}
> I think we can make a simple fix to {{cluster.restartDataNode(dnprop, 
> true);}}. The tests in {{TestDiskspaceQuotaUpdate}} only care that the 
> cluster is up and running, so I think this change will not affect the 
> current logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9480) Expose nonDfsUsed via StorageTypeStats

2016-10-19 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590835#comment-15590835
 ] 

Brahma Reddy Battula commented on HDFS-9480:


[~arpitagarwal] thanks a lot for the review. Will commit today.

>  Expose nonDfsUsed via StorageTypeStats 
> 
>
> Key: HDFS-9480
> URL: https://issues.apache.org/jira/browse/HDFS-9480
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9480-002.patch, HDFS-9480.patch
>
>
>  Expose nonDfsUsed via StorageTypeStats. See the comment [here | 
> https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761]
>  from Arpit. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10730) Fix some failed tests due to BindException

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590826#comment-15590826
 ] 

Hadoop QA commented on HDFS-10730:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10730 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12822562/HDFS-10730.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3390c5c44d77 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8650cc8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17231/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17231/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17231/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix some failed tests due to BindException
> --
>
> Key: HDFS-10730
> URL: https://issues.apache.org/jira/browse/HDFS-10730
> Project: Hadoop HDFS
>  Issue Type: 

[jira] [Commented] (HDFS-9480) Expose nonDfsUsed via StorageTypeStats

2016-10-19 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590822#comment-15590822
 ] 

Arpit Agarwal commented on HDFS-9480:
-

+1 thanks [~brahmareddy]!

>  Expose nonDfsUsed via StorageTypeStats 
> 
>
> Key: HDFS-9480
> URL: https://issues.apache.org/jira/browse/HDFS-9480
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-9480-002.patch, HDFS-9480.patch
>
>
>  Expose nonDfsUsed via StorageTypeStats. See the comment [here | 
> https://issues.apache.org/jira/browse/HDFS-9038?focusedCommentId=15018761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15018761]
>  from Arpit. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8410) Add computation time metrics to datanode for ECWorker

2016-10-19 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590805#comment-15590805
 ] 

SammiChen commented on HDFS-8410:
-

It takes about 2 milliseconds on my desktop to decode a stripe group with 6 
blocks, each block 64k. This decoding time mostly depends on how fast the CPU 
is. Mine is an "Intel(R) Core(TM) i5-4460 CPU @ 3.20GHz" with 4 cores, not a 
leading-edge CPU model. Given that CPUs are becoming more and more powerful, I 
think it is not safe to use millisecond granularity to record a single decoding 
time. We can choose between nanoseconds and microseconds. I would prefer 
nanoseconds for one reason: the value can be obtained directly from 
{{System.nanoTime()}}. If microseconds were used, there would be one extra 
division by 1000, which is not good from a performance point of view. And a 
{{long}} can hold hundreds of years' worth of nanoseconds, so an overflow is 
not a near-term concern. 
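A self-contained sketch of the timing approach described above; the accumulator 
field and method names are hypothetical, not the actual DataNode metrics API:
{code}
public class DecodeTimingSketch {
  // A long accumulator of nanoseconds can run for centuries before overflowing.
  private long ecDecodingTimeNanos;

  void timedDecode(Runnable decodeTask) {
    long start = System.nanoTime();  // monotonic, nanosecond granularity
    decodeTask.run();                // stand-in for the ~2 ms stripe decode
    ecDecodingTimeNanos += System.nanoTime() - start;  // no division needed
  }

  public static void main(String[] args) {
    DecodeTimingSketch sketch = new DecodeTimingSketch();
    sketch.timedDecode(() -> { });
    System.out.println("decode ns: " + sketch.ecDecodingTimeNanos);
  }
}
{code}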


> Add computation time metrics to datanode for ECWorker
> -
>
> Key: HDFS-8410
> URL: https://issues.apache.org/jira/browse/HDFS-8410
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: SammiChen
> Attachments: HDFS-8410-001.patch, HDFS-8410-002.patch, 
> HDFS-8410-003.patch, HDFS-8410-004.patch
>
>
> This is a sub task of HDFS-7674. It adds time metric for ec decode work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10730) Fix some failed tests due to BindException

2016-10-19 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10730:
-
Attachment: HDFS-10730.002.patch

Thanks [~brahmareddy] for the comments.
{quote}
As keepPort is false by default, how about changing it as below, which is 
similar to HDFS-11025?
{quote}
Done. Attached a new patch.

> Fix some failed tests due to BindException
> --
>
> Key: HDFS-10730
> URL: https://issues.apache.org/jira/browse/HDFS-10730
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10730.001.patch, HDFS-10730.002.patch
>
>
> In HDFS-10723, [~kihwal] suggested that 
> {quote}
> it is not a good idea to hard-code or reuse the same port number in unit 
> tests. Because the jenkins slave can run multiple jobs at the same time.
> {quote}
> Then I collected some tests that failed for this reason in recent Jenkins 
> builds.
> Finally I found these two failed tests: 
> {{TestFileChecksum.testStripedFileChecksumWithMissedDataBlocks1}}(https://builds.apache.org/job/PreCommit-HDFS-Build/16301/testReport/)
>  and 
> {{TestDecommissionWithStriped.testDecommissionWithURBlockForSameBlockGroup}}(https://builds.apache.org/job/PreCommit-HDFS-Build/16257/testReport/).
> The stack traces:
> {code}
> java.net.BindException: Problem binding to [localhost:57241] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:538)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:811)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2611)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:562)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:537)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:953)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1361)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:488)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2658)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2546)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2593)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2259)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2298)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2278)
>   at 
> org.apache.hadoop.hdfs.TestFileChecksum.getFileChecksum(TestFileChecksum.java:482)
>   at 
> org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocks1(TestFileChecksum.java:182)
> {code}
> {code}
> java.net.BindException: Problem binding to [localhost:54191] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:530)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:519)
>   at 
> org.apache.hadoop.hdfs.net.TcpPeerServer.<init>(TcpPeerServer.java:52)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:1082)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1348)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:488)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2658)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2546)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2593)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2259)
>   at 
> 

[jira] [Commented] (HDFS-10699) Log object instance get incorrectly in TestDFSAdmin

2016-10-19 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590688#comment-15590688
 ] 

Brahma Reddy Battula commented on HDFS-10699:
-

Nice catch. LGTM, will commit.

> Log object instance get incorrectly in TestDFSAdmin
> ---
>
> Key: HDFS-10699
> URL: https://issues.apache.org/jira/browse/HDFS-10699
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10699.001.patch
>
>
> In class TestDFSAdmin, an incorrect Log object instance is obtained. The code:
> {code}
> public class TestDFSAdmin {
>   private static final Log LOG = LogFactory.getLog(DFSAdmin.class);
>   private Configuration conf = null;
>   private MiniDFSCluster cluster;
>   private DFSAdmin admin;
>   ...
> {code}
> Here the class name {{DFSAdmin}} should be {{TestDFSAdmin}}.
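A minimal sketch of the fix the description proposes (assuming the 
commons-logging imports already used by the test):
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class TestDFSAdmin {
  // Pass TestDFSAdmin.class so log output is attributed to the test class
  // rather than to DFSAdmin.
  private static final Log LOG = LogFactory.getLog(TestDFSAdmin.class);
}
{code}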



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10730) Fix some failed tests due to BindException

2016-10-19 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590665#comment-15590665
 ] 

Brahma Reddy Battula edited comment on HDFS-10730 at 10/20/16 3:49 AM:
---

As {{keepPort}} is {{false}} by default, how about changing it as below, which 
is similar to HDFS-11025?
{{cluster.restartDataNode(dnprop);}}


was (Author: brahmareddy):
How about change like below which is similar to HDFS-11025..?
{{cluster.restartDataNode(dnprop);}}

> Fix some failed tests due to BindException
> --
>
> Key: HDFS-10730
> URL: https://issues.apache.org/jira/browse/HDFS-10730
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10730.001.patch
>
>
> In HDFS-10723, [~kihwal] suggested that 
> {quote}
> it is not a good idea to hard-code or reuse the same port number in unit 
> tests. Because the jenkins slave can run multiple jobs at the same time.
> {quote}
> Then I collected some tests that failed for this reason in recent Jenkins 
> builds.
> Finally I found these two failed tests: 
> {{TestFileChecksum.testStripedFileChecksumWithMissedDataBlocks1}}(https://builds.apache.org/job/PreCommit-HDFS-Build/16301/testReport/)
>  and 
> {{TestDecommissionWithStriped.testDecommissionWithURBlockForSameBlockGroup}}(https://builds.apache.org/job/PreCommit-HDFS-Build/16257/testReport/).
> The stack traces:
> {code}
> java.net.BindException: Problem binding to [localhost:57241] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:538)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:811)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2611)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:562)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:537)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:953)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1361)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:488)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2658)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2546)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2593)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2259)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2298)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2278)
>   at 
> org.apache.hadoop.hdfs.TestFileChecksum.getFileChecksum(TestFileChecksum.java:482)
>   at 
> org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocks1(TestFileChecksum.java:182)
> {code}
> {code}
> java.net.BindException: Problem binding to [localhost:54191] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:530)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:519)
>   at 
> org.apache.hadoop.hdfs.net.TcpPeerServer.<init>(TcpPeerServer.java:52)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:1082)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1348)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:488)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2658)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2546)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2593)
>   at 
> 

[jira] [Comment Edited] (HDFS-10730) Fix some failed tests due to BindException

2016-10-19 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590665#comment-15590665
 ] 

Brahma Reddy Battula edited comment on HDFS-10730 at 10/20/16 3:44 AM:
---

How about change like below which is similar to HDFS-11025..?
{{cluster.restartDataNode(dnprop);}}


was (Author: brahmareddy):
{{cluster.restartDataNode(dnprop);}}

> Fix some failed tests due to BindException
> --
>
> Key: HDFS-10730
> URL: https://issues.apache.org/jira/browse/HDFS-10730
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10730.001.patch
>
>
> In HDFS-10723, [~kihwal] suggested that 
> {quote}
> it is not a good idea to hard-code or reuse the same port number in unit 
> tests. Because the jenkins slave can run multiple jobs at the same time.
> {quote}
> Then I collected some tests that failed for this reason in recent Jenkins 
> builds.
> Finally I found these two failed tests: 
> {{TestFileChecksum.testStripedFileChecksumWithMissedDataBlocks1}}(https://builds.apache.org/job/PreCommit-HDFS-Build/16301/testReport/)
>  and 
> {{TestDecommissionWithStriped.testDecommissionWithURBlockForSameBlockGroup}}(https://builds.apache.org/job/PreCommit-HDFS-Build/16257/testReport/).
> The stack traces:
> {code}
> java.net.BindException: Problem binding to [localhost:57241] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:538)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:811)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2611)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:562)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:537)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:953)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1361)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:488)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2658)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2546)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2593)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2259)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2298)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2278)
>   at 
> org.apache.hadoop.hdfs.TestFileChecksum.getFileChecksum(TestFileChecksum.java:482)
>   at 
> org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocks1(TestFileChecksum.java:182)
> {code}
> {code}
> java.net.BindException: Problem binding to [localhost:54191] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:530)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:519)
>   at 
> org.apache.hadoop.hdfs.net.TcpPeerServer.<init>(TcpPeerServer.java:52)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:1082)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1348)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:488)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2658)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2546)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2593)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2259)
>   at 
> 

[jira] [Commented] (HDFS-10730) Fix some failed tests due to BindException

2016-10-19 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590665#comment-15590665
 ] 

Brahma Reddy Battula commented on HDFS-10730:
-

{{cluster.restartDataNode(dnprop);}}

> Fix some failed tests due to BindException
> --
>
> Key: HDFS-10730
> URL: https://issues.apache.org/jira/browse/HDFS-10730
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10730.001.patch
>
>
> In HDFS-10723, [~kihwal] suggested that 
> {quote}
> it is not a good idea to hard-code or reuse the same port number in unit 
> tests. Because the jenkins slave can run multiple jobs at the same time.
> {quote}
> Then I collected some tests that failed for this reason in recent Jenkins 
> builds.
> Finally I found these two failed tests: 
> {{TestFileChecksum.testStripedFileChecksumWithMissedDataBlocks1}}(https://builds.apache.org/job/PreCommit-HDFS-Build/16301/testReport/)
>  and 
> {{TestDecommissionWithStriped.testDecommissionWithURBlockForSameBlockGroup}}(https://builds.apache.org/job/PreCommit-HDFS-Build/16257/testReport/).
> The stack traces:
> {code}
> java.net.BindException: Problem binding to [localhost:57241] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:538)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:811)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:2611)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:562)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:537)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:953)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1361)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:488)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2658)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2546)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2593)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2259)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2298)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2278)
>   at 
> org.apache.hadoop.hdfs.TestFileChecksum.getFileChecksum(TestFileChecksum.java:482)
>   at 
> org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocks1(TestFileChecksum.java:182)
> {code}
> {code}
> java.net.BindException: Problem binding to [localhost:54191] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:530)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:519)
>   at 
> org.apache.hadoop.hdfs.net.TcpPeerServer.<init>(TcpPeerServer.java:52)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:1082)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1348)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:488)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2658)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2546)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2593)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2259)
>   at 
> org.apache.hadoop.hdfs.TestDecommissionWithStriped.testDecommissionWithURBlockForSameBlockGroup(TestDecommissionWithStriped.java:255)
> {code}
> We can make a change to update the param value for {{keepPort}} from
> {code}
> 

[jira] [Commented] (HDFS-11025) TestDiskspaceQuotaUpdate fails in trunk due to Bind exception

2016-10-19 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590663#comment-15590663
 ] 

Brahma Reddy Battula commented on HDFS-11025:
-

LGTM, will commit

> TestDiskspaceQuotaUpdate fails in trunk due to Bind exception
> -
>
> Key: HDFS-11025
> URL: https://issues.apache.org/jira/browse/HDFS-11025
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11025.001.patch
>
>
> The test {{TestDiskspaceQuotaUpdate}} sometimes fails after HDFS-10843; the 
> failing run is at 
> https://builds.apache.org/job/PreCommit-HDFS-Build/17200/testReport/. The 
> stack trace:
> {code} 
> java.net.BindException: Problem binding to [localhost:49195] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
> {code} 
> I found the bind exception happened in the new test method 
> {{TestDiskspaceQuotaUpdate.testQuotaIssuesWhileCommitting}}. The relevant 
> code:
> {code}
>   public void testQuotaIssuesWhileCommitting() throws Exception {
>     ...
>     try {
>       for (int i = REPLICATION - 1; i > 0; i--) {
>         dnprops.add(cluster.stopDataNode(i));
>       }
>       ...
>     } finally {
>       for (MiniDFSCluster.DataNodeProperties dnprop : dnprops) {
>         cluster.restartDataNode(dnprop, true);
>       }
>       cluster.waitActive();
>     }
>   }
> {code}
> I think we can make a simple fix to {{cluster.restartDataNode(dnprop, 
> true);}}. The tests in {{TestDiskspaceQuotaUpdate}} only care that the 
> cluster is up and running, so I think this change will not affect the 
> current logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8410) Add computation time metrics to datanode for ECWorker

2016-10-19 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590635#comment-15590635
 ] 

Andrew Wang commented on HDFS-8410:
---

Good point. Do you have a sense of the order of magnitude of these values? Most 
of the other DatanodeMetrics use milliseconds, so that'd be good for 
consistency.

> Add computation time metrics to datanode for ECWorker
> -
>
> Key: HDFS-8410
> URL: https://issues.apache.org/jira/browse/HDFS-8410
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: SammiChen
> Attachments: HDFS-8410-001.patch, HDFS-8410-002.patch, 
> HDFS-8410-003.patch, HDFS-8410-004.patch
>
>
> This is a sub task of HDFS-7674. It adds time metric for ec decode work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10905) Refactor DataStreamer#createBlockOutputStream

2016-10-19 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590595#comment-15590595
 ] 

Yuanbo Liu commented on HDFS-10905:
---

I have a little concern about the test cases. Jenkins only ran 
hadoop-hdfs-client's test cases, which does not seem to be enough to cover the 
code change.

> Refactor DataStreamer#createBlockOutputStream
> -
>
> Key: HDFS-10905
> URL: https://issues.apache.org/jira/browse/HDFS-10905
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HDFS-10905.001.patch, HDFS-10905.002.patch
>
>
> DataStreamer#createBlockOutputStream and DataStreamer#transfer shared much 
> boilerplate code. HDFS-10609 refactored the transfer method into a 
> StreamerStreams class. The createBlockOutputStream method should reuse the 
> class to de-dup code and to improve code clarity.
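As a hedged illustration of the intended de-duplication, here is a generic, 
self-contained example of the extract-class pattern; the helper and method 
names below are hypothetical and do not reflect the actual StreamerStreams API 
from HDFS-10609:
{code}
import java.io.ByteArrayOutputStream;
import java.io.Closeable;
import java.io.DataOutputStream;
import java.io.IOException;

// Shared stream plumbing extracted into one small helper, so both a
// createBlockOutputStream-style and a transfer-style call site can reuse
// it instead of duplicating setup/teardown boilerplate.
final class SharedStreams implements Closeable {
  final DataOutputStream out;

  SharedStreams(ByteArrayOutputStream sink) {
    this.out = new DataOutputStream(sink);
  }

  @Override
  public void close() throws IOException {
    out.close();
  }
}

public class DedupSketch {
  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    try (SharedStreams s = new SharedStreams(sink)) {
      s.out.writeUTF("header");  // logic previously duplicated in both methods
    }
    System.out.println(sink.size() + " bytes written");
  }
}
{code}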



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8410) Add computation time metrics to datanode for ECWorker

2016-10-19 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590592#comment-15590592
 ] 

SammiChen commented on HDFS-8410:
-

Hi Andrew, thanks for the comments. Yes, it's better to use "final" for the new 
variable. Regarding the {{Time.monotonicNow()}} function, it seems this 
function returns a millisecond value instead of a nanosecond value. Given that 
this metric is called "Nanoseconds spent by decoding tasks", I'm not sure if 
you are suggesting changing this metric definition to milliseconds. I'd like 
to know your opinion. 
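For context, a minimal sketch contrasting the two clock sources under 
discussion (assuming Hadoop's org.apache.hadoop.util.Time utility):
{code}
import org.apache.hadoop.util.Time;

public class ClockSketch {
  public static void main(String[] args) {
    // Hadoop's monotonic clock reports milliseconds.
    long ms = Time.monotonicNow();
    // The JDK's monotonic clock reports nanoseconds, matching a metric
    // defined as "Nanoseconds spent by decoding tasks" without conversion.
    long ns = System.nanoTime();
    System.out.println(ms + " ms, " + ns + " ns");
  }
}
{code}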

> Add computation time metrics to datanode for ECWorker
> -
>
> Key: HDFS-8410
> URL: https://issues.apache.org/jira/browse/HDFS-8410
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: SammiChen
> Attachments: HDFS-8410-001.patch, HDFS-8410-002.patch, 
> HDFS-8410-003.patch, HDFS-8410-004.patch
>
>
> This is a sub task of HDFS-7674. It adds time metric for ec decode work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9820) Improve distcp to support efficient restore to an earlier snapshot

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15590514#comment-15590514
 ] 

Hadoop QA commented on HDFS-9820:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 3s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
27 new + 208 unchanged - 12 fixed = 235 total (was 220) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
34s{color} | {color:green} hadoop-distcp in the patch passed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HDFS-9820 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834294/HDFS-9820.branch-2.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5032699e3769 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / a3cbaf0 |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 

[jira] [Commented] (HDFS-9820) Improve distcp to support efficient restore to an earlier snapshot

2016-10-19 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590464#comment-15590464
 ] 

Yongjun Zhang commented on HDFS-9820:
-

Many thanks again [~andrew.wang], I have committed this to trunk.

I revised the branch-2 patch accordingly with the same change and uploaded 
HDFS-9820.branch-2.002.patch; would you please help take a look? 



> Improve distcp to support efficient restore to an earlier snapshot
> --
>
> Key: HDFS-9820
> URL: https://issues.apache.org/jira/browse/HDFS-9820
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp
>Affects Versions: 2.6.4
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-9820.001.patch, HDFS-9820.002.patch, 
> HDFS-9820.003.patch, HDFS-9820.004.patch, HDFS-9820.005.patch, 
> HDFS-9820.006.patch, HDFS-9820.007.patch, HDFS-9820.008.patch, 
> HDFS-9820.009.patch, HDFS-9820.branch-2.002.patch, HDFS-9820.branch-2.patch
>
>
> A common use scenario (scenario 1): 
> # create snapshot sx in clusterX, 
> # do some experiments in clusterX, which create some files. 
> # throw away the files changed and go back to sx.
> Another scenario (scenario 2) is, there is a production cluster and a backup 
> cluster, we periodically sync up the data from production cluster to the 
> backup cluster with distcp. 
> The cluster in scenario 1 could be the backup cluster in scenario 2.
> For scenario 1:
> HDFS-4167 intends to restore HDFS to the most recent snapshot, and there are 
> some complexities and challenges. Before that jira is implemented, we count on 
> distcp to copy from snapshot to the current state. However, the performance 
> of this operation could be very bad because we have to go through all files 
> even if we only changed a few files.
> For scenario 2:
> HDFS-7535 improved distcp performance by avoiding copying files that changed 
> name since last backup.
> On top of HDFS-7535, HDFS-8828 improved distcp performance when copying data 
> from source to target cluster, by only copying changed files since last 
> backup. The way it works is to use snapshot diff to find out all changed files, 
> and copy the changed files only.
> See 
> https://blog.cloudera.com/blog/2015/12/distcp-performance-improvements-in-apache-hadoop/
> This jira is to propose a variation of HDFS-8828, to find out the files 
> changed in target cluster since last snapshot sx, and copy these from 
> snapshot sx of either the source or the target cluster, to restore target 
> cluster's current state to sx. 
> Specifically,
> If a file/dir is
> - renamed, rename it back
> - created in target cluster, delete it
> - modified, put it on the copy list
> - run distcp with the copy list, copy from the source cluster's corresponding 
> snapshot
> This could be a new command line switch -rdiff in distcp.
> As a native restore feature, HDFS-4167 would still be ideal to have. However, 
>  HDFS-9820 would hopefully be easier to implement, before HDFS-4167 is in 
> place.
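
To make the proposed classification concrete, here is a minimal Java sketch of 
how the snapshot-diff entries between sx and the current state could be sorted 
into the actions listed above. {{SnapshotDiffReport}} and {{DiffReportEntry}} 
are the existing HDFS client API; the {{RestorePlan}} container and its method 
name are illustrative assumptions, not the patch's actual code.

{code}
// Illustrative sketch only: RestorePlan and from() are assumed names,
// not part of the HDFS-9820 patch. SnapshotDiffReport is the real client API.
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport.DiffReportEntry;

class RestorePlan {
  final List<DiffReportEntry> renamesToUndo = new ArrayList<>();   // rename back
  final List<DiffReportEntry> createsToDelete = new ArrayList<>(); // delete
  final List<DiffReportEntry> modifiedToCopy = new ArrayList<>();  // copy from sx

  /** Classify the diff between snapshot sx and the current state. */
  static RestorePlan from(SnapshotDiffReport diff) {
    RestorePlan plan = new RestorePlan();
    for (DiffReportEntry entry : diff.getDiffList()) {
      switch (entry.getType()) {
        case RENAME: plan.renamesToUndo.add(entry);   break;
        case CREATE: plan.createsToDelete.add(entry); break;
        case MODIFY: plan.modifiedToCopy.add(entry);  break;
        default:     break; // DELETE: restored by the copy from sx
      }
    }
    return plan;
  }
}
{code}

Presumably the renames would be undone first, so that the later delete and copy 
steps operate on the sx-era paths.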



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9820) Improve distcp to support efficient restore to an earlier snapshot

2016-10-19 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-9820:

Attachment: HDFS-9820.branch-2.002.patch

> Improve distcp to support efficient restore to an earlier snapshot
> --
>
> Key: HDFS-9820
> URL: https://issues.apache.org/jira/browse/HDFS-9820
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp
>Affects Versions: 2.6.4
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-9820.001.patch, HDFS-9820.002.patch, 
> HDFS-9820.003.patch, HDFS-9820.004.patch, HDFS-9820.005.patch, 
> HDFS-9820.006.patch, HDFS-9820.007.patch, HDFS-9820.008.patch, 
> HDFS-9820.009.patch, HDFS-9820.branch-2.002.patch, HDFS-9820.branch-2.patch
>
>
> A common use scenario (scenario 1): 
> # create snapshot sx in clusterX, 
> # do some experiments in clusterX, which create some files. 
> # throw away the files changed and go back to sx.
> Another scenario (scenario 2) is, there is a production cluster and a backup 
> cluster, we periodically sync up the data from production cluster to the 
> backup cluster with distcp. 
> The cluster in scenario 1 could be the backup cluster in scenario 2.
> For scenario 1:
> HDFS-4167 intends to restore HDFS to the most recent snapshot, and there are 
> some complexities and challenges. Before that jira is implemented, we count on 
> distcp to copy from snapshot to the current state. However, the performance 
> of this operation could be very bad because we have to go through all files 
> even if we only changed a few files.
> For scenario 2:
> HDFS-7535 improved distcp performance by avoiding copying files that changed 
> name since last backup.
> On top of HDFS-7535, HDFS-8828 improved distcp performance when copying data 
> from source to target cluster, by only copying changed files since last 
> backup. The way it works is to use snapshot diff to find out all changed files, 
> and copy the changed files only.
> See 
> https://blog.cloudera.com/blog/2015/12/distcp-performance-improvements-in-apache-hadoop/
> This jira is to propose a variation of HDFS-8828, to find out the files 
> changed in target cluster since last snapshot sx, and copy these from 
> snapshot sx of either the source or the target cluster, to restore target 
> cluster's current state to sx. 
> Specifically,
> If a file/dir is
> - renamed, rename it back
> - created in target cluster, delete it
> - modified, put it on the copy list
> - run distcp with the copy list, copy from the source cluster's corresponding 
> snapshot
> This could be a new command line switch -rdiff in distcp.
> As a native restore feature, HDFS-4167 would still be ideal to have. However, 
>  HDFS-9820 would hopefully be easier to implement, before HDFS-4167 is in 
> place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10976) Fsck should mark EC files explicitly

2016-10-19 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590457#comment-15590457
 ] 

Takanobu Asanuma commented on HDFS-10976:
-

+1 (non-binding). Thanks again.

> Fsck should mark EC files explicitly
> 
>
> Key: HDFS-10976
> URL: https://issues.apache.org/jira/browse/HDFS-10976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Attachments: HDFS-10976.001.patch, HDFS-10976.002.patch
>
>
> Currently fsck reports corrupt EC files as shown below: it does not 
> distinguish erasure-coded files from replicated files. In addition, it would 
> be nice to print out the EC policy of the corrupt EC file.
> {quote}
> /striped 
> /striped/corrupted 393216 bytes, 1 block(s): 
> /striped/corrupted: CORRUPT blockpool BP-1564681138-127.0.0.1-1475793860787 
> block blk_-9223372036854775792
>  Under replicated 
> BP-1564681138-127.0.0.1-1475793860787:blk_-9223372036854775792_1001. Target 
> Replicas is 9 but found 5 live replica(s), 0 decommissioned replica(s) and 0 
> decommissioning replica(s).
>  CORRUPT 1 blocks of total size 393216 B
> 0. BP-1564681138-127.0.0.1-1475793860787:blk_-9223372036854775792_1001 
> len=393216 Live_repl=5  
> [DatanodeInfoWithStorage[127.0.0.1:62192,DS-81dcbc38-755e-446e-a028-71a79e4de6d9,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:62180,DS-98fe193d-6342-4b2c-ad61-4586e2530b1e,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:62167,DS-53031f88-0c63-4839-ab18-efea2f1bb063,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:62162,DS-e8b418fd-165d-4d6f-886b-f75c21be096d,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:62176,DS-03e51584-5b33-4bb6-89b5-f519cda57429,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:62158,DS-ce9ca7b3-5b00-4351-8537-822eed532b46,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:62184,DS-668c500e-eb3d-4d40-b900-814076d5e160,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:62171,DS-763a3961-b214-4601-81c0-abdaecf539c4,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:62188,DS-d4ea6399-bead-452e-8dd5-bc8c5ebd4f45,DISK](LIVE)]
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10975) fsck -list-corruptfileblocks does not report corrupt EC files

2016-10-19 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590450#comment-15590450
 ] 

Takanobu Asanuma commented on HDFS-10975:
-

I agree that it is counter-intuitive. But as [~andrew.wang] commented 
[here|https://issues.apache.org/jira/browse/HDFS-10999?focusedCommentId=15590122],
 I also think this specification is required. It would be good if we added 
more documentation about fsck.

Thanks for the verification, but I could not reproduce it. The latest patch 
includes the test in {{testFsckCorruptECFile}}. Could you check it again?

> fsck -list-corruptfileblocks does not report corrupt EC files
> -
>
> Key: HDFS-10975
> URL: https://issues.apache.org/jira/browse/HDFS-10975
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Takanobu Asanuma
> Attachments: HDFS-10975.1.patch
>
>
> HDFS-10826 fixed fsck for corrupt EC files if no parameters are specified.
> However, if I change the test case added in HDFS-10826 
> (TestFsck#testFsckCorruptECFile) to run "fsck -list-corruptfileblocks", the 
> same test failed because fsck reports no corrupt files. 
> Interestingly, if I run "fsck -files -blocks -replicaDetails" then the test 
> passed and shows the corrupt file.
> Need to fix the discrepancy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9820) Improve distcp to support efficient restore to an earlier snapshot

2016-10-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590412#comment-15590412
 ] 

Hudson commented on HDFS-9820:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10640 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10640/])
HDFS-9820. Improve distcp to support efficient restore to an earlier (yzhang: 
rev 8650cc84f20e7d8c32dcdcd91c94372d476e2276)
* (add) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSyncReverseFromSource.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DiffInfo.java
* (add) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSyncReverseBase.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/OptionsParser.java
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSync.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
* (add) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSyncReverseFromTarget.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java


> Improve distcp to support efficient restore to an earlier snapshot
> --
>
> Key: HDFS-9820
> URL: https://issues.apache.org/jira/browse/HDFS-9820
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp
>Affects Versions: 2.6.4
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-9820.001.patch, HDFS-9820.002.patch, 
> HDFS-9820.003.patch, HDFS-9820.004.patch, HDFS-9820.005.patch, 
> HDFS-9820.006.patch, HDFS-9820.007.patch, HDFS-9820.008.patch, 
> HDFS-9820.009.patch, HDFS-9820.branch-2.patch
>
>
> A common use scenario (scenario 1): 
> # create snapshot sx in clusterX, 
> # do some experiments in clusterX, which create some files. 
> # throw away the files changed and go back to sx.
> Another scenario (scenario 2) is, there is a production cluster and a backup 
> cluster, we periodically sync up the data from production cluster to the 
> backup cluster with distcp. 
> The cluster in scenario 1 could be the backup cluster in scenario 2.
> For scenario 1:
> HDFS-4167 intends to restore HDFS to the most recent snapshot, and there are 
> some complexities and challenges. Before that jira is implemented, we count on 
> distcp to copy from snapshot to the current state. However, the performance 
> of this operation could be very bad because we have to go through all files 
> even if we only changed a few files.
> For scenario 2:
> HDFS-7535 improved distcp performance by avoiding copying files that changed 
> name since last backup.
> On top of HDFS-7535, HDFS-8828 improved distcp performance when copying data 
> from source to target cluster, by only copying changed files since last 
> backup. The way it works is to use snapshot diff to find out all changed files, 
> and copy the changed files only.
> See 
> https://blog.cloudera.com/blog/2015/12/distcp-performance-improvements-in-apache-hadoop/
> This jira is to propose a variation of HDFS-8828, to find out the files 
> changed in target cluster since last snapshot sx, and copy these from 
> snapshot sx of either the source or the target cluster, to restore target 
> cluster's current state to sx. 
> Specifically,
> If a file/dir is
> - renamed, rename it back
> - created in target cluster, delete it
> - modified, put it on the copy list
> - run distcp with the copy list, copy from the source cluster's corresponding 
> snapshot
> This could be a new command line switch -rdiff in distcp.
> As a native restore feature, HDFS-4167 would still be ideal to have. However, 
>  HDFS-9820 would hopefully be easier to implement, before HDFS-4167 is in 
> place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11039) Expose more configuration properties to hdfs-default.xml

2016-10-19 Thread Yi Liu (JIRA)
Yi Liu created HDFS-11039:
-

 Summary: Expose more configuration properties to hdfs-default.xml
 Key: HDFS-11039
 URL: https://issues.apache.org/jira/browse/HDFS-11039
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation, newbie
Reporter: Yi Liu
Assignee: Jennica Pounds
Priority: Minor


There are some HDFS configuration properties that have not been exposed in 
hdfs-default.xml.

It would be convenient for Hadoop users and admins if we added them to 
hdfs-default.xml.
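
For reference, each property exposed this way would follow the standard 
hdfs-default.xml entry layout; the key and value below are purely illustrative 
placeholders, not one of the specific properties this JIRA targets:

{code:xml}
<property>
  <name>dfs.example.some.property</name>
  <value>default-value</value>
  <description>
    What the property controls and when an admin might change it, so it can
    be discovered without reading the source code.
  </description>
</property>
{code}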



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10752) Several log refactoring/improvement suggestion in HDFS

2016-10-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590371#comment-15590371
 ] 

Hudson commented on HDFS-10752:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10639 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10639/])
HDFS-10752. Several log refactoring/improvement suggestion in HDFS. (arp: rev 
b4564103e4709caa1135f6ccc2864d90e54f2ac9)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtxCache.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java


> Several log refactoring/improvement suggestion in HDFS
> --
>
> Key: HDFS-10752
> URL: https://issues.apache.org/jira/browse/HDFS-10752
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Nemo Chen
>Assignee: Hanisha Koneru
>  Labels: easyfix, easytest
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10752.000.patch
>
>
> As per a conversation with [~vrushalic], we merged HDFS-10749, HDFS-10750, 
> HDFS-10751, and HDFS-10753 under this issue.
> 
> HDFS-10749
> *Method invocation in logs can be replaced by variable*
> Similar to the fix for HDFS-409. In file:
> hadoop-rel-release-2.7.2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
> In code block:
> {code:borderStyle=solid}
> lastQueuedSeqno = currentPacket.getSeqno();
> if (DFSClient.LOG.isDebugEnabled()) {
> DFSClient.LOG.debug("Queued packet " + currentPacket.getSeqno());
> }
> {code}
> currentPacket.getSeqno() is better replaced by the variable lastQueuedSeqno.
> 
> HDFS-10750
> Similar to the fix for AVRO-115. In file:
> hadoop-rel-release-2.7.2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
> in line 695, the logging code:
> {code:borderStyle=solid}
> LOG.info(getRole() + " RPC up at: " + rpcServer.getRpcAddress());
> {code}
> In the same class, there is a method in line 907:
> {code:borderStyle=solid}
>   /**
>* @return NameNode RPC address
>*/
>   public InetSocketAddress getNameNodeAddress() {
> return rpcServer.getRpcAddress();
>   }
> {code}
> We can tell that rpcServer.getRpcAddress() could be replaced by the method 
> getNameNodeAddress() for readability and simplicity.
> 
> HDFS-10751
> Similar to the fix for AVRO-115. In file:
> hadoop-rel-release-2.7.2/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtxCache.java
> in line 72, the logging code:
> {code:borderStyle=solid}
> LOG.trace("openFileMap size:" + openFileMap.size());
> {code}
> In the same class, there is a method in line 189:
> {code:borderStyle=solid}
>   int size() {
> return openFileMap.size();
>   }
> {code}
> We can tell that openFileMap.size() could be replaced by the method size() 
> for readability and simplicity.
> 
> *Print variable in byte*
> Similar to the fix for HBASE-623, in file:
> hadoop-rel-release-2.7.2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupImage.java
> In the following method, the log prints the variable data (a byte[]). A 
> possible fix is to add Bytes.toString(data).
> {code}
> /**
>* Write the batch of edits to the local copy of the edit logs.
>*/
>   private void logEditsLocally(long firstTxId, int numTxns, byte[] data) {
> long expectedTxId = editLog.getLastWrittenTxId() + 1;
> Preconditions.checkState(firstTxId == expectedTxId,
> "received txid batch starting at %s but expected txn %s",
> firstTxId, expectedTxId);
> editLog.setNextTxId(firstTxId + numTxns - 1);
> editLog.logEdit(data.length, data);
> editLog.logSync();
>   }
> {code}
> 
> 
> HDFS-10753
> *MethodInvocation replaced by variable due to toString method*
> Similar to the fix in HADOOP-6419, in file:
> hadoop-rel-release-2.7.2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java
> in line 76, the blk.getBlockName() method is invoked on the variable blk, 
> an instance of Block.
> {code}
> void addToCorruptReplicasMap(Block blk, DatanodeDescriptor dn,
>   String reason, Reason reasonCode) {
> ...
> NameNode.blockStateChangeLog.info(
>   "BLOCK NameSystem.addToCorruptReplicasMap: {} added as corrupt on "
>   + "{} by {} {}", blk.getBlockName(), dn, Server.getRemoteIp(),
>   reasonText);
> {code}
> In file: 
> 

[jira] [Commented] (HDFS-11018) Incorrect check and message in FsDatasetImpl#invalidate

2016-10-19 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590359#comment-15590359
 ] 

Yiqun Lin commented on HDFS-11018:
--

Thanks [~jojochuang] for the review. One point I want to mention: the test 
{{TestDiskspaceQuotaUpdate}} fails in this JIRA. I found the reason is a bind 
exception, and I have filed HDFS-11025 to make a quick fix. We have reached an 
agreement that a datanode coming back up on a different port across restarts 
will not influence the current logic. So one proposal: you could also commit 
that one after you have looked into it, :). Thanks.

> Incorrect check and message in FsDatasetImpl#invalidate
> ---
>
> Key: HDFS-11018
> URL: https://issues.apache.org/jira/browse/HDFS-11018
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Yiqun Lin
> Attachments: HDFS-11018.001.patch, HDFS-11018.002.patch, 
> HDFS-11018.003.patch
>
>
> The following error check and message are incorrect, because {{info}} is null 
> if (1) the block ID does not exist in the ReplicaMap or (2) the generation 
> stamp of the block does not match the replica entry in the ReplicaMap.
> {code:title=FsDatasetImpl#invalidate}
>final ReplicaInfo info = volumeMap.get(bpid, invalidBlks[i]);
> if (info == null) {
>   // It is okay if the block is not found -- it may be deleted 
> earlier.
>   LOG.info("Failed to delete replica " + invalidBlks[i]
>   + ": ReplicaInfo not found.");
>   continue;
> }
> if (info.getGenerationStamp() != invalidBlks[i].getGenerationStamp()) 
> {
>   errors.add("Failed to delete replica " + invalidBlks[i]
>   + ": GenerationStamp not matched, info=" + info);
>   continue;
> }
> {code}
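
A minimal sketch of one possible fix, which looks the replica up by block ID 
alone to tell the two cases apart. The {{ReplicaMap#get}} overloads used here 
exist, but the exact structure and wording of the committed patch may differ:

{code}
// Sketch only; the committed HDFS-11018 patch may differ.
final ReplicaInfo info = volumeMap.get(bpid, invalidBlks[i]);
if (info == null) {
  // Look up by block ID only to distinguish the two cases.
  ReplicaInfo infoByBlockId =
      volumeMap.get(bpid, invalidBlks[i].getBlockId());
  if (infoByBlockId == null) {
    // Case (1): the block ID is not in the ReplicaMap at all.
    // It is okay if the block is not found -- it may be deleted earlier.
    LOG.info("Failed to delete replica " + invalidBlks[i]
        + ": ReplicaInfo not found.");
  } else {
    // Case (2): the block exists but the generation stamp does not match.
    errors.add("Failed to delete replica " + invalidBlks[i]
        + ": GenerationStamp not matched, existing replica is "
        + infoByBlockId);
  }
  continue;
}
{code}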



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11036) Ozone : reuse Xceiver connection

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590349#comment-15590349
 ] 

Hadoop QA commented on HDFS-11036:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
39s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 77 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11036 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834280/HDFS-11036-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6f8e8c6351df 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / c70775a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17229/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17229/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17229/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17229/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (HDFS-10752) Several log refactoring/improvement suggestion in HDFS

2016-10-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10752:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed for 2.8.0. Thanks for the contribution [~hanishakoneru]!

> Several log refactoring/improvement suggestion in HDFS
> --
>
> Key: HDFS-10752
> URL: https://issues.apache.org/jira/browse/HDFS-10752
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Nemo Chen
>Assignee: Hanisha Koneru
>  Labels: easyfix, easytest
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10752.000.patch
>
>
> As per a conversation with [~vrushalic], we merged HDFS-10749, HDFS-10750, 
> HDFS-10751, and HDFS-10753 under this issue.
> 
> HDFS-10749
> *Method invocation in logs can be replaced by variable*
> Similar to the fix for HDFS-409. In file:
> hadoop-rel-release-2.7.2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
> In code block:
> {code:borderStyle=solid}
> lastQueuedSeqno = currentPacket.getSeqno();
> if (DFSClient.LOG.isDebugEnabled()) {
> DFSClient.LOG.debug("Queued packet " + currentPacket.getSeqno());
> }
> {code}
> currentPacket.getSeqno() is better replaced by the variable lastQueuedSeqno.
> 
> HDFS-10750
> Similar to the fix for AVRO-115. In file:
> hadoop-rel-release-2.7.2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
> in line 695, the logging code:
> {code:borderStyle=solid}
> LOG.info(getRole() + " RPC up at: " + rpcServer.getRpcAddress());
> {code}
> In the same class, there is a method in line 907:
> {code:borderStyle=solid}
>   /**
>* @return NameNode RPC address
>*/
>   public InetSocketAddress getNameNodeAddress() {
> return rpcServer.getRpcAddress();
>   }
> {code}
> We can tell that rpcServer.getRpcAddress() could be replaced by the method 
> getNameNodeAddress() for readability and simplicity.
> 
> HDFS-10751
> Similar to the fix for AVRO-115. In file:
> hadoop-rel-release-2.7.2/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtxCache.java
> in line 72, the logging code:
> {code:borderStyle=solid}
> LOG.trace("openFileMap size:" + openFileMap.size());
> {code}
> In the same class, there is a method in line 189:
> {code:borderStyle=solid}
>   int size() {
> return openFileMap.size();
>   }
> {code}
> We can tell that openFileMap.size() could be replaced by the method size() 
> for readability and simplicity.
> 
> *Print variable in byte*
> Similar to the fix for HBASE-623, in file:
> hadoop-rel-release-2.7.2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupImage.java
> In the following method, the log prints the variable data (a byte[]). A 
> possible fix is to add Bytes.toString(data).
> {code}
> /**
>* Write the batch of edits to the local copy of the edit logs.
>*/
>   private void logEditsLocally(long firstTxId, int numTxns, byte[] data) {
> long expectedTxId = editLog.getLastWrittenTxId() + 1;
> Preconditions.checkState(firstTxId == expectedTxId,
> "received txid batch starting at %s but expected txn %s",
> firstTxId, expectedTxId);
> editLog.setNextTxId(firstTxId + numTxns - 1);
> editLog.logEdit(data.length, data);
> editLog.logSync();
>   }
> {code}
> 
> 
> HDFS-10753
> *MethodInvocation replaced by variable due to toString method*
> Similar to the fix in HADOOP-6419, in file:
> hadoop-rel-release-2.7.2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/CorruptReplicasMap.java
> in line 76, the blk.getBlockName() method is invoked on the variable blk, 
> an instance of Block.
> {code}
> void addToCorruptReplicasMap(Block blk, DatanodeDescriptor dn,
>   String reason, Reason reasonCode) {
> ...
> NameNode.blockStateChangeLog.info(
>   "BLOCK NameSystem.addToCorruptReplicasMap: {} added as corrupt on "
>   + "{} by {} {}", blk.getBlockName(), dn, Server.getRemoteIp(),
>   reasonText);
> {code}
> In file: 
> hadoop-rel-release-2.7.2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/Block.java
> {code}
>   @Override
>   public String toString() {
> return getBlockName() + "_" + getGenerationStamp();
>   }
> {code}
> The toString() method contains not only getBlockName() but also 
> getGenerationStamp(), which may be helpful for debugging purposes. Therefore 
> blk.getBlockName() can be replaced by blk.
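
Taken together, the four suggestions reduce to reusing values that are already 
available; a sketch of the fixed call sites (illustrative fragments, not the 
committed patch):

{code}
// Illustrative fragments of the suggested fixes; not the committed patch.

// HDFS-10749: reuse the already-assigned variable.
lastQueuedSeqno = currentPacket.getSeqno();
if (DFSClient.LOG.isDebugEnabled()) {
  DFSClient.LOG.debug("Queued packet " + lastQueuedSeqno);
}

// HDFS-10750: use the existing accessor instead of rpcServer.getRpcAddress().
LOG.info(getRole() + " RPC up at: " + getNameNodeAddress());

// HDFS-10751: use the existing size() helper.
LOG.trace("openFileMap size:" + size());

// HDFS-10753: pass blk so Block#toString() also prints the generation stamp.
NameNode.blockStateChangeLog.info(
    "BLOCK NameSystem.addToCorruptReplicasMap: {} added as corrupt on "
    + "{} by {} {}", blk, dn, Server.getRemoteIp(), reasonText);
{code}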



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-11011) Add unit tests for HDFS command 'dfsadmin -set/clrSpaceQuota'

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590323#comment-15590323
 ] 

Hadoop QA commented on HDFS-11011:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 9 new + 38 unchanged - 1 fixed = 47 total (was 39) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11011 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834269/HDFS-11011.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 36f17a36a154 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e9c4616 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17226/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17226/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17226/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17226/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add unit tests for HDFS command 'dfsadmin -set/clrSpaceQuota'
> -
>
> Key: 

[jira] [Commented] (HDFS-9820) Improve distcp to support efficient restore to an earlier snapshot

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590280#comment-15590280
 ] 

Hadoop QA commented on HDFS-9820:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
24 new + 174 unchanged - 12 fixed = 198 total (was 186) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m  
3s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-9820 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834276/HDFS-9820.009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5580be8829de 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e9c4616 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17228/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17228/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17228/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve distcp to support efficient restore to an earlier snapshot
> --
>
> Key: HDFS-9820
> URL: https://issues.apache.org/jira/browse/HDFS-9820
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp
>Affects Versions: 2.6.4
>Reporter: Yongjun Zhang
>

[jira] [Updated] (HDFS-11036) Ozone : reuse Xceiver connection

2016-10-19 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11036:
--
Attachment: HDFS-11036-HDFS-7240.003.patch

v003 fixes the checkstyle warning; the rest of the warnings all come from the 
protobuf-generated class.

> Ozone : reuse Xceiver connection
> 
>
> Key: HDFS-11036
> URL: https://issues.apache.org/jira/browse/HDFS-11036
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11036-HDFS-7240.001.patch, 
> HDFS-11036-HDFS-7240.002.patch, HDFS-11036-HDFS-7240.003.patch
>
>
> Currently every IO operation calling into XceiverClientManager will open and 
> close a connection; this JIRA proposes to reuse connections to reduce 
> connection setup/shutdown overhead.
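
A minimal sketch of the reuse idea, with a cache keyed by pipeline; the class 
and method names below are illustrative assumptions, not the actual 
XceiverClientManager API in the patch:

{code}
// Illustrative sketch of connection reuse; names are assumptions,
// not the actual Ozone XceiverClientManager API.
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;

class ReusingClientManager<C extends Closeable> {
  interface Factory<T> { T open(String pipelineKey); }

  private final ConcurrentHashMap<String, C> cache = new ConcurrentHashMap<>();
  private final Factory<C> factory;

  ReusingClientManager(Factory<C> factory) { this.factory = factory; }

  /** Open the connection at most once per pipeline; later IO ops reuse it. */
  C acquire(String pipelineKey) {
    return cache.computeIfAbsent(pipelineKey, factory::open);
  }

  /** Connections are torn down at shutdown, not after every operation. */
  void shutdown() throws IOException {
    for (C client : cache.values()) {
      client.close();
    }
    cache.clear();
  }
}
{code}

A real implementation would also need reference counting or eviction so that a 
cached connection is not closed while an operation is still in flight.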



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6984) In Hadoop 3, make FileStatus serialize itself via protobuf

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590249#comment-15590249
 ] 

Hadoop QA commented on HDFS-6984:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
6s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-6984 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834247/HDFS-6984.003.patch |
| Optional Tests |  asflicense  xml  compile  javac  javadoc  mvninstall  
mvnsite  unit  findbugs  checkstyle  cc  |
| uname | Linux a100249d6bf2 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e9c4616 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 

[jira] [Commented] (HDFS-10885) [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier is on

2016-10-19 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590227#comment-15590227
 ] 

Uma Maheswara Rao G commented on HDFS-10885:


Hi [~zhouwei], thank you for working on this task.

I think depending on a config option really does not work here, because the 
Mover can run from any process whose config items can differ from the 
Namenode's. So the Mover may have this item disabled in its configs while at 
the NN it is enabled and running.
I think this is a little tricky to handle, but the following is the idea that 
comes to mind for now.

How about we use the mover ID file to communicate this? Right now the Mover 
depends on that file: if the file exists, it will not allow another Mover to 
run. So we may need to treat it as a reserved path in the NN and rely on that 
file inode's existence. When the SPS is running, it can set an XAttr on that 
file to indicate so; then, when the file already exists and the XAttr says 
SPS, the Mover can log that info to let the user know. This is just an 
initial thought; other suggestions are most welcome.
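
A minimal sketch of the Mover-side check under this idea; the reserved path 
and the XAttr name below are assumptions for illustration (only 
{{FileSystem#exists}} and {{FileSystem#getXAttr}} are existing APIs):

{code}
// Sketch only: MOVER_ID_PATH and the XAttr name are assumed, not real constants.
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class MoverStartupCheck {
  static final Path MOVER_ID_PATH = new Path("/system/mover.id"); // assumed
  static final String SPS_RUNNING_XATTR = "user.sps.running";     // assumed

  /** Returns true only if neither another Mover nor the SPS holds the ID file. */
  static boolean canStartMover(FileSystem fs) throws IOException {
    if (!fs.exists(MOVER_ID_PATH)) {
      return true;
    }
    byte[] marker = null;
    try {
      marker = fs.getXAttr(MOVER_ID_PATH, SPS_RUNNING_XATTR);
    } catch (IOException e) {
      // XAttr not set: the file belongs to another Mover instance.
    }
    if (marker != null) {
      System.err.println("Storage Policy Satisfier is running; "
          + "stop it before starting the Mover.");
    } else {
      System.err.println("Another Mover instance appears to be running.");
    }
    return false;
  }
}
{code}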



> [SPS]: Mover tool should not be allowed to run when Storage Policy Satisfier 
> is on
> --
>
> Key: HDFS-10885
> URL: https://issues.apache.org/jira/browse/HDFS-10885
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Wei Zhou
>Assignee: Wei Zhou
> Fix For: HDFS-10285
>
> Attachments: HDFS-10800-HDFS-10885-00.patch, 
> HDFS-10800-HDFS-10885-01.patch, HDFS-10800-HDFS-10885-02.patch, 
> HDFS-10885-HDFS-10285.03.patch, HDFS-10885-HDFS-10285.04.patch
>
>
> These two cannot run at the same time; otherwise they would conflict and 
> fight with each other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11036) Ozone : reuse Xceiver connection

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590220#comment-15590220
 ] 

Hadoop QA commented on HDFS-11036:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
20s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
34s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 77 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 3 new + 2 unchanged - 0 fixed = 5 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11036 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834270/HDFS-11036-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 75a33dcb929c 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / c70775a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17227/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17227/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17227/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17227/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-11027) libhdfs++: Don't retry if there is an authentication failure

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590216#comment-15590216
 ] 

Hadoop QA commented on HDFS-11027:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
37s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
0s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
59s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
55s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
55s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 46s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_111 Failed CTEST tests | 
test_libhdfs_threaded_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:78fc6b6 |
| JIRA Issue | HDFS-11027 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834261/HDFS-11027.HDFS-8707.000.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux c2b0452a8f27 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 4d33cb5 |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17225/artifact/patchprocess/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_111-ctest.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17225/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_111.txt
 |
| JDK v1.7.0_111  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17225/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17225/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (HDFS-9820) Improve distcp to support efficient restore to an earlier snapshot

2016-10-19 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-9820:

Attachment: HDFS-9820.009.patch

Thanks [~andrew.wang].

New rev 009 to address the review comments.


> Improve distcp to support efficient restore to an earlier snapshot
> --
>
> Key: HDFS-9820
> URL: https://issues.apache.org/jira/browse/HDFS-9820
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp
>Affects Versions: 2.6.4
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-9820.001.patch, HDFS-9820.002.patch, 
> HDFS-9820.003.patch, HDFS-9820.004.patch, HDFS-9820.005.patch, 
> HDFS-9820.006.patch, HDFS-9820.007.patch, HDFS-9820.008.patch, 
> HDFS-9820.009.patch, HDFS-9820.branch-2.patch
>
>
> A common use scenario (scenario 1): 
> # create snapshot sx in clusterX, 
> # do some experiments in clusterX, which creates some files, 
> # throw away the changed files and go back to sx.
> Another scenario (scenario 2) is that there is a production cluster and a 
> backup cluster, and we periodically sync up the data from the production 
> cluster to the backup cluster with distcp. 
> The cluster in scenario 1 could be the backup cluster in scenario 2.
> For scenario 1:
> HDFS-4167 intends to restore HDFS to the most recent snapshot, and there are 
> some complexity and challenges.  Before that jira is implemented, we count on 
> distcp to copy from snapshot to the current state. However, the performance 
> of this operation could be very bad because we have to go through all files 
> even if we only changed a few files.
> For scenario 2:
> HDFS-7535 improved distcp performance by avoiding copying files that changed 
> name since last backup.
> On top of HDFS-7535, HDFS-8828 improved distcp performance when copying data 
> from source to target cluster, by only copying changed files since last 
> backup. The way it works is to use snapshot diff to find out all files changed, 
> and copy the changed files only.
> See 
> https://blog.cloudera.com/blog/2015/12/distcp-performance-improvements-in-apache-hadoop/
> This jira is to propose a variation of HDFS-8828, to find out the files 
> changed in target cluster since last snapshot sx, and copy these from 
> snapshot sx of either the source or the target cluster, to restore target 
> cluster's current state to sx. 
> Specifically,
> If a file/dir is
> - renamed, rename it back
> - created in target cluster, delete it
> - modified, put it to the copy list
> - run distcp with the copy list, copy from the source cluster's corresponding 
> snapshot
> This could be a new command line switch -rdiff in distcp.
> As a native restore feature, HDFS-4167 would still be ideal to have. However, 
>  HDFS-9820 would hopefully be easier to implement, before HDFS-4167 is in 
> place.
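
To make the restore logic above concrete, here is a rough pseudo-Java sketch of the proposed -rdiff decision flow. The helper names ({{renameBack}}, {{deleteFromTarget}}, {{toPath}}, {{runDistCp}}) are hypothetical, not the actual DistCp patch; the snapshot diff API shown is the standard {{DistributedFileSystem#getSnapshotDiffReport}}.

{code}
// Rough sketch only (hypothetical helpers, not the actual patch). The diff is
// taken on the target cluster between snapshot sx and its current state ("").
List<Path> copyList = new ArrayList<>();
for (SnapshotDiffReport.DiffReportEntry entry :
    targetFs.getSnapshotDiffReport(dir, "sx", "").getDiffList()) {
  switch (entry.getType()) {
  case RENAME:
    renameBack(entry);             // undo renames done after sx
    break;
  case CREATE:
    deleteFromTarget(entry);       // drop files created after sx
    break;
  case MODIFY:
    copyList.add(toPath(entry));   // re-copy these from snapshot sx
    break;
  default:
    break;
  }
}
runDistCp(copyList, ".snapshot/sx");  // copy from the sx snapshot
{code}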



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10943) rollEditLog expects empty EditsDoubleBuffer.bufCurrent which is not guaranteed

2016-10-19 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590210#comment-15590210
 ] 

Yongjun Zhang commented on HDFS-10943:
--

Thanks a lot [~daryn]!

Sorry, one follow-up question at the end of this comment:

About

Essentially the roll needs to be an atomic: 
# log segment end, 
# close segment, 
# open new segment, 
# log segment start. 

Steps #1 and #2 above should have flushed all edits in the buffer before closing 
the segment, but it seems we were not doing that in 
{{FSEditLog#endCurrentSegment}} before HDFS-7964 added the following call:
{code}
// always sync to ensure all edits are flushed.
logSyncAll();  <
{code}

My problem is that some edits are not flushed to the JNs when the edit log is 
being rolled, and the release that has this problem doesn't have the fix from 
HDFS-7964. Though HDFS-7964 is reported as an enhancement to support async 
edit logging, as a side effect it also likely fixes my problem. 
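
For readers following along, the ordering that the call enforces is roughly the following (a simplified sketch with abbreviated names, not the actual FSEditLog source):

{code}
// Simplified sketch (not the actual FSEditLog source; names abbreviated) of
// the segment-end ordering once the HDFS-7964 call is in place:
synchronized void endCurrentLogSegment() throws IOException {
  logSyncAll();              // flush every buffered edit to the journals first
  logEdit(endLogSegmentOp);  // record OP_END_LOG_SEGMENT
  logSync();                 // push the end-segment edit out as well
  journalSet.finalizeLogSegment(curSegmentTxId, lastWrittenTxId);
  // bufCurrent is empty at this point, so EditsDoubleBuffer.close() succeeds.
}
{code}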

{quote}
The answer is track down why the fsn lock is not being held.
{quote}
Are you saying that, even without the HDFS-7964 fix, the fsn lock should have 
been held, so we wouldn't need to call {{logSyncAll()}} in 
{{FSEditLog#endCurrentSegment}}? That is, you suspect there is some place where 
the fsn lock was not held (as it's supposed to be), rather than the missing 
{{logSyncAll()}} call being the cause?

Thanks.


> rollEditLog expects empty EditsDoubleBuffer.bufCurrent which is not guaranteed
> --
>
> Key: HDFS-10943
> URL: https://issues.apache.org/jira/browse/HDFS-10943
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>
> Per the following trace stack:
> {code}
> FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: finalize log 
> segment 10562075963, 10562174157 failed for required journal 
> (JournalAndStream(mgr=QJM to [0.0.0.1:8485, 0.0.0.2:8485, 0.0.0.3:8485, 
> 0.0.0.4:8485, 0.0.0.5:8485], stream=QuorumOutputStream starting at txid 
> 10562075963))
> java.io.IOException: FSEditStream has 49708 bytes still to be flushed and 
> cannot be closed.
> at 
> org.apache.hadoop.hdfs.server.namenode.EditsDoubleBuffer.close(EditsDoubleBuffer.java:66)
> at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.close(QuorumOutputStream.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalAndStream.closeStream(JournalSet.java:115)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$4.apply(JournalSet.java:235)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.finalizeLogSegment(JournalSet.java:231)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1243)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1243)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:6437)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1002)
> at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:142)
> at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12025)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> 2016-09-23 21:40:59,618 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Aborting 
> QuorumOutputStream starting at txid 10562075963
> {code}
> The exception is from  EditsDoubleBuffer
> {code}
>  public void close() throws IOException {
> Preconditions.checkNotNull(bufCurrent);
> Preconditions.checkNotNull(bufReady);
> int bufSize = bufCurrent.size();
> if (bufSize != 0) {
>   throw new IOException("FSEditStream has " + bufSize
>   + " bytes still to be flushed and cannot be closed.");
> }
> IOUtils.cleanup(null, bufCurrent, bufReady);
> bufCurrent = bufReady = null;
>   }
> {code}

[jira] [Updated] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer

2016-10-19 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11038:
-
Fix Version/s: (was: 2.9.0)

> DiskBalancer: support running multiple commands under one setup of disk 
> balancer
> 
>
> Key: HDFS-11038
> URL: https://issues.apache.org/jira/browse/HDFS-11038
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> Disk balancer follows/reuses one rule designed by HDFS balancer, that is, 
> only one instance is allowed to run at the same time. This is correct in a 
> production system to avoid any inconsistencies, but it's not ideal for 
> writing and running unit tests. For example, it should be possible to run the 
> plan, execute, and scan commands under one setup of disk balancer. The 
> one-instance rule will throw an exception complaining 'Another instance is 
> running'. In such a case, there's no way to do a full life-cycle test which 
> involves a sequence of commands.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11037) DiskBalancer: redirect stdout/stderr stream for easy tests

2016-10-19 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11037:
-
Fix Version/s: (was: 2.9.0)

> DiskBalancer: redirect stdout/stderr stream for easy tests
> --
>
> Key: HDFS-11037
> URL: https://issues.apache.org/jira/browse/HDFS-11037
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> Currently, the disk balancer command maintains a PrintStream so that it is 
> easy to buffer any outputs for test verification. This is not a clean 
> approach, as we might also need to add other print streams for inputs, 
> outputs, and errors. A better way is to use System.setErr(), System.setIn(), 
> or System.setOut() to do stream redirection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11019) Inconsistent number of corrupt replicas if a corrupt replica is reported multiple times

2016-10-19 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590180#comment-15590180
 ] 

Wei-Chiu Chuang commented on HDFS-11019:


Thanks [~kshukla] for that information! It does look like HDFS-9958 is in play 
here. Looks like you fixed two bugs with one patch! Unfortunately we have it 
backported in CDH5.7.4 but not in CDH5.7.2.

> Inconsistent number of corrupt replicas if a corrupt replica is reported 
> multiple times
> ---
>
> Key: HDFS-11019
> URL: https://issues.apache.org/jira/browse/HDFS-11019
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
> Environment: CDH5.7.2 
>Reporter: Wei-Chiu Chuang
> Attachments: HDFS-11019.test.patch
>
>
> While investigating a block corruption issue, I found the following warning 
> message in the namenode log:
> {noformat}
> (a client reports a block replica is corrupt)
> 2016-10-12 10:07:37,166 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1073803461 added as corrupt on 
> 10.0.0.63:50010 by /10.0.0.62  because client machine reported it
> 2016-10-12 10:07:37,166 INFO BlockStateChange: BLOCK* invalidateBlock: 
> blk_1073803461_74513(stored=blk_1073803461_74553) on 10.0.0.63:50010
> 2016-10-12 10:07:37,166 INFO BlockStateChange: BLOCK* InvalidateBlocks: add 
> blk_1073803461_74513 to 10.0.0.63:50010
> (another client reports a block replica is corrupt)
> 2016-10-12 10:07:37,728 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1073803461 added as corrupt on 
> 10.0.0.63:50010 by /10.0.0.64  because client machine reported it
> 2016-10-12 10:07:37,728 INFO BlockStateChange: BLOCK* invalidateBlock: 
> blk_1073803461_74513(stored=blk_1073803461_74553) on 10.0.0.63:50010
> (ReplicationMonitor thread kicks in to invalidate the replica and add a new 
> one)
> 2016-10-12 10:07:37,888 INFO BlockStateChange: BLOCK* ask 10.0.0.56:50010 to 
> replicate blk_1073803461_74553 to datanode(s) 10.0.0.63:50010
> 2016-10-12 10:07:37,888 INFO BlockStateChange: BLOCK* BlockManager: ask 
> 10.0.0.63:50010 to delete [blk_1073803461_74513]
> (the two maps are inconsistent)
> 2016-10-12 10:08:00,335 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Inconsistent 
> number of corrupt replicas for blk_1073803461_74553 blockMap has 0 but 
> corrupt replicas map has 1
> {noformat}
> It seems that when a corrupt block replica is reported twice, the blocksMap 
> and the corrupt replicas map become inconsistent.
> Looking at the log, I suspect the bug is in 
> {{BlockManager#removeStoredBlock}}. When a corrupt replica is reported, 
> BlockManager removes the block from blocksMap. If the block has already been 
> removed (that is, the corrupt replica is reported twice), it returns early; 
> otherwise (that is, the corrupt replica is reported for the first time), it 
> removes the block from corruptReplicasMap (the block is added to 
> corruptReplicasMap in BlockManager#markBlockAsCorrupt). Therefore, after the 
> second corruption report, the corrupt replica is removed from blocksMap, but 
> the entry in corruptReplicasMap is not removed.
> I can't tell what the impact of this inconsistency is, but I feel it's a 
> good idea to fix it.
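
If that analysis holds, a fix would look roughly like this: a hedged sketch based on the description above, with approximate method names; this is not the committed patch.

{code}
// Sketch only: in BlockManager#removeStoredBlock, clean up corruptReplicasMap
// even when the block has already been removed from blocksMap by an earlier
// corruption report, so the two maps cannot drift apart.
if (!blocksMap.removeNode(storedBlock, node)) {
  // Duplicate report: the blocksMap entry is already gone, but the stale
  // corruptReplicasMap entry still needs to be dropped.
  corruptReplicas.removeFromCorruptReplicasMap(storedBlock);
  return;
}
corruptReplicas.removeFromCorruptReplicasMap(storedBlock);
{code}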



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11036) Ozone : reuse Xceiver connection

2016-10-19 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590168#comment-15590168
 ] 

Chen Liang commented on HDFS-11036:
---

[~cnauroth], do you mind taking a look at the patch when you have time? Thanks!

> Ozone : reuse Xceiver connection
> 
>
> Key: HDFS-11036
> URL: https://issues.apache.org/jira/browse/HDFS-11036
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11036-HDFS-7240.001.patch, 
> HDFS-11036-HDFS-7240.002.patch
>
>
> Currently for every IO operation calling into XceiverClientManager will 
> open/close a connection, this JIRA proposes to reuse connection to reduce 
> connection setup/shutdown overhead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11038) DiskBalancer: support running multiple commands under one setup of disk balancer

2016-10-19 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-11038:


 Summary: DiskBalancer: support running multiple commands under one 
setup of disk balancer
 Key: HDFS-11038
 URL: https://issues.apache.org/jira/browse/HDFS-11038
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou


Disk balancer follows/reuses one rule designed by HDFS balancer, that is, only 
one instance is allowed to run at the same time. This is correct in a production 
system to avoid any inconsistencies, but it's not ideal for writing and running 
unit tests. For example, it should be possible to run the plan, execute, and 
scan commands under one setup of disk balancer. The one-instance rule will throw 
an exception complaining 'Another instance is running'. In such a case, there's 
no way to do a full life-cycle test which involves a sequence of commands.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11036) Ozone : reuse Xceiver connection

2016-10-19 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11036:
--
Attachment: HDFS-11036-HDFS-7240.002.patch

Submitted a v002 patch to fix the findbugs and checkstyle warnings.

> Ozone : reuse Xceiver connection
> 
>
> Key: HDFS-11036
> URL: https://issues.apache.org/jira/browse/HDFS-11036
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11036-HDFS-7240.001.patch, 
> HDFS-11036-HDFS-7240.002.patch
>
>
> Currently for every IO operation calling into XceiverClientManager will 
> open/close a connection, this JIRA proposes to reuse connection to reduce 
> connection setup/shutdown overhead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10999) Use more generic "low redundancy" blocks instead of "under replicated" blocks

2016-10-19 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590122#comment-15590122
 ] 

Andrew Wang commented on HDFS-10999:


[~jojochuang] thanks for sharing that output. Allen mentioned that fsck is used 
as both a quick check and a rough measure of how much recovery work is 
ongoing. Assuming that "Missing internal blocks" goes up when 
"Under-erasure-coded groups" is non-zero, this seems workable.

> Use more generic "low redundancy" blocks instead of "under replicated" blocks
> -
>
> Key: HDFS-10999
> URL: https://issues.apache.org/jira/browse/HDFS-10999
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>  Labels: supportability
>
> Per HDFS-9857, it seems in the Hadoop 3 world, people prefer the more generic 
> term "low redundancy" to the old-fashioned "under replicated". But this term 
> is still being used in messages in several places, such as web ui, dfsadmin 
> and fsck. We should probably change them to avoid confusion.
> File this jira to discuss it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11037) DiskBalancer: redirect stdout/stderr stream for easy tests

2016-10-19 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-11037:


 Summary: DiskBalancer: redirect stdout/stderr stream for easy tests
 Key: HDFS-11037
 URL: https://issues.apache.org/jira/browse/HDFS-11037
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou


Currently, the disk balancer command maintains a PrintStream so that it is easy 
to buffer any outputs for test verification. This is not a clean approach, as we 
might also need to add other print streams for inputs, outputs, and errors. A 
better way is to use System.setErr(), System.setIn(), or System.setOut() to do 
stream redirection.
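
As a rough, self-contained illustration of that approach (hypothetical example code, not the DiskBalancer tests):

{code}
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

// Minimal illustration of the System.setOut() redirection pattern:
public class StdoutRedirectExample {
  public static void main(String[] args) {
    final PrintStream originalOut = System.out;
    final ByteArrayOutputStream captured = new ByteArrayOutputStream();
    System.setOut(new PrintStream(captured));
    try {
      // Run the command under test; everything it prints lands in 'captured'.
      System.out.println("plan generated");
    } finally {
      System.setOut(originalOut);  // always restore the original stream
    }
    // Verify the captured output.
    System.out.println(captured.toString().contains("plan generated"));  // true
  }
}
{code}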



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8410) Add computation time metrics to datanode for ECWorker

2016-10-19 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590103#comment-15590103
 ] 

Andrew Wang commented on HDFS-8410:
---

Sorry I didn't mention this in the previous review, but it'd be mildly better 
to use "final" for the two vars, as well as {{Time.monotonicNow()}}, which is 
our wrapper for nanoTime. +1 pending this though, thanks Sammi.
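
For reference, the suggested pattern looks roughly like this (the variable and metric-sink names below are hypothetical):

{code}
// Hypothetical sketch of the suggested timing pattern:
final long start = Time.monotonicNow();   // org.apache.hadoop.util.Time
decode(inputs, erasedIndexes, outputs);   // the EC decode work being timed
final long end = Time.monotonicNow();
datanodeMetrics.addECDecodingTime(end - start);  // hypothetical metric sink
{code}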

> Add computation time metrics to datanode for ECWorker
> -
>
> Key: HDFS-8410
> URL: https://issues.apache.org/jira/browse/HDFS-8410
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: SammiChen
> Attachments: HDFS-8410-001.patch, HDFS-8410-002.patch, 
> HDFS-8410-003.patch, HDFS-8410-004.patch
>
>
> This is a sub task of HDFS-7674. It adds time metric for ec decode work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11036) Ozone : reuse Xceiver connection

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590100#comment-15590100
 ] 

Hadoop QA commented on HDFS-11036:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
38s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 77 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 7 new + 2 unchanged - 0 fixed = 9 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
44s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 2 new 
+ 77 unchanged - 0 fixed = 79 total (was 77) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Unread field:field be static?  At XceiverClientManager.java:[line 56] |
|  |  Should 
org.apache.hadoop.scm.XceiverClientManager$XceiverClientWithAccessInfo be a 
_static_ inner class?  At XceiverClientManager.java:inner class?  At 
XceiverClientManager.java:[lines 210-235] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11036 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834263/HDFS-11036-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6309483c10e2 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / c70775a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17224/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
 |
| checkstyle | 

[jira] [Updated] (HDFS-11011) Add unit tests for HDFS command 'dfsadmin -set/clrSpaceQuota'

2016-10-19 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11011:
-
Attachment: HDFS-11011.004.patch

The v3 patch looks good to me overall.

{code}
new String[] {
    "It should be one line error message like:"
        + " clrSpaceQuota...Access denied for user...",
    "clrSpaceQuota",
    "Access denied for user"});
{code}
For var args, we don't need to construct the String[] every time. Just pass as 
many parameters as we wish.

{code:title= runAndVerifyQuota()}
...
    /* verify outputs */
    scanIntoList(ioBuf, outs);
    assertEquals(msgs[0], expectedOutNum, outs.size());
    if (expectedOutNum > 0) {
      assertThat(outs.get(0),
          is(allOf(containsString(msgs[1]),
              containsString(msgs[2]))));
    }
  }
{code}
This indicates that if {{expectedOutNum}} is greater than zero, the {{msgs}} 
array must always have at least three elements, which is error-prone to use. 
Meanwhile, the first string, aka {{msgs\[0\]}}, is the assertEquals failure 
message, different from the other msgs. Let's split them.

I updated the patch slightly and uploaded a v4 version. Can you review it, 
[~xiaobingo]? Thanks.
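
Roughly, the suggested shape is the following (a sketch with hypothetical names; JUnit/Hamcrest static imports assumed):

{code}
// Sketch (hypothetical helper): the assertion failure message is its own
// parameter, and the expected substrings ride on the varargs tail.
private static void verifyOutputs(List<String> outs, int expectedOutNum,
    String failMessage, String... expectedSubstrings) {
  assertEquals(failMessage, expectedOutNum, outs.size());
  for (String expected : expectedSubstrings) {
    assertThat(outs.get(0), containsString(expected));
  }
}

// Call site: no explicit String[] construction needed for varargs.
verifyOutputs(outs, 1,
    "It should be one line error message like:"
        + " clrSpaceQuota...Access denied for user...",
    "clrSpaceQuota", "Access denied for user");
{code}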

> Add unit tests for HDFS command 'dfsadmin -set/clrSpaceQuota'
> -
>
> Key: HDFS-11011
> URL: https://issues.apache.org/jira/browse/HDFS-11011
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>  Labels: fs, shell, test
> Attachments: HDFS-11011.000.patch, HDFS-11011.001.patch, 
> HDFS-11011.002.patch, HDFS-11011.003.patch, HDFS-11011.004.patch
>
>
> This proposes adding a bunch of unit tests for command  'dfsadmin 
> setSpaceQuota' and  'dfsadmin clrSpaceQuota'.
> 1. test to set space quote using negative number.
> 2. test to set and clear space quote, regular usage.
> 3. test to set and clear space quote by storage type.
> 4. test to set and clear space quote when directory doesn't exist.
> 5. test to set and clear space quote when path is a file.
> 6. test to set and clear space quote when user has no access right.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11018) Incorrect check and message in FsDatasetImpl#invalidate

2016-10-19 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590094#comment-15590094
 ] 

Wei-Chiu Chuang commented on HDFS-11018:


LGTM +1 on the 003 patch.

> Incorrect check and message in FsDatasetImpl#invalidate
> ---
>
> Key: HDFS-11018
> URL: https://issues.apache.org/jira/browse/HDFS-11018
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Yiqun Lin
> Attachments: HDFS-11018.001.patch, HDFS-11018.002.patch, 
> HDFS-11018.003.patch
>
>
> The following error check and message are incorrect, because {{info}} is null 
> if (1) the block id does not exist in ReplicaMap or (2) the generation stamp 
> of block does not match the replica entry in ReplicaMap.
> {code:title=FsDatasetImpl#invalidate}
>final ReplicaInfo info = volumeMap.get(bpid, invalidBlks[i]);
> if (info == null) {
>   // It is okay if the block is not found -- it may be deleted 
> earlier.
>   LOG.info("Failed to delete replica " + invalidBlks[i]
>   + ": ReplicaInfo not found.");
>   continue;
> }
> if (info.getGenerationStamp() != invalidBlks[i].getGenerationStamp()) 
> {
>   errors.add("Failed to delete replica " + invalidBlks[i]
>   + ": GenerationStamp not matched, info=" + info);
>   continue;
> }
> {code}
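
One way to correct this is to look the block up by id first, so the two cases get distinct messages. The following is a sketch along the lines of the description above, not necessarily the attached patch:

{code}
// Sketch of a corrected check: distinguish "block id unknown" from
// "generation stamp mismatch" before logging or reporting.
final ReplicaInfo info = volumeMap.get(bpid, invalidBlks[i]);
if (info == null) {
  ReplicaInfo infoByBlockId =
      volumeMap.get(bpid, invalidBlks[i].getBlockId());
  if (infoByBlockId == null) {
    // The block id is not in the ReplicaMap at all; it may have been
    // deleted earlier, which is okay.
    LOG.info("Failed to delete replica " + invalidBlks[i]
        + ": ReplicaInfo not found.");
  } else {
    // The block id exists, but with a different generation stamp.
    errors.add("Failed to delete replica " + invalidBlks[i]
        + ": GenerationStamp not matched, existing replica is "
        + infoByBlockId);
  }
  continue;
}
{code}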



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10943) rollEditLog expects empty EditsDoubleBuffer.bufCurrent which is not guaranteed

2016-10-19 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590079#comment-15590079
 ] 

Daryn Sharp commented on HDFS-10943:


Unless something has changed semi-recently, you definitely cannot roll the 
edits w/o fsn synchronization.  Kihwal was right, he's heard me grumble many 
times about the edit logs not really being thread-safe.  I think I filed a jira 
about it many years ago...

The main problem is the complex wait/notify behavior for interleaving edits and 
syncs.  Essentially the roll needs to be atomic:  log segment end, close 
segment, open new segment, log segment start.  Relinquishing the edit log mutex 
anywhere during that transaction due to wait() may cause "very bad things" to 
happen.  Best case is an NPE when another thread tries to log between segments. 
 The sync won't matter if another spurious edit slips in after the end segment 
edit or before the start segment edit.  Must... bury... memories of scrambling 
to save the namespace of a few clusters after the standby crashed from 
corrupted edits.

The answer is to track down why the fsn lock is not being held.  
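
Schematically, the invariant being described is the following (illustrative only, not actual FSNamesystem code):

{code}
// Illustrative only: the whole roll happens under the namesystem write lock,
// so no edit can interleave between ending one segment and starting the next.
fsn.writeLock();
try {
  editLog.endCurrentLogSegment(true);   // log segment end, close segment
  editLog.startLogSegment(nextTxId);    // open new segment, log segment start
} finally {
  fsn.writeUnlock();
}
{code}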

> rollEditLog expects empty EditsDoubleBuffer.bufCurrent which is not guaranteed
> --
>
> Key: HDFS-10943
> URL: https://issues.apache.org/jira/browse/HDFS-10943
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>
> Per the following trace stack:
> {code}
> FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: finalize log 
> segment 10562075963, 10562174157 failed for required journal 
> (JournalAndStream(mgr=QJM to [0.0.0.1:8485, 0.0.0.2:8485, 0.0.0.3:8485, 
> 0.0.0.4:8485, 0.0.0.5:8485], stream=QuorumOutputStream starting at txid 
> 10562075963))
> java.io.IOException: FSEditStream has 49708 bytes still to be flushed and 
> cannot be closed.
> at 
> org.apache.hadoop.hdfs.server.namenode.EditsDoubleBuffer.close(EditsDoubleBuffer.java:66)
> at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.close(QuorumOutputStream.java:65)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalAndStream.closeStream(JournalSet.java:115)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$4.apply(JournalSet.java:235)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.finalizeLogSegment(JournalSet.java:231)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1243)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.rollEditLog(FSEditLog.java:1172)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.rollEditLog(FSImage.java:1243)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.rollEditLog(FSNamesystem.java:6437)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rollEditLog(NameNodeRpcServer.java:1002)
> at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.rollEditLog(NamenodeProtocolServerSideTranslatorPB.java:142)
> at 
> org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:12025)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> 2016-09-23 21:40:59,618 WARN 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Aborting 
> QuorumOutputStream starting at txid 10562075963
> {code}
> The exception is from  EditsDoubleBuffer
> {code}
>  public void close() throws IOException {
> Preconditions.checkNotNull(bufCurrent);
> Preconditions.checkNotNull(bufReady);
> int bufSize = bufCurrent.size();
> if (bufSize != 0) {
>   throw new IOException("FSEditStream has " + bufSize
>   + " bytes still to be flushed and cannot be closed.");
> }
> IOUtils.cleanup(null, bufCurrent, bufReady);
> bufCurrent = bufReady = null;
>   }
> {code}
> We can see that FSNamesystem.rollEditLog expects  
> EditsDoubleBuffer.bufCurrent to be empty.
> Edits are recorded via FSEditLog$logSync, which does:
> {code}
>* The data is double-buffered within each 

[jira] [Updated] (HDFS-11027) libhdfs++: Don't retry if there is an authentication failure

2016-10-19 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-11027:
---
Status: Patch Available  (was: Open)

> libhdfs++: Don't retry if there is an authentication failure
> 
>
> Key: HDFS-11027
> URL: https://issues.apache.org/jira/browse/HDFS-11027
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-11027.HDFS-8707.000.patch
>
>
> "Authentication failed" status falls into the general !status.ok() block in 
> the HA retry policy so it will keep attempting to failover.  If the client 
> isn't kerberized, or doesn't have the right ticket it should give up and 
> return a meaningful error message (right now it returns a generic bad 
> connection failure string).
> Wouldn't hurt to check the FixedDelayRetryPolicy to make sure that doesn't 
> also keep attempting to retry in the same way.  I suspect it does.
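
Conceptually, the proposed change special-cases the authentication status in the retry policy. A Java-flavored sketch for illustration only; libhdfs++ itself is C++, and all names below are hypothetical:

{code}
// Illustrative sketch only (hypothetical names; the real code is C++):
RetryAction shouldRetry(Status status, int failoverCount) {
  if (status.isAuthenticationFailed()) {
    // Failing over will not fix a missing or invalid Kerberos ticket; fail
    // fast with a meaningful message instead of a generic connection error.
    return RetryAction.fail("Authentication failed");
  }
  // Every other non-OK status keeps the existing failover behavior.
  return failoverCount < maxFailovers
      ? RetryAction.failover()
      : RetryAction.fail(status.toString());
}
{code}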



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11036) Ozone : reuse Xceiver connection

2016-10-19 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11036:
--
Status: Patch Available  (was: Open)

> Ozone : reuse Xceiver connection
> 
>
> Key: HDFS-11036
> URL: https://issues.apache.org/jira/browse/HDFS-11036
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11036-HDFS-7240.001.patch
>
>
> Currently for every IO operation calling into XceiverClientManager will 
> open/close a connection, this JIRA proposes to reuse connection to reduce 
> connection setup/shutdown overhead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11036) Ozone : reuse Xceiver connection

2016-10-19 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11036:
--
Description: Currently for every IO operation calling into 
XceiverClientManager will open/close a connection, this JIRA proposes to reuse 
connection to reduce connection setup/shutdown overhead.  (was: Currently for 
every IO operation a connection is opened/closed, this JIRA proposes to reuse 
connection to reduce connection setup/shutdown overhead.)
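
For context, a common shape for such reuse is a keyed cache of open clients. The class and method names below are hypothetical and illustrative only, not the patch's API:

{code}
// Illustrative only (hypothetical names): one client per datanode, created on
// first use via ConcurrentHashMap.computeIfAbsent and shared by later IOs.
private final java.util.concurrent.ConcurrentHashMap<String, CachedXceiverClient>
    clientCache = new java.util.concurrent.ConcurrentHashMap<>();

public CachedXceiverClient acquireClient(String datanodeId) {
  CachedXceiverClient client = clientCache.computeIfAbsent(datanodeId,
      id -> new CachedXceiverClient(id));  // connects lazily on first use
  client.incrementRefCount();             // track sharing so close is safe
  return client;
}

public void releaseClient(CachedXceiverClient client) {
  if (client.decrementRefCount() == 0) {
    // Keep the idle connection cached for reuse, or evict on a timer.
  }
}
{code}

The design wrinkle such a cache introduces is deciding when to actually close connections, hence the reference counting in the sketch.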

> Ozone : reuse Xceiver connection
> 
>
> Key: HDFS-11036
> URL: https://issues.apache.org/jira/browse/HDFS-11036
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11036-HDFS-7240.001.patch
>
>
> Currently for every IO operation calling into XceiverClientManager will 
> open/close a connection, this JIRA proposes to reuse connection to reduce 
> connection setup/shutdown overhead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11036) Ozone : reuse Xceiver connection

2016-10-19 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11036:
--
Description: Currently for every IO operation a connection is 
opened/closed, this JIRA proposes to reuse connection to reduce connection 
setup/shutdown overhead.

> Ozone : reuse Xceiver connection
> 
>
> Key: HDFS-11036
> URL: https://issues.apache.org/jira/browse/HDFS-11036
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11036-HDFS-7240.001.patch
>
>
> Currently for every IO operation a connection is opened/closed, this JIRA 
> proposes to reuse connection to reduce connection setup/shutdown overhead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11036) Ozone : reuse Xceiver connection

2016-10-19 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11036:
--
Attachment: HDFS-11036-HDFS-7240.001.patch

> Ozone : reuse Xceiver connection
> 
>
> Key: HDFS-11036
> URL: https://issues.apache.org/jira/browse/HDFS-11036
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11036-HDFS-7240.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11036) Ozone : reuse Xceiver connection

2016-10-19 Thread Chen Liang (JIRA)
Chen Liang created HDFS-11036:
-

 Summary: Ozone : reuse Xceiver connection
 Key: HDFS-11036
 URL: https://issues.apache.org/jira/browse/HDFS-11036
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9820) Improve distcp to support efficient restore to an earlier snapshot

2016-10-19 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590004#comment-15590004
 ] 

Andrew Wang commented on HDFS-9820:
---

One little nit then, this code block:

{code}
private void prepareFileListing(Job job) throws Exception {
  if (inputOptions.shouldUseSnapshotDiff()) {
    DistCpSync distCpSync = new DistCpSync(inputOptions, getConf());
    if (distCpSync.sync()) {
      createInputFileListingWithDiff(job, distCpSync);
    } else {
      throw new Exception("DistCp sync failed, input options: "
          + inputOptions);
    }
  }

  // Fallback to default DistCp if without "diff" option or sync failed.
  if (!inputOptions.shouldUseSnapshotDiff()) {
    createInputFileListing(job);
  }
}
{code}

I know this was copy pasted, but the comment seems wrong since there is no 
fallback. This could also be structured as a simple if/else for clarity.
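
For instance (a sketch of the suggested if/else restructuring, not part of the attached patch):

{code}
private void prepareFileListing(Job job) throws Exception {
  if (inputOptions.shouldUseSnapshotDiff()) {
    DistCpSync distCpSync = new DistCpSync(inputOptions, getConf());
    if (distCpSync.sync()) {
      createInputFileListingWithDiff(job, distCpSync);
    } else {
      throw new Exception("DistCp sync failed, input options: "
          + inputOptions);
    }
  } else {
    // Default DistCp listing when the "diff" option is not used.
    createInputFileListing(job);
  }
}
{code}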

+1 pending though, this is a stylistic rather than functional issue. Thanks 
Yongjun!

> Improve distcp to support efficient restore to an earlier snapshot
> --
>
> Key: HDFS-9820
> URL: https://issues.apache.org/jira/browse/HDFS-9820
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp
>Affects Versions: 2.6.4
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-9820.001.patch, HDFS-9820.002.patch, 
> HDFS-9820.003.patch, HDFS-9820.004.patch, HDFS-9820.005.patch, 
> HDFS-9820.006.patch, HDFS-9820.007.patch, HDFS-9820.008.patch, 
> HDFS-9820.branch-2.patch
>
>
> A common use scenario (scenario 1): 
> # create snapshot sx in clusterX, 
> # do some experiments in clusterX, which creates some files, 
> # throw away the changed files and go back to sx.
> Another scenario (scenario 2) is that there is a production cluster and a 
> backup cluster, and we periodically sync up the data from the production 
> cluster to the backup cluster with distcp. 
> The cluster in scenario 1 could be the backup cluster in scenario 2.
> For scenario 1:
> HDFS-4167 intends to restore HDFS to the most recent snapshot, and there are 
> some complexity and challenges.  Before that jira is implemented, we count on 
> distcp to copy from snapshot to the current state. However, the performance 
> of this operation could be very bad because we have to go through all files 
> even if we only changed a few files.
> For scenario 2:
> HDFS-7535 improved distcp performance by avoiding copying files that changed 
> name since last backup.
> On top of HDFS-7535, HDFS-8828 improved distcp performance when copying data 
> from source to target cluster, by only copying changed files since last 
> backup. The way it works is to use snapshot diff to find out all files changed, 
> and copy the changed files only.
> See 
> https://blog.cloudera.com/blog/2015/12/distcp-performance-improvements-in-apache-hadoop/
> This jira is to propose a variation of HDFS-8828, to find out the files 
> changed in target cluster since last snapshot sx, and copy these from 
> snapshot sx of either the source or the target cluster, to restore target 
> cluster's current state to sx. 
> Specifically,
> If a file/dir is
> - renamed, rename it back
> - created in target cluster, delete it
> - modified, put it to the copy list
> - run distcp with the copy list, copy from the source cluster's corresponding 
> snapshot
> This could be a new command line switch -rdiff in distcp.
> As a native restore feature, HDFS-4167 would still be ideal to have. However, 
>  HDFS-9820 would hopefully be easier to implement, before HDFS-4167 is in 
> place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11027) libhdfs++: Don't retry if there is an authentication failure

2016-10-19 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-11027:
---
Attachment: HDFS-11027.HDFS-8707.000.patch

Uploading the patch on behalf of [~xiaowei.zhu]; can't assign it to him until 
the permissions issues are sorted out.

Patch looks good to me, +1.

> libhdfs++: Don't retry if there is an authentication failure
> 
>
> Key: HDFS-11027
> URL: https://issues.apache.org/jira/browse/HDFS-11027
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-11027.HDFS-8707.000.patch
>
>
> "Authentication failed" status falls into the general !status.ok() block in 
> the HA retry policy so it will keep attempting to failover.  If the client 
> isn't kerberized, or doesn't have the right ticket it should give up and 
> return a meaningful error message (right now it returns a generic bad 
> connection failure string).
> Wouldn't hurt to check the FixedDelayRetryPolicy to make sure that doesn't 
> also keep attempting to retry in the same way.  I suspect it does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11027) libhdfs++: Don't retry if there is an authentication failure

2016-10-19 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-11027:
---
Summary: libhdfs++: Don't retry if there is an authentication failure  
(was: libhdfs++: Make sure HA retry policy is aware of authentication failures)

> libhdfs++: Don't retry if there is an authentication failure
> 
>
> Key: HDFS-11027
> URL: https://issues.apache.org/jira/browse/HDFS-11027
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>
> "Authentication failed" status falls into the general !status.ok() block in 
> the HA retry policy so it will keep attempting to failover.  If the client 
> isn't kerberized, or doesn't have the right ticket it should give up and 
> return a meaningful error message (right now it returns a generic bad 
> connection failure string).
> Wouldn't hurt to check the FixedDelayRetryPolicy to make sure that doesn't 
> also keep attempting to retry in the same way.  I suspect it does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10976) Fsck should mark EC files explicitly

2016-10-19 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589977#comment-15589977
 ] 

Wei-Chiu Chuang commented on HDFS-10976:


Thanks [~andrew.wang]. The failed tests do not fail in my local tree. I'll 
commit the v02 patch.

> Fsck should mark EC files explicitly
> 
>
> Key: HDFS-10976
> URL: https://issues.apache.org/jira/browse/HDFS-10976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Attachments: HDFS-10976.001.patch, HDFS-10976.002.patch
>
>
> Currently fsck reports corrupt EC files as shown below; it does not 
> distinguish erasure-coded files from replicated files. In addition, it would 
> be nice to print out the EC policy of the corrupt EC file.
> {quote}
> /striped 
> /striped/corrupted 393216 bytes, 1 block(s): 
> /striped/corrupted: CORRUPT blockpool BP-1564681138-127.0.0.1-1475793860787 
> block blk_-9223372036854775792
>  Under replicated 
> BP-1564681138-127.0.0.1-1475793860787:blk_-9223372036854775792_1001. Target 
> Replicas is 9 but found 5 live replica(s), 0 decommissioned replica(s) and 0 
> decommissioning replica(s).
>  CORRUPT 1 blocks of total size 393216 B
> 0. BP-1564681138-127.0.0.1-1475793860787:blk_-9223372036854775792_1001 
> len=393216 Live_repl=5  
> [DatanodeInfoWithStorage[127.0.0.1:62192,DS-81dcbc38-755e-446e-a028-71a79e4de6d9,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:62180,DS-98fe193d-6342-4b2c-ad61-4586e2530b1e,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:62167,DS-53031f88-0c63-4839-ab18-efea2f1bb063,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:62162,DS-e8b418fd-165d-4d6f-886b-f75c21be096d,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:62176,DS-03e51584-5b33-4bb6-89b5-f519cda57429,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:62158,DS-ce9ca7b3-5b00-4351-8537-822eed532b46,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:62184,DS-668c500e-eb3d-4d40-b900-814076d5e160,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:62171,DS-763a3961-b214-4601-81c0-abdaecf539c4,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:62188,DS-d4ea6399-bead-452e-8dd5-bc8c5ebd4f45,DISK](LIVE)]
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9952) Expose FSNamesystem lock wait time as metrics

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589976#comment-15589976
 ] 

Hadoop QA commented on HDFS-9952:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-9952 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-9952 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12814692/HDFS-9952-05.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17223/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Expose FSNamesystem lock wait time as metrics
> -
>
> Key: HDFS-9952
> URL: https://issues.apache.org/jira/browse/HDFS-9952
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9952-01.patch, HDFS-9952-02.patch, 
> HDFS-9952-03.patch, HDFS-9952-04.patch, HDFS-9952-05.patch
>
>
> Expose FSNameSystem's readlock() and writeLock() wait time as metrics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9952) Expose FSNamesystem lock wait time as metrics

2016-10-19 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589969#comment-15589969
 ] 

Zhe Zhang commented on HDFS-9952:
-

Just noticed this work. It's related to the FSN lock metrics work [~xkrogen] is 
currently working on.

> Expose FSNamesystem lock wait time as metrics
> -
>
> Key: HDFS-9952
> URL: https://issues.apache.org/jira/browse/HDFS-9952
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-9952-01.patch, HDFS-9952-02.patch, 
> HDFS-9952-03.patch, HDFS-9952-04.patch, HDFS-9952-05.patch
>
>
> Expose FSNameSystem's readlock() and writeLock() wait time as metrics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations

2016-10-19 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589917#comment-15589917
 ] 

Andrew Wang commented on HDFS-10872:


Thanks for the replies, Erik, Zhe, inline:

bq. Are we aware of any production usage of this library?

The popular codahale/Dropwizard metrics library pulls in LongAdder in the 
manner I suggested, so it might be easiest to add that as a dependency. I'm 
pretty sure we already have it floating around in Hadoop somewhere. The quality 
of JDK code is also well regarded.

bq. For something like MutableCounter a LongAdder would be great, but I am 
wondering if we can make it apply here?

Looking at [EWMA in 
codahale|https://github.com/dropwizard/metrics/blob/3.2-development/metrics-core/src/main/java/com/codahale/metrics/EWMA.java],
 they reset a LongAdder without additional synchronization with updates. 
Metrics don't need to be super precise, so as long as there isn't a disastrous 
concurrent failure mode, this is okay.

Unless we really need the additional bits of MutableRate, the simplest thing 
might be to wrap the Dropwizard metrics classes like Histogram and EWMA to fit 
in the Hadoop metrics system.
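
To make the {{LongAdder}} idea concrete, here is a minimal sketch of a 
resettable counter in the spirit of the codahale EWMA reset pattern; the class 
and method names are invented for illustration, and this is not code from any 
patch here:
{code}
import java.util.concurrent.atomic.LongAdder;

/** Illustrative only: a resettable counter backed by a LongAdder. */
public class ResettableCounter {
  private final LongAdder adder = new LongAdder();

  /** Hot path: contended updates stay cheap because LongAdder stripes cells. */
  public void add(long delta) {
    adder.add(delta);
  }

  /**
   * Snapshot path: return the accumulated value and reset the adder. Like the
   * codahale EWMA, this tolerates a small race with concurrent add() calls,
   * which is acceptable for metrics that need not be exact.
   */
  public long snapshotAndReset() {
    return adder.sumThenReset();
  }
}
{code}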

> Add MutableRate metrics for FSNamesystemLock operations
> ---
>
> Key: HDFS-10872
> URL: https://issues.apache.org/jira/browse/HDFS-10872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: FSLockPerf.java, HDFS-10872.000.patch, 
> HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, 
> HDFS-10872.004.patch
>
>
> Add metrics for FSNamesystemLock operations to see, overall, how long each 
> operation is holding the lock for. Use MutableRate metrics for now. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6984) In Hadoop 3, make FileStatus serialize itself via protobuf

2016-10-19 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-6984:

Attachment: HDFS-6984.003.patch

Rebased [~cmccabe]'s patch.
* Changed {{FileStatus}} to be {{Serializable}}, per [~ste...@apache.org]'s 
suggestion. This cascaded to a few other classes, which I halted at 
{{HdfsBlockLocation}} (changing the final ref to transient). Looking through 
its usage this is probably correct, since the fields not redundant with 
{{BlockLocation}} are things like tokens, which are internal(?) to DFSClient.
* Changed required fields to {{optional}} from {{required}}. mtime in 
particular isn't always cheap on some systems, and the owner/group/perms may be 
-lies- placeholders if the FS is required to populate them. The {{filetype}} is 
arguable. YARN overwhelmingly favors {{optional}} fields for everything, FWIW.
* Changed field IDs to match {{HdfsFileStatusProto}}. In proto2 at least, this 
cross-serialization works (added a test). In HDFS-7878, {{FileStatus}} can 
leave its {{PathHandle}} as {{bytes}}, provided they occupy the same field ID.
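
For illustration, the transient-ref pattern described above looks roughly like 
the following generic sketch (class and field names invented; this is not the 
actual {{HdfsBlockLocation}} change):
{code}
import java.io.Serializable;

/** Illustrative only: make a class Serializable while skipping a
 *  client-internal reference by marking it transient. */
public class BlockLocationLike implements Serializable {
  private static final long serialVersionUID = 1L;

  private final String[] hosts;     // travels with the serialized object
  private transient Object located; // internal state (e.g. tokens); skipped by
                                    // Java serialization, so it comes back
                                    // null after deserialization

  public BlockLocationLike(String[] hosts, Object located) {
    this.hosts = hosts;
    this.located = located;
  }

  public String[] getHosts() {
    return hosts;
  }
}
{code}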

> In Hadoop 3, make FileStatus serialize itself via protobuf
> --
>
> Key: HDFS-6984
> URL: https://issues.apache.org/jira/browse/HDFS-6984
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6984.001.patch, HDFS-6984.002.patch, 
> HDFS-6984.003.patch
>
>
> FileStatus was a Writable in Hadoop 2 and earlier.  Originally, we used this 
> to serialize it and send it over the wire.  But in Hadoop 2 and later, we 
> have the protobuf {{HdfsFileStatusProto}} which serves to serialize this 
> information.  The protobuf form is preferable, since it allows us to add new 
> fields in a backwards-compatible way.  Another issue is that already a lot of 
> subclasses of FileStatus don't override the Writable methods of the 
> superclass, breaking the interface contract that read(status.write) should be 
> equal to the original status.
> In Hadoop 3, we should just make FileStatus serialize itself via protobuf so 
> that we don't have to deal with these issues.  It's probably too late to do 
> this in Hadoop 2, since user code may be relying on the existing FileStatus 
> serialization there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10976) Fsck should mark EC files explicitly

2016-10-19 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589817#comment-15589817
 ] 

Andrew Wang commented on HDFS-10976:


LGTM +1

> Fsck should mark EC files explicitly
> 
>
> Key: HDFS-10976
> URL: https://issues.apache.org/jira/browse/HDFS-10976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Attachments: HDFS-10976.001.patch, HDFS-10976.002.patch
>
>
> Currently fsck reports corrupt EC files as follows: it does not distinguish 
> erasure coded files from replicated files. In addition, it would be nice to 
> print out the EC policy of the corrupt EC file.
> {quote}
> /striped 
> /striped/corrupted 393216 bytes, 1 block(s): 
> /striped/corrupted: CORRUPT blockpool BP-1564681138-127.0.0.1-1475793860787 
> block blk_-9223372036854775792
>  Under replicated 
> BP-1564681138-127.0.0.1-1475793860787:blk_-9223372036854775792_1001. Target 
> Replicas is 9 but found 5 live replica(s), 0 decommissioned replica(s) and 0 
> decommissioning replica(s).
>  CORRUPT 1 blocks of total size 393216 B
> 0. BP-1564681138-127.0.0.1-1475793860787:blk_-9223372036854775792_1001 
> len=393216 Live_repl=5  
> [DatanodeInfoWithStorage[127.0.0.1:62192,DS-81dcbc38-755e-446e-a028-71a79e4de6d9,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:62180,DS-98fe193d-6342-4b2c-ad61-4586e2530b1e,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:62167,DS-53031f88-0c63-4839-ab18-efea2f1bb063,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:62162,DS-e8b418fd-165d-4d6f-886b-f75c21be096d,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:62176,DS-03e51584-5b33-4bb6-89b5-f519cda57429,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:62158,DS-ce9ca7b3-5b00-4351-8537-822eed532b46,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:62184,DS-668c500e-eb3d-4d40-b900-814076d5e160,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:62171,DS-763a3961-b214-4601-81c0-abdaecf539c4,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:62188,DS-d4ea6399-bead-452e-8dd5-bc8c5ebd4f45,DISK](LIVE)]
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10976) Fsck should mark EC files explicitly

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589785#comment-15589785
 ] 

Hadoop QA commented on HDFS-10976:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m  2s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
|   | hadoop.hdfs.TestPersistBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10976 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834222/HDFS-10976.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6f500f1e8f33 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e9c4616 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17221/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17221/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17221/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fsck should mark EC files explicitly
> 
>
> Key: HDFS-10976
> URL: https://issues.apache.org/jira/browse/HDFS-10976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>

[jira] [Commented] (HDFS-10997) Reduce number of path resolving methods

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589735#comment-15589735
 ] 

Hadoop QA commented on HDFS-10997:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 16 new + 1360 unchanged - 11 fixed = 1376 total (was 1371) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
|   | hadoop.hdfs.TestFileCreationDelete |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10997 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834214/HDFS-10997.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6d00779c80c8 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e9c4616 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17220/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17220/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17220/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17220/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reduce number of path resolving methods
> ---
>
> Key: HDFS-10997
> 

[jira] [Updated] (HDFS-11027) libhdfs++: Make sure HA retry policy is aware of authentication failures

2016-10-19 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-11027:
---
Assignee: James Clampffer

> libhdfs++: Make sure HA retry policy is aware of authentication failures
> 
>
> Key: HDFS-11027
> URL: https://issues.apache.org/jira/browse/HDFS-11027
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>
> "Authentication failed" status falls into the general !status.ok() block in 
> the HA retry policy so it will keep attempting to failover.  If the client 
> isn't kerberized, or doesn't have the right ticket it should give up and 
> return a meaningful error message (right now it returns a generic bad 
> connection failure string).
> Wouldn't hurt to check the FixedDelayRetryPolicy to make sure that doesn't 
> also keep attempting to retry in the same way.  I suspect it does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11027) libhdfs++: Make sure HA retry policy is aware of authentication failures

2016-10-19 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-11027:
---
Assignee: (was: James Clampffer)

> libhdfs++: Make sure HA retry policy is aware of authentication failures
> 
>
> Key: HDFS-11027
> URL: https://issues.apache.org/jira/browse/HDFS-11027
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>
> "Authentication failed" status falls into the general !status.ok() block in 
> the HA retry policy so it will keep attempting to failover.  If the client 
> isn't kerberized, or doesn't have the right ticket it should give up and 
> return a meaningful error message (right now it returns a generic bad 
> connection failure string).
> Wouldn't hurt to check the FixedDelayRetryPolicy to make sure that doesn't 
> also keep attempting to retry in the same way.  I suspect it does.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10976) Fsck should mark EC files explicitly

2016-10-19 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10976:
---
Attachment: HDFS-10976.002.patch

Thanks for the review. Attaching a v002 patch to address your comment and fix 
the code style warning.

> Fsck should mark EC files explicitly
> 
>
> Key: HDFS-10976
> URL: https://issues.apache.org/jira/browse/HDFS-10976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Attachments: HDFS-10976.001.patch, HDFS-10976.002.patch
>
>
> Currently fsck reports corrupt EC files as follows: it does not distinguish 
> erasure coded files from replicated files. In addition, it would be nice to 
> print out the EC policy of the corrupt EC file.
> {quote}
> /striped 
> /striped/corrupted 393216 bytes, 1 block(s): 
> /striped/corrupted: CORRUPT blockpool BP-1564681138-127.0.0.1-1475793860787 
> block blk_-9223372036854775792
>  Under replicated 
> BP-1564681138-127.0.0.1-1475793860787:blk_-9223372036854775792_1001. Target 
> Replicas is 9 but found 5 live replica(s), 0 decommissioned replica(s) and 0 
> decommissioning replica(s).
>  CORRUPT 1 blocks of total size 393216 B
> 0. BP-1564681138-127.0.0.1-1475793860787:blk_-9223372036854775792_1001 
> len=393216 Live_repl=5  
> [DatanodeInfoWithStorage[127.0.0.1:62192,DS-81dcbc38-755e-446e-a028-71a79e4de6d9,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:62180,DS-98fe193d-6342-4b2c-ad61-4586e2530b1e,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:62167,DS-53031f88-0c63-4839-ab18-efea2f1bb063,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:62162,DS-e8b418fd-165d-4d6f-886b-f75c21be096d,DISK](CORRUPT),
>  
> DatanodeInfoWithStorage[127.0.0.1:62176,DS-03e51584-5b33-4bb6-89b5-f519cda57429,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:62158,DS-ce9ca7b3-5b00-4351-8537-822eed532b46,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:62184,DS-668c500e-eb3d-4d40-b900-814076d5e160,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:62171,DS-763a3961-b214-4601-81c0-abdaecf539c4,DISK](LIVE),
>  
> DatanodeInfoWithStorage[127.0.0.1:62188,DS-d4ea6399-bead-452e-8dd5-bc8c5ebd4f45,DISK](LIVE)]
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10975) fsck -list-corruptfileblocks does not report corrupt EC files

2016-10-19 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589514#comment-15589514
 ] 

Wei-Chiu Chuang commented on HDFS-10975:


Thanks very much for the explanation, but I do find it counter-intuitive.
Also, for verification, I ran the test case {{testFsckCorruptECFile}}.
As you can see below, the EC block group in this case is also unrecoverable, 
but it reports 4 missing internal blocks.
{noformat}
Erasure Coded Block Groups:
 Total size:393216 B
 Total files:   1
 Total block groups (validated):1 (avg. block group size 393216 B)
  
  UNRECOVERABLE BLOCK GROUPS:   1 (100.0 %)
  CORRUPT FILES:1
  CORRUPT BLOCK GROUPS: 1
  CORRUPT SIZE: 393216 B
  
 Minimally erasure-coded block groups:  0 (0.0 %)
 Over-erasure-coded block groups:   0 (0.0 %)
 Under-erasure-coded block groups:  1 (100.0 %)
 Unsatisfactory placement block groups: 0 (0.0 %)
 Default ecPolicy:  RS-DEFAULT-6-3-64k
 Average block group size:  5.0
 Missing block groups:  0
 Corrupt block groups:  1
 Missing internal blocks:   4 (44.43 %)
FSCK ended at Wed Oct 19 11:43:07 PDT 2016 in 2 milliseconds
{noformat}

> fsck -list-corruptfileblocks does not report corrupt EC files
> -
>
> Key: HDFS-10975
> URL: https://issues.apache.org/jira/browse/HDFS-10975
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Takanobu Asanuma
> Attachments: HDFS-10975.1.patch
>
>
> HDFS-10826 fixed fsck for corrupt EC files if no parameters are specified.
> However, if I change the test case added in HDFS-10826 
> (TestFsck#testFsckCorruptECFile) to run "fsck -list-corruptfileblocks", the 
> same test fails because fsck reports no corrupt files. 
> Interestingly, if I run "fsck -files -blocks -replicaDetails", the test 
> passes and shows the corrupt file.
> We need to fix the discrepancy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10872) Add MutableRate metrics for FSNamesystemLock operations

2016-10-19 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589506#comment-15589506
 ] 

Zhe Zhang commented on HDFS-10872:
--

Thanks for the discussion Andrew and Erik.

I agree that moving the multi-threading logic to Metrics classes is a good 
idea. I'm actually pretty curious how that will improve the performance of 
existing RPC metrics collection; it seems pretty expensive. We should probably 
do that as a prerequisite and then rebase this patch.

Using {{LongAdder}} is an interesting idea. [~andrew.wang] This is actually the 
first time I learn about it :) Are we aware of any production usage of this 
library? I took a look at the test coverage; it seems a little weak considering 
the complexity of the main logic. We can probably consider adding some more 
unit tests in Hadoop to cover cases we are most interested in.

I expect that a few minutes after the NN starts, the set of operations (keys) 
in the {{opHoldtimeMetrics}} map is stabilized. Afterwards, each entry is 
essentially a counter to be incremented. So at a high level I think we should 
be able to make it work with {{LongAdder}}. Agreed with [~xkrogen] that 
snapshotting/resetting is probably the trickiest part. Creating a new 
{{LongAdder}} in {{snapshot}} sounds reasonable to me.
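
As a concrete illustration of the per-operation map plus swap-on-snapshot idea 
(names invented; this is not from the attached patches):
{code}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/** Illustrative only: per-operation lock hold time accumulation. */
public class OpHoldTimes {
  private final ConcurrentHashMap<String, LongAdder> opHoldTimes =
      new ConcurrentHashMap<>();

  /** Hot path: once the key set stabilizes, this is a map hit plus add(). */
  public void record(String op, long heldTimeMs) {
    opHoldTimes.computeIfAbsent(op, k -> new LongAdder()).add(heldTimeMs);
  }

  /** Snapshot path: swap in fresh adders so resets never block updaters.
   *  An update racing the swap may land in the old adder after sum() and be
   *  dropped, which metrics can tolerate. */
  public Map<String, Long> snapshot() {
    Map<String, Long> out = new HashMap<>();
    for (String op : opHoldTimes.keySet()) {
      LongAdder old = opHoldTimes.replace(op, new LongAdder());
      if (old != null) {
        out.put(op, old.sum());
      }
    }
    return out;
  }
}
{code}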

> Add MutableRate metrics for FSNamesystemLock operations
> ---
>
> Key: HDFS-10872
> URL: https://issues.apache.org/jira/browse/HDFS-10872
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: FSLockPerf.java, HDFS-10872.000.patch, 
> HDFS-10872.001.patch, HDFS-10872.002.patch, HDFS-10872.003.patch, 
> HDFS-10872.004.patch
>
>
> Add metrics for FSNamesystemLock operations to see, overall, how long each 
> operation is holding the lock for. Use MutableRate metrics for now. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11035) Better documentation for maintenance mode and upgrade domain

2016-10-19 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-11035:
--

 Summary: Better documentation for maintenance mode and upgrade 
domain
 Key: HDFS-11035
 URL: https://issues.apache.org/jira/browse/HDFS-11035
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, documentation
Affects Versions: 2.9.0
Reporter: Wei-Chiu Chuang


HDFS-7541 added upgrade domain and HDFS-7877 added maintenance mode. Existing 
documentation about these two features is scarce, and the implementations have 
evolved from the original design docs. Looking at the code and Javadoc, I still 
don't quite get how to put datanodes into maintenance mode or set up an upgrade 
domain.

Filing this jira to propose that we write an up-to-date description of these 
two features.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10997) Reduce number of path resolving methods

2016-10-19 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-10997:
---
Attachment: HDFS-10997.2.patch

Updating.

> Reduce number of path resolving methods
> ---
>
> Key: HDFS-10997
> URL: https://issues.apache.org/jira/browse/HDFS-10997
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10997.1.patch, HDFS-10997.2.patch, HDFS-10997.patch
>
>
> FSDirectory contains many methods for resolving paths to an IIP and/or inode. 
>  These should be unified into a couple of methods that will consistently do the 
> basics of resolving reserved paths, blocking write ops from snapshot paths, 
> verifying ancestors as directories, and throwing if symlinks are encountered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11033) Add documents for native raw erasure coder in XOR codes

2016-10-19 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589370#comment-15589370
 ] 

Wei-Chiu Chuang commented on HDFS-11033:


Looks mostly good to me. 

bq. For default RS codec, there is also a native implementation which leverages 
Intel ISA-L library to improve the encoding and decoding calculation.
You may want to be more specific by saying "to improve the performance of the 
codec."

> Add documents for native raw erasure coder in XOR codes
> ---
>
> Key: HDFS-11033
> URL: https://issues.apache.org/jira/browse/HDFS-11033
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: SammiChen
>Assignee: SammiChen
> Attachments: HDFS-11033-v1.patch
>
>
> Add document for native raw erasure coder in XOR codes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11034) Provide a command line tool to clear decommissioned DataNode information from the NameNode without restarting.

2016-10-19 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589341#comment-15589341
 ] 

Chris Nauroth commented on HDFS-11034:
--

We can add a new dfsadmin command to clear this state.

It's important to note that for some operational workflows, it's valuable to 
retain the decommissioned node information.  If the operator is working on a 
series of decommission/recommission steps, then this information is valuable 
for seeing which nodes still remain in the decommissioned state.  That likely 
means that the command line needs to accept an argument for a specific host 
instead of just blindly clearing all decommissioned node information.

Remember to clear from both NameNodes in an HA pair.
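
To make the argument-handling point concrete, a sketch of such a handler might 
look like the following; the command name and any RPC behind it are 
hypothetical, not existing dfsadmin API:
{code}
/** Illustrative only: "-clearDecommissionedDatanode" is a hypothetical
 *  command, and the clearing RPC is deliberately left out. */
public class ClearDecommissionedSketch {
  static int run(String[] argv) {
    if (argv.length < 1) {
      // Require an explicit target instead of blindly clearing everything,
      // so decommission/recommission workflows keep their bookkeeping.
      System.err.println(
          "Usage: hdfs dfsadmin -clearDecommissionedDatanode <host:port|ALL>");
      return -1;
    }
    String target = argv[0];
    // A real implementation would issue the clearing RPC here, once per
    // NameNode in an HA pair.
    System.out.println("Would clear decommissioned state for: " + target);
    return 0;
  }

  public static void main(String[] args) {
    System.exit(run(args));
  }
}
{code}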

> Provide a command line tool to clear decommissioned DataNode information from 
> the NameNode without restarting.
> --
>
> Key: HDFS-11034
> URL: https://issues.apache.org/jira/browse/HDFS-11034
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chris Nauroth
>
> Information about decommissioned DataNodes remains tracked in the NameNode 
> for the entire NameNode process lifetime.  Currently, the only way to clear 
> this information is to restart the NameNode.  This issue proposes to add a 
> way to clear this information online, without requiring a process restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11034) Provide a command line tool to clear decommissioned DataNode information from the NameNode without restarting.

2016-10-19 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-11034:


 Summary: Provide a command line tool to clear decommissioned 
DataNode information from the NameNode without restarting.
 Key: HDFS-11034
 URL: https://issues.apache.org/jira/browse/HDFS-11034
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Chris Nauroth


Information about decommissioned DataNodes remains tracked in the NameNode for 
the entire NameNode process lifetime.  Currently, the only way to clear this 
information is to restart the NameNode.  This issue proposes to add a way to 
clear this information online, without requiring a process restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11019) Inconsistent number of corrupt replicas if a corrupt replica is reported multiple times

2016-10-19 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated HDFS-11019:
---
Attachment: HDFS-11019.test.patch

[~jojochuang] Thank you for reporting this.

In Hadoop 2.6 (CDH 5.7.2), the attached test shows the same behavior as 
mentioned above:
{code}
 INFO  BlockStateChange (CorruptReplicasMap.java:addToCorruptReplicasMap(76)) - 
BLOCK NameSystem.addToCorruptReplicasMap: blk_12345 added as corrupt on 
127.0.0.1:12345 by null because TEST
1. corruptReplicaMap=[127.0.0.1:12345]
2. corruptReplicaMap=null
 INFO  BlockStateChange (CorruptReplicasMap.java:addToCorruptReplicasMap(76)) - 
BLOCK NameSystem.addToCorruptReplicasMap: blk_12345 added as corrupt on 
127.0.0.1:12345 by null because TEST
3. corruptReplicaMap=[127.0.0.1:12345]  //should be null
4. corruptReplicaMap=[127.0.0.1:12345]  //should be null
{code}

This behavior is fixed through HDFS-9958; if you run the same test, it produces 
the following output:
{code}
1. corruptReplicaMap=[127.0.0.1:63829]
2. corruptReplicaMap=null
3. corruptReplicaMap=null
4. corruptReplicaMap=null
{code}
The code change is in BlockManager#findAndMarkBlockAsCorrupt in releases 2.7.3 
and later.
{code}
if (storage == null) {
  storage = storedBlock.findStorageInfo(node);
}

if (storage == null) {
  blockLog.debug("BLOCK* findAndMarkBlockAsCorrupt: {} not found on {}",
  blk, dn);
  return;
}
{code}

Hope this helps.

> Inconsistent number of corrupt replicas if a corrupt replica is reported 
> multiple times
> ---
>
> Key: HDFS-11019
> URL: https://issues.apache.org/jira/browse/HDFS-11019
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
> Environment: CDH5.7.2 
>Reporter: Wei-Chiu Chuang
> Attachments: HDFS-11019.test.patch
>
>
> While investigating a block corruption issue, I found the following warning 
> message in the namenode log:
> {noformat}
> (a client reports a block replica is corrupt)
> 2016-10-12 10:07:37,166 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1073803461 added as corrupt on 
> 10.0.0.63:50010 by /10.0.0.62  because client machine reported it
> 2016-10-12 10:07:37,166 INFO BlockStateChange: BLOCK* invalidateBlock: 
> blk_1073803461_74513(stored=blk_1073803461_74553) on 10.0.0.63:50010
> 2016-10-12 10:07:37,166 INFO BlockStateChange: BLOCK* InvalidateBlocks: add 
> blk_1073803461_74513 to 10.0.0.63:50010
> (another client reports a block replica is corrupt)
> 2016-10-12 10:07:37,728 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1073803461 added as corrupt on 
> 10.0.0.63:50010 by /10.0.0.64  because client machine reported it
> 2016-10-12 10:07:37,728 INFO BlockStateChange: BLOCK* invalidateBlock: 
> blk_1073803461_74513(stored=blk_1073803461_74553) on 10.0.0.63:50010
> (ReplicationMonitor thread kicks in to invalidate the replica and add a new 
> one)
> 2016-10-12 10:07:37,888 INFO BlockStateChange: BLOCK* ask 10.0.0.56:50010 to 
> replicate blk_1073803461_74553 to datanode(s) 10.0.0.63:50010
> 2016-10-12 10:07:37,888 INFO BlockStateChange: BLOCK* BlockManager: ask 
> 10.0.0.63:50010 to delete [blk_1073803461_74513]
> (the two maps are inconsistent)
> 2016-10-12 10:08:00,335 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Inconsistent 
> number of corrupt replicas for blk_1073803461_74553 blockMap has 0 but 
> corrupt replicas map has 1
> {noformat}
> It seems that when a corrupt block replica is reported twice, the blocksMap 
> and the corrupt replicas map become inconsistent.
> Looking at the log, I suspect the bug is in 
> {{BlockManager#removeStoredBlock}}. When a corrupt replica is reported, 
> BlockManager removes the block from blocksMap. If the block is already 
> removed (that is, the corrupt replica is reported twice), it returns; 
> otherwise (that is, the corrupt replica is reported the first time), it 
> removes the block from corruptReplicasMap (the block is added into 
> corruptReplicasMap in BlockManager#markBlockAsCorrupt). Therefore, after the 
> second corruption report, the corrupt replica is removed from blocksMap, but 
> the one in corruptReplicasMap is not removed.
> I can't tell what the impact of the inconsistency is, but I feel it's a good 
> idea to fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11019) Inconsistent number of corrupt replicas if a corrupt replica is reported multiple times

2016-10-19 Thread Kuhu Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589190#comment-15589190
 ] 

Kuhu Shukla edited comment on HDFS-11019 at 10/19/16 4:36 PM:
--

[~jojochuang] Thank you for reporting this.

In Hadoop 2.6 (CDH 5.7.2), the attached test shows the same behavior as 
mentioned above:
{code}
 INFO  BlockStateChange (CorruptReplicasMap.java:addToCorruptReplicasMap(76)) - 
BLOCK NameSystem.addToCorruptReplicasMap: blk_12345 added as corrupt on 
127.0.0.1:12345 by null because TEST
1. corruptReplicaMap=[127.0.0.1:12345]
2. corruptReplicaMap=null
 INFO  BlockStateChange (CorruptReplicasMap.java:addToCorruptReplicasMap(76)) - 
BLOCK NameSystem.addToCorruptReplicasMap: blk_12345 added as corrupt on 
127.0.0.1:12345 by null because TEST
3. corruptReplicaMap=[127.0.0.1:12345]  //should be null
4. corruptReplicaMap=[127.0.0.1:12345]  //should be null
{code}

This behavior is fixed through HDFS-9958; if you run the same test, it produces 
the following output:
{code}
1. corruptReplicaMap=[127.0.0.1:63829]
2. corruptReplicaMap=null
3. corruptReplicaMap=null
4. corruptReplicaMap=null
{code}
The code change is in BlockManager#findAndMarkBlockAsCorrupt in releases 2.7.3 
and later.
{code}
if (storage == null) {
  storage = storedBlock.findStorageInfo(node);
}

if (storage == null) {
  blockLog.debug("BLOCK* findAndMarkBlockAsCorrupt: {} not found on {}",
  blk, dn);
  return;
}
{code}

Hope this helps.


was (Author: kshukla):
[~jojochuang] Thank you reporting this.

In Hadoop 2.6(CDH 5.7.2) The attached test shows the same behavior as mentioned 
above:
{code}
 INFO  BlockStateChange (CorruptReplicasMap.java:addToCorruptReplicasMap(76)) - 
BLOCK NameSystem.addToCorruptReplicasMap: blk_12345 added as corrupt on 
127.0.0.1:12345 by null because TEST
1. corruptReplicaMap=[127.0.0.1:12345]
2. corruptReplicaMap=null
 INFO  BlockStateChange (CorruptReplicasMap.java:addToCorruptReplicasMap(76)) - 
BLOCK NameSystem.addToCorruptReplicasMap: blk_12345 added as corrupt on 
127.0.0.1:12345 by null because TEST
3. corruptReplicaMap=[127.0.0.1:12345]  //should be null
4. corruptReplicaMap=[127.0.0.1:12345]  //should be null
{code}

This behavior is fixed thru HDFS-9958 and if you run the same test it has the 
following output .
{code}
1. corruptReplicaMap=[127.0.0.1:63829]
2. corruptReplicaMap=null
3. corruptReplicaMap=null
4. corruptReplicaMap=null
{code}
The code change is in BlockManager#findAndMarkBlockAsCorrupt in 2.7.3 and up 
releases.
{code}
if (storage == null) {
  storage = storedBlock.findStorageInfo(node);
}

if (storage == null) {
  blockLog.debug("BLOCK* findAndMarkBlockAsCorrupt: {} not found on {}",
  blk, dn);
  return;
}
{code}

Hope this helps.

> Inconsistent number of corrupt replicas if a corrupt replica is reported 
> multiple times
> ---
>
> Key: HDFS-11019
> URL: https://issues.apache.org/jira/browse/HDFS-11019
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
> Environment: CDH5.7.2 
>Reporter: Wei-Chiu Chuang
> Attachments: HDFS-11019.test.patch
>
>
> While investigating a block corruption issue, I found the following warning 
> message in the namenode log:
> {noformat}
> (a client reports a block replica is corrupt)
> 2016-10-12 10:07:37,166 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1073803461 added as corrupt on 
> 10.0.0.63:50010 by /10.0.0.62  because client machine reported it
> 2016-10-12 10:07:37,166 INFO BlockStateChange: BLOCK* invalidateBlock: 
> blk_1073803461_74513(stored=blk_1073803461_74553) on 10.0.0.63:50010
> 2016-10-12 10:07:37,166 INFO BlockStateChange: BLOCK* InvalidateBlocks: add 
> blk_1073803461_74513 to 10.0.0.63:50010
> (another client reports a block replica is corrupt)
> 2016-10-12 10:07:37,728 INFO BlockStateChange: BLOCK 
> NameSystem.addToCorruptReplicasMap: blk_1073803461 added as corrupt on 
> 10.0.0.63:50010 by /10.0.0.64  because client machine reported it
> 2016-10-12 10:07:37,728 INFO BlockStateChange: BLOCK* invalidateBlock: 
> blk_1073803461_74513(stored=blk_1073803461_74553) on 10.0.0.63:50010
> (ReplicationMonitor thread kicks in to invalidate the replica and add a new 
> one)
> 2016-10-12 10:07:37,888 INFO BlockStateChange: BLOCK* ask 10.0.0.56:50010 to 
> replicate blk_1073803461_74553 to datanode(s) 10.0.0.63:50010
> 2016-10-12 10:07:37,888 INFO BlockStateChange: BLOCK* BlockManager: ask 
> 10.0.0.63:50010 to delete [blk_1073803461_74513]
> (the two maps are inconsistent)
> 2016-10-12 10:08:00,335 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Inconsistent 
> number of corrupt replicas for blk_1073803461_74553 blockMap has 0 but 
> 

[jira] [Commented] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf

2016-10-19 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589177#comment-15589177
 ] 

Daryn Sharp commented on HDFS-11026:


IMHO, a major upgrade shouldn't be a sufficient reason to introduce avoidable 
protocol incompatibilities.  It'll guarantee that 3.x is DOA for large 
production deployments.

Yarn isn't exactly a good counter-example.  As you point out, yarn made the 
change to support RU so the cost/benefit was justified.  What is the compelling 
case for breaking the crucial feature that justified the yarn incompatibility?

The main difference between hdfs and yarn is scope of impact.  The impact from 
the yarn incompatibility was confined to the cluster being upgraded.  This hdfs 
incompatibility would impact clients on other clusters.  Imagine trying to get 
the stars to align with customers across many clusters to give a green light 
for their SLAs possibly being impacted.  Not once, but every time you upgrade 
dozens of clusters.

In the end, the DN will have to support dual decoding for inter-cluster 
clients, super long running clients holding old tokens, the balancer, etc.  If 
we go forward with this change, I'd like/prefer to see a designated 2.x 
"minimum release" before a 3.x upgrade.  That designated release would add the 
latent support for PB tokens.

> Convert BlockTokenIdentifier to use Protobuf
> 
>
> Key: HDFS-11026
> URL: https://issues.apache.org/jira/browse/HDFS-11026
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ewan Higgs
> Fix For: 3.0.0-alpha2
>
> Attachments: blocktokenidentifier-protobuf.patch
>
>
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}} 
> (basically a {{byte[]}}) and manual serialization to get data into and out of 
> the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g. 
> {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The 
> {{BlockTokenIdentifier}} should use Protobuf as well so it can be expanded 
> more easily and will be consistent with the rest of the system.
> NB: Release of this will require a version update since 2.8.x won't be able 
> to decipher {{BlockKeyProto.keyBytes}} from 2.8.y.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf

2016-10-19 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15589050#comment-15589050
 ] 

Ewan Higgs commented on HDFS-11026:
---

Hi Daryn,

Thanks for taking a look. The patch targets Hadoop 3.0 which hasn't been 
released yet and I was under the impression that rolling from 2.x to 3.0 isn't 
supported (hence the major version number change). Also, the change to make 
Token Identifiers in Yarn
use protobuf instead of {{WritableUtils}} (YARN-668) was done with no gating. 
Those changes were done in order to support rolling upgrades in the first place 
(YARN-666):

https://github.com/apache/hadoop/commit/5391919b09ce9549d13c897aa89bb0a0536760fe

That aside, if we want to support both payload formats, then I propose a config 
option ({{DFS_ACCESS_TOKEN_ENABLE_PROTOBUF = 
"dfs.block.access.token.protobuf.enable"}}) which is turned off by default. 
The NN then sends old- or new-style payloads based on this. It should stay off 
until all datanodes are updated, at which point the config can be changed and 
the NN starts sending the new protobuf-style payloads. 

I'm not confident that the datanode will be able to 100% detect whether it's 
looking at a protobuf message or an old style message (which is overly 
flexible). So, we can put a boolean in {{BlockKeyProto}} that describes the 
version information for the payload.
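
For comparison, the sniffing alternative would look roughly like the sketch 
below; {{BlockTokenIdentifierProto}}, {{applyProto}} and {{readFieldsLegacy}} 
are invented names, and this is not the attached patch:
{code}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

import com.google.protobuf.InvalidProtocolBufferException;

// Illustrative sketch: try the protobuf layout first, then fall back to the
// legacy Writable layout.
public void readFields(byte[] payload) throws IOException {
  try {
    // New-style payload: a single protobuf message.
    applyProto(BlockTokenIdentifierProto.parseFrom(payload));
  } catch (InvalidProtocolBufferException e) {
    // Old-style payload: fall back to the manual Writable serialization.
    readFieldsLegacy(new DataInputStream(new ByteArrayInputStream(payload)));
  }
}
// Caveat: a legacy payload can occasionally parse as a well-formed protobuf
// message (exactly the ambiguity raised above), so an explicit version
// marker, e.g. the proposed boolean in BlockKeyProto, avoids the guesswork.
{code}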

> Convert BlockTokenIdentifier to use Protobuf
> 
>
> Key: HDFS-11026
> URL: https://issues.apache.org/jira/browse/HDFS-11026
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ewan Higgs
> Fix For: 3.0.0-alpha2
>
> Attachments: blocktokenidentifier-protobuf.patch
>
>
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}} 
> (basically a {{byte[]}}) and manual serialization to get data into and out of 
> the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g. 
> {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The 
> {{BlockTokenIdentifier}} should use Protobuf as well so it can be expanded 
> more easily and will be consistent with the rest of the system.
> NB: Release of this will require a version update since 2.8.x won't be able 
> to decipher {{BlockKeyProto.keyBytes}} from 2.8.y.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10905) Refactor DataStreamer#createBlockOutputStream

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588936#comment-15588936
 ] 

Hadoop QA commented on HDFS-10905:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10905 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834174/HDFS-10905.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5fa0ab9b797d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c5573e6 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17219/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17219/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Refactor DataStreamer#createBlockOutputStream
> -
>
> Key: HDFS-10905
> URL: https://issues.apache.org/jira/browse/HDFS-10905
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HDFS-10905.001.patch, 

[jira] [Comment Edited] (HDFS-9096) Issue in Rollback (after rolling upgrade) from hadoop 2.7.1 to 2.4.0

2016-10-19 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588854#comment-15588854
 ] 

Kihwal Lee edited comment on HDFS-9096 at 10/19/16 2:05 PM:


Were you issuing rollback with 2.7.2?


was (Author: kihwal):
Were you issuing rollback with 2.7.1?

> Issue in Rollback (after rolling upgrade) from hadoop 2.7.1 to 2.4.0
> 
>
> Key: HDFS-9096
> URL: https://issues.apache.org/jira/browse/HDFS-9096
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.4.0
>Reporter: Harpreet Kaur
>
> I tried to do a rolling upgrade from hadoop 2.4.0 to hadoop 2.7.1. As per 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#dfsadmin_-rollingUpgrade
>  one can roll back to the previous release provided the finalise step is not 
> done. I upgraded the setup but did not finalise the upgrade, and then tried 
> to roll back HDFS to 2.4.0.
> I tried the following steps
>   1.  Shutdown all NNs and DNs.
>   2.  Restore the pre-upgrade release in all machines.
>   3.  Start NN1 as Active with the "-rollingUpgrade 
> rollback"
>  option.
> I am getting the following error after 3rd step
> 15/09/01 17:53:35 INFO namenode.AclConfigFlag: ACLs enabled? false
> 15/09/01 17:53:35 INFO common.Storage: Lock on <>/in_use.lock 
> acquired by nodename 12152@VM-2
> 15/09/01 17:53:35 WARN namenode.FSNamesystem: Encountered exception loading 
> fsimage
> org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected 
> version of storage directory /data/yarn/namenode. Reported: -63. Expecting = 
> -56.
> at 
> org.apache.hadoop.hdfs.server.common.StorageInfo.setLayoutVersion(StorageInfo.java:178)
> at 
> org.apache.hadoop.hdfs.server.common.StorageInfo.setFieldsFromProperties(StorageInfo.java:131)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.setFieldsFromProperties(NNStorage.java:608)
> at 
> org.apache.hadoop.hdfs.server.common.StorageInfo.readProperties(StorageInfo.java:228)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:309)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:882)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:639)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:455)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:511)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:670)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:655)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1304)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1370)
> 15/09/01 17:53:35 INFO mortbay.log: Stopped 
> SelectChannelConnector@0.0.0.0:50070
> 15/09/01 17:53:35 INFO impl.MetricsSystemImpl: Stopping NameNode metrics 
> system...
> 15/09/01 17:53:35 INFO impl.MetricsSystemImpl: NameNode metrics system 
> stopped.
> 15/09/01 17:53:35 INFO impl.MetricsSystemImpl: NameNode metrics system 
> shutdown complete.
> 15/09/01 17:53:35 FATAL namenode.NameNode: Exception in namenode join
> From the rolling upgrade documentation it can be inferred that rolling 
> upgrade is supported from Hadoop 2.4.0 onwards, but a rollingUpgrade rollback 
> to Hadoop 2.4.0 seems to be broken; it throws the above-mentioned error.
> Are there any other steps to perform a rollback (from a rolling upgrade), or 
> is it not supported to roll back to Hadoop 2.4.0?
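For step 3 above, a minimal sketch of what the rollback start can look like, assuming the standard NameNode startup option described in the rolling upgrade documentation (the daemon-script path and invocation vary by installation):

{code}
# After restoring the 2.4.0 binaries everywhere, start NN1 with the
# rollback startup option (assumed form; adjust paths for your layout):
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode -rollingUpgrade rollback
{code}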



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10905) Refactor DataStreamer#createBlockOutputStream

2016-10-19 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10905:
--
Attachment: HDFS-10905.002.patch

Uploaded v2 patch to address the checkstyle failure. Since it's a code 
refactoring issue, I didn't write any test cases.

> Refactor DataStreamer#createBlockOutputStream
> -
>
> Key: HDFS-10905
> URL: https://issues.apache.org/jira/browse/HDFS-10905
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HDFS-10905.001.patch, HDFS-10905.002.patch
>
>
> DataStreamer#createBlockOutputStream and DataStreamer#transfer shared much 
> boilerplate code. HDFS-10609 refactored the transfer method into a 
> StreamerStreams class. The createBlockOutputStream method should reuse the 
> class to de-dup code and to improve code clarity.
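As a rough illustration of the intended de-duplication (everything below is a hypothetical sketch, not the actual DataStreamer/StreamerStreams code), the shared socket and stream setup can live in one small AutoCloseable holder that both createBlockOutputStream and transfer obtain via try-with-resources:

{code}
import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

// Hypothetical holder for the streams both call sites currently set up by
// hand; try-with-resources then replaces the duplicated teardown logic.
final class Streams implements AutoCloseable {
  final DataOutputStream out;
  final DataInputStream in;

  Streams(Socket sock, int timeoutMs) throws IOException {
    sock.setSoTimeout(timeoutMs); // read timeout for pipeline replies
    out = new DataOutputStream(
        new BufferedOutputStream(sock.getOutputStream()));
    in = new DataInputStream(sock.getInputStream());
  }

  @Override
  public void close() throws IOException {
    out.close();
    in.close();
  }
}
{code}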



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf

2016-10-19 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588835#comment-15588835
 ] 

Daryn Sharp commented on HDFS-11026:


-1.  While it's a long-needed change, there must be backwards compatibility.

This patch's knife-switch approach will completely break rolling upgrades - 
i.e. the standard procedure is upgrade the NN, then upgrade the DNs.  DNs not 
yet upgraded will fail to decode the new PB tokens issued by the upgraded NN.  
This means full downtime for the to-be-upgraded cluster.  Running clients on 
other clusters accessing the upgraded cluster will experience major 
disruptions.  Neither is acceptable in a production environment.

This change will require dual decoding of old Writable and new PB tokens.  
After initial integration the NN must continue issuing old Writable tokens to 
support rolling upgrades.  In some future release, preferably a major release, 
the NN can start issuing new PB tokens - while maintaining dual decode support 
to avoid disrupting clients.
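A minimal sketch of that dual-decode idea (method names and structure here are illustrative, not the actual patch): buffer the serialized identifier, try the new protobuf layout first, and fall back to the legacy Writable layout when the bytes do not parse.

{code}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

class DualDecodeSketch {
  void readFields(DataInput in) throws IOException {
    // Buffer the raw bytes so a failed protobuf parse can be retried
    // against the legacy Writable layout.
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    try {
      while (true) {
        buf.write(in.readByte());
      }
    } catch (EOFException eof) {
      // reached the end of the serialized identifier
    }
    byte[] raw = buf.toByteArray();
    try {
      readFieldsProtobuf(new DataInputStream(new ByteArrayInputStream(raw)));
    } catch (IOException notProtobuf) {
      readFieldsLegacy(new DataInputStream(new ByteArrayInputStream(raw)));
    }
  }

  // Hypothetical decoders: the real class would populate its fields here.
  void readFieldsProtobuf(DataInput in) throws IOException { /* ... */ }
  void readFieldsLegacy(DataInput in) throws IOException { /* ... */ }
}
{code}

The writer side is the mirror image: the NN keeps issuing the Writable layout until all readers understand protobuf, and only a later release flips the default.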

> Convert BlockTokenIdentifier to use Protobuf
> 
>
> Key: HDFS-11026
> URL: https://issues.apache.org/jira/browse/HDFS-11026
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ewan Higgs
> Fix For: 3.0.0-alpha2
>
> Attachments: blocktokenidentifier-protobuf.patch
>
>
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}} 
> (basically a {{byte[]}}) and manual serialization to get data into and out of 
> the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g. 
> {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The 
> {{BlockTokenIdentifier}} should use Protobuf as well so it can be expanded 
> more easily and will be consistent with the rest of the system.
> NB: Release of this will require a version update since 2.8.x won't be able 
> to decipher {{BlockKeyProto.keyBytes}} from 2.8.y.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9096) Issue in Rollback (after rolling upgrade) from hadoop 2.7.1 to 2.4.0

2016-10-19 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588854#comment-15588854
 ] 

Kihwal Lee commented on HDFS-9096:
--

Were you issuing rollback with 2.7.1?

> Issue in Rollback (after rolling upgrade) from hadoop 2.7.1 to 2.4.0
> 
>
> Key: HDFS-9096
> URL: https://issues.apache.org/jira/browse/HDFS-9096
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Affects Versions: 2.4.0
>Reporter: Harpreet Kaur
>
> I tried to do a rolling upgrade from hadoop 2.4.0 to hadoop 2.7.1. As per 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#dfsadmin_-rollingUpgrade
> one can roll back to the previous release provided the finalise step is not 
> done. I upgraded the setup but did not finalise the upgrade and tried to roll 
> back HDFS to 2.4.0.
> I tried the following steps:
>   1.  Shut down all NNs and DNs.
>   2.  Restore the pre-upgrade release on all machines.
>   3.  Start NN1 as Active with the "-rollingUpgrade rollback" option.
> I am getting the following error after the 3rd step:
> 15/09/01 17:53:35 INFO namenode.AclConfigFlag: ACLs enabled? false
> 15/09/01 17:53:35 INFO common.Storage: Lock on <>/in_use.lock 
> acquired by nodename 12152@VM-2
> 15/09/01 17:53:35 WARN namenode.FSNamesystem: Encountered exception loading 
> fsimage
> org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected 
> version of storage directory /data/yarn/namenode. Reported: -63. Expecting = 
> -56.
> at 
> org.apache.hadoop.hdfs.server.common.StorageInfo.setLayoutVersion(StorageInfo.java:178)
> at 
> org.apache.hadoop.hdfs.server.common.StorageInfo.setFieldsFromProperties(StorageInfo.java:131)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.setFieldsFromProperties(NNStorage.java:608)
> at 
> org.apache.hadoop.hdfs.server.common.StorageInfo.readProperties(StorageInfo.java:228)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:309)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:882)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:639)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:455)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:511)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:670)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:655)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1304)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1370)
> 15/09/01 17:53:35 INFO mortbay.log: Stopped 
> SelectChannelConnector@0.0.0.0:50070
> 15/09/01 17:53:35 INFO impl.MetricsSystemImpl: Stopping NameNode metrics 
> system...
> 15/09/01 17:53:35 INFO impl.MetricsSystemImpl: NameNode metrics system 
> stopped.
> 15/09/01 17:53:35 INFO impl.MetricsSystemImpl: NameNode metrics system 
> shutdown complete.
> 15/09/01 17:53:35 FATAL namenode.NameNode: Exception in namenode join
> From the rolling upgrade documentation it can be inferred that rolling 
> upgrade is supported from Hadoop 2.4.0 onwards, but a rollingUpgrade rollback 
> to Hadoop 2.4.0 seems to be broken; it throws the above-mentioned error.
> Are there any other steps to perform a rollback (from a rolling upgrade), or 
> is it not supported to roll back to Hadoop 2.4.0?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11025) TestDiskspaceQuotaUpdate fails in trunk due to Bind exception

2016-10-19 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588702#comment-15588702
 ] 

Eric Badger commented on HDFS-11025:


+1 (non-binding)

> TestDiskspaceQuotaUpdate fails in trunk due to Bind exception
> -
>
> Key: HDFS-11025
> URL: https://issues.apache.org/jira/browse/HDFS-11025
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11025.001.patch
>
>
> The test {{TestDiskspaceQuotaUpdate}} sometimes fails after HDFS-10843; the 
> link address: 
> https://builds.apache.org/job/PreCommit-HDFS-Build/17200/testReport/. The 
> stack info:
> {code} 
> java.net.BindException: Problem binding to [localhost:49195] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
> {code} 
> I found the bind exception happened in the new test method 
> {{TestDiskspaceQuotaUpdate.testQuotaIssuesWhileCommitting}}. The related 
> code:
> {code}
>   public void testQuotaIssuesWhileCommitting() throws Exception {
> ...
> try {
>   for (int i = REPLICATION - 1; i > 0; i--) {
> dnprops.add(cluster.stopDataNode(i));
>   }
>   ...
> } finally {
>   for (MiniDFSCluster.DataNodeProperties dnprop : dnprops) {
> cluster.restartDataNode(dnprop, true);
>   }
>   cluster.waitActive();
> }
>   }
> {code}
> I think we can make a simple fix in {{cluster.restartDataNode(dnprop, 
> true);}}. The tests in {{TestDiskspaceQuotaUpdate}} only care about whether 
> the cluster is up and running, so I think this change will not influence the 
> current logic.
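Below is a minimal sketch of the suggested change, assuming the second argument of {{MiniDFSCluster#restartDataNode}} is a keep-port flag (as the call above implies): letting each restarted DataNode bind a fresh ephemeral port avoids racing for the old one, which another process may have claimed in the meantime.

{code}
} finally {
  for (MiniDFSCluster.DataNodeProperties dnprop : dnprops) {
    // keepPort=false: the restarted DataNode binds a fresh ephemeral port;
    // the test only needs the cluster up and running, not the old ports.
    cluster.restartDataNode(dnprop, false);
  }
  cluster.waitActive();
}
{code}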



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf

2016-10-19 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588694#comment-15588694
 ] 

Ewan Higgs commented on HDFS-11026:
---

No new tests were added to the patch because this is a like-for-like change 
behind the interfaces.

> Convert BlockTokenIdentifier to use Protobuf
> 
>
> Key: HDFS-11026
> URL: https://issues.apache.org/jira/browse/HDFS-11026
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: hdfs, hdfs-client
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Ewan Higgs
> Fix For: 3.0.0-alpha2
>
> Attachments: blocktokenidentifier-protobuf.patch
>
>
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}} 
> (basically a {{byte[]}}) and manual serialization to get data into and out of 
> the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g. 
> {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The 
> {{BlockTokenIdentifier}} should use Protobuf as well so it can be expanded 
> more easily and will be consistent with the rest of the system.
> NB: Release of this will require a version update since 2.8.x won't be able 
> to decipher {{BlockKeyProto.keyBytes}} from 2.8.y.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11033) Add documents for native raw erasure coder in XOR codes

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588693#comment-15588693
 ] 

Hadoop QA commented on HDFS-11033:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
28s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11033 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834125/HDFS-11033-v1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 2b7cfa9f5eeb 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c5573e6 |
| Default Java | 1.8.0_101 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17217/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17217/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17217/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add documents for native raw erasure coder in XOR codes
> ---
>
> Key: HDFS-11033
> URL: https://issues.apache.org/jira/browse/HDFS-11033
> Project: 

[jira] [Commented] (HDFS-8411) Add bytes count metrics to datanode for ECWorker

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588653#comment-15588653
 ] 

Hadoop QA commented on HDFS-8411:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 64 unchanged - 0 fixed = 66 total (was 64) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.TestEncryptionZones |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-8411 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834145/HDFS-8411-004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3ea0fbb3dd73 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c5573e6 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17216/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17216/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17216/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17216/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add bytes count metrics to datanode for ECWorker
> 
>
> Key: 

[jira] [Commented] (HDFS-10905) Refactor DataStreamer#createBlockOutputStream

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588640#comment-15588640
 ] 

Hadoop QA commented on HDFS-10905:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 1 new + 77 unchanged - 0 fixed = 78 total (was 77) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10905 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834154/HDFS-10905.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e75fc73e2cc4 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c5573e6 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17218/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17218/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17218/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Refactor DataStreamer#createBlockOutputStream
> -
>
> Key: HDFS-10905
> URL: https://issues.apache.org/jira/browse/HDFS-10905
> Project: Hadoop HDFS
>  

[jira] [Updated] (HDFS-10905) Refactor DataStreamer#createBlockOutputStream

2016-10-19 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10905:
--
Attachment: HDFS-10905.001.patch

Uploaded v1 patch.

> Refactor DataStreamer#createBlockOutputStream
> -
>
> Key: HDFS-10905
> URL: https://issues.apache.org/jira/browse/HDFS-10905
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HDFS-10905.001.patch
>
>
> DataStreamer#createBlockOutputStream and DataStreamer#transfer shared much 
> boilerplate code. HDFS-10609 refactored the transfer method into a 
> StreamerStreams class. The createBlockOutputStream method should reuse the 
> class to de-dup code and to improve code clarity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10905) Refactor DataStreamer#createBlockOutputStream

2016-10-19 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10905:
--
Status: Patch Available  (was: Open)

> Refactor DataStreamer#createBlockOutputStream
> -
>
> Key: HDFS-10905
> URL: https://issues.apache.org/jira/browse/HDFS-10905
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HDFS-10905.001.patch
>
>
> DataStreamer#createBlockOutputStream and DataStreamer#transfer shared much 
> boilerplate code. HDFS-10609 refactored the transfer method into a 
> StreamerStreams class. The createBlockOutputStream method should reuse the 
> class to de-dup code and to improve code clarity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8411) Add bytes count metrics to datanode for ECWorker

2016-10-19 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-8411:

Status: Patch Available  (was: Open)

> Add bytes count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8411
> URL: https://issues.apache.org/jira/browse/HDFS-8411
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: SammiChen
> Attachments: HDFS-8411-001.patch, HDFS-8411-002.patch, 
> HDFS-8411-003.patch, HDFS-8411-004.patch
>
>
> This is a sub-task of HDFS-7674. It counts the amount of data that is read 
> from local or remote datanodes for decoding work, and also the amount of 
> data that is written to local or remote datanodes.
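A rough sketch of what such byte counters could look like, assuming the metrics2 annotations the DataNode already uses elsewhere (class and metric names here are illustrative, not the actual patch):

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Hypothetical additions to the DataNode metrics source for ECWorker.
class ECWorkerMetricsSketch {
  @Metric("Bytes read locally or remotely for EC reconstruction")
  MutableCounterLong ecReconstructionBytesRead;

  @Metric("Bytes written locally or remotely for EC reconstruction")
  MutableCounterLong ecReconstructionBytesWritten;

  void incrBytesRead(long delta) {
    ecReconstructionBytesRead.incr(delta);
  }

  void incrBytesWritten(long delta) {
    ecReconstructionBytesWritten.incr(delta);
  }
}
{code}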



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8411) Add bytes count metrics to datanode for ECWorker

2016-10-19 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-8411:

Attachment: HDFS-8411-004.patch

Rebased and refactored the patch.

> Add bytes count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8411
> URL: https://issues.apache.org/jira/browse/HDFS-8411
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: SammiChen
> Attachments: HDFS-8411-001.patch, HDFS-8411-002.patch, 
> HDFS-8411-003.patch, HDFS-8411-004.patch
>
>
> This is a sub-task of HDFS-7674. It counts the amount of data that is read 
> from local or remote datanodes for decoding work, and also the amount of 
> data that is written to local or remote datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf

2016-10-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588360#comment-15588360
 ] 

Hadoop QA commented on HDFS-11026:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 12 new + 50 unchanged - 0 fixed = 62 total (was 50) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11026 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12834137/blocktokenidentifier-protobuf.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 6acc1ce6805c 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c5573e6 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17215/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17215/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17215/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/17215/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |
