[jira] [Updated] (HDFS-9204) DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated

2015-10-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9204:

Attachment: HDFS-9204.001.patch

Thanks [~jingzhao] for the useful comments. The v1 patch addresses the above comments.

> DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated
> -
>
> Key: HDFS-9204
> URL: https://issues.apache.org/jira/browse/HDFS-9204
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9204.000.patch, HDFS-9204.001.patch
>
>
> This seems to be a regression caused by the merge of the EC feature branch. 
> {{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which was 
> added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
> when creating ReplicationWork.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947618#comment-14947618
 ] 

Zhe Zhang commented on HDFS-9209:
-

Thanks Surendra for catching the issue!

> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9205) Do not sehedule corrupted blocks for replication

2015-10-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947710#comment-14947710
 ] 

Tsz Wo Nicholas Sze commented on HDFS-9205:
---

You are right.  Both hasNext() and next() need to advance the iterators.

> Do not sehedule corrupted blocks for replication
> 
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch, h9205_20151007b.patch
>
>
> Corrupted blocks are, by definition, blocks that cannot be read. As a 
> consequence, they cannot be replicated. In UnderReplicatedBlocks, there is a 
> queue for QUEUE_WITH_CORRUPT_BLOCKS and chooseUnderReplicatedBlocks may choose 
> blocks from it. It seems that scheduling corrupted blocks for replication 
> wastes resources and potentially slows down replication of the higher-priority 
> blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947714#comment-14947714
 ] 

Hudson commented on HDFS-9176:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2438 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2438/])
HDFS-9176. Fix TestDirectoryScanner#testThrottling often fails. (Daniel (lei: 
rev 6dd47d754cb11297c8710a5c318c034abea7a836)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java


> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9211) branch-2 build broken by incorrect version in hadoop-hdfs-native-client/pom.xml

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1494#comment-1494
 ] 

Hadoop QA commented on HDFS-9211:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m  1s | Pre-patch branch-2 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   6m 15s | The applied patch generated  3  
additional warning messages. |
| {color:green}+1{color} | javadoc |  10m 21s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 20s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 18s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | native |   1m 37s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests |   0m 47s | Tests passed in 
hadoop-hdfs-native-client. |
| | |  37m 19s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765469/HDFS-9211-branch-2.001.patch
 |
| Optional Tests | javadoc javac unit |
| git revision | branch-2 / ad1f0f3 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12843/artifact/patchprocess/diffJavacWarnings.txt
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12843/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs-native-client test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12843/artifact/patchprocess/testrun_hadoop-hdfs-native-client.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12843/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12843/console |


This message was automatically generated.

> branch-2 build broken by incorrect version in 
> hadoop-hdfs-native-client/pom.xml 
> 
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml was incorrect, which 
> broke the branch-2 build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9204) DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated

2015-10-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9204:

Status: Patch Available  (was: Open)

> DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated
> -
>
> Key: HDFS-9204
> URL: https://issues.apache.org/jira/browse/HDFS-9204
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9204.000.patch
>
>
> This seems to be a regression caused by the merge of the EC feature branch. 
> {{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which was 
> added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
> when creating ReplicationWork.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9204) DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated

2015-10-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9204:

Attachment: HDFS-9204.000.patch

The v0 patch:
# Calls {{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}} 
when a {{ReplicationWork}} is constructed (see the sketch below).
# Renames {{PendingReplicationWithoutTargets}} to 
{{pendingReplicationWithoutTargets}}.
# In the unit test, asserts that {{pendingReplicationWithoutTargets}} is 
non-negative.
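
For illustration, a minimal, self-contained sketch of the change described 
above, using simplified stand-in classes rather than the real HDFS code:
{code}
// Simplified stand-ins, not the real HDFS classes: the source node's counter
// is bumped when a replication work item is created, before targets are chosen.
class DatanodeDescriptorSketch {
  // renamed from PendingReplicationWithoutTargets in the v0 patch
  private int pendingReplicationWithoutTargets;

  void incrementPendingReplicationWithoutTargets() {
    pendingReplicationWithoutTargets++;
  }

  int getPendingReplicationWithoutTargets() {
    return pendingReplicationWithoutTargets;
  }
}

class ReplicationWorkSketch {
  private final DatanodeDescriptorSketch[] srcNodes;

  ReplicationWorkSketch(DatanodeDescriptorSketch[] srcNodes) {
    this.srcNodes = srcNodes;
    // The call added by HDFS-7128 and dropped in the EC branch merge.
    srcNodes[0].incrementPendingReplicationWithoutTargets();
  }
}
{code}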

> DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated
> -
>
> Key: HDFS-9204
> URL: https://issues.apache.org/jira/browse/HDFS-9204
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9204.000.patch
>
>
> This seems to be a regression caused by the merge of the EC feature branch. 
> {{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which was 
> added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
> when creating ReplicationWork.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947629#comment-14947629
 ] 

Hadoop QA commented on HDFS-9176:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   7m 43s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 47s | There were no new javac warning 
messages. |
| {color:red}-1{color} | release audit |   0m 18s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 21s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 28s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m  3s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 161m 31s | Tests failed in hadoop-hdfs. |
| | | 184m 19s | |
\\
\\
|| Reason || Tests ||
| Timed out tests | 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | org.apache.hadoop.hdfs.server.datanode.TestIncrementalBrVariations |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765261/HDFS-9176.002.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 99e5204 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12837/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12837/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12837/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12837/console |


This message was automatically generated.

> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9145) Tracking methods that hold FSNamesytemLock for too long

2015-10-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9145:

Attachment: HDFS-9145.001.patch

The v1 patch implements the write-lock hold-time tracking with re-entry support. 
The read-lock tracking was removed from this patch, as its holding time should 
be tracked per thread.
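
For illustration, a minimal sketch of the general approach, assuming the patch 
wraps the coarse FSNamesystem write lock and only times the outermost 
acquisition; the class name, threshold, and logging below are illustrative, not 
the actual patch:
{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class WriteLockHoldTracker {
  private final ReentrantReadWriteLock coarseLock = new ReentrantReadWriteLock();
  private final long thresholdMs;
  private long writeLockHeldSinceMs;

  WriteLockHoldTracker(long thresholdMs) {
    this.thresholdMs = thresholdMs;
  }

  void writeLock() {
    coarseLock.writeLock().lock();
    if (coarseLock.getWriteHoldCount() == 1) {   // outermost (non re-entrant) acquisition
      writeLockHeldSinceMs = System.currentTimeMillis();
    }
  }

  void writeUnlock() {
    if (coarseLock.getWriteHoldCount() == 1) {   // about to release the outermost hold
      long heldMs = System.currentTimeMillis() - writeLockHeldSinceMs;
      if (heldMs > thresholdMs) {
        System.err.println("FSNamesystem write lock held for " + heldMs + " ms");
      }
    }
    coarseLock.writeLock().unlock();
  }
}
{code}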

> Tracking methods that hold FSNamesytemLock for too long
> ---
>
> Key: HDFS-9145
> URL: https://issues.apache.org/jira/browse/HDFS-9145
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9145.000.patch, HDFS-9145.001.patch
>
>
> It would be helpful if we had a way to track (or at least log a message) when 
> some operation holds the FSNamesystem lock for a long time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9204) DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated

2015-10-07 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947678#comment-14947678
 ] 

Zhe Zhang commented on HDFS-9204:
-

Thanks Jing for noticing this and Mingliang for the work. I might have 
overwritten the HDFS-7128 change in the git merge.

> DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated
> -
>
> Key: HDFS-9204
> URL: https://issues.apache.org/jira/browse/HDFS-9204
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9204.000.patch, HDFS-9204.001.patch
>
>
> This seems to be a regression caused by the merge of the EC feature branch. 
> {{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which was 
> added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
> when creating ReplicationWork.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947718#comment-14947718
 ] 

Tsz Wo Nicholas Sze commented on HDFS-9205:
---

The failure of TestReadOnlySharedStorage is actually related -- the current 
implementation of read-only storage breaks the corrupt-block definition: it 
treats blocks with read-only replicas but no normal replicas as corrupt.

> Do not schedule corrupt blocks for replication
> --
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch, h9205_20151007b.patch
>
>
> Corrupted blocks are, by definition, blocks that cannot be read. As a 
> consequence, they cannot be replicated. In UnderReplicatedBlocks, there is a 
> queue for QUEUE_WITH_CORRUPT_BLOCKS and chooseUnderReplicatedBlocks may choose 
> blocks from it. It seems that scheduling corrupted blocks for replication 
> wastes resources and potentially slows down replication of the higher-priority 
> blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9199) rename dfs.namenode.replication.min to dfs.replication.min

2015-10-07 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947771#comment-14947771
 ] 

Mingliang Liu commented on HDFS-9199:
-

I'll close this if there is no more input in 3 days. Thanks.

> rename dfs.namenode.replication.min to dfs.replication.min
> --
>
> Key: HDFS-9199
> URL: https://issues.apache.org/jira/browse/HDFS-9199
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Mingliang Liu
>
> dfs.namenode.replication.min should be dfs.replication.min to match the other 
> dfs.replication config knobs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9211) branch-2 build broken by incorrect version in hadoop-hdfs-native-client/pom.xml

2015-10-07 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-9211:
-
Summary: branch-2 build broken by incorrect version in 
hadoop-hdfs-native-client/pom.xml   (was: branch-2 broken by incorrect version 
in hadoop-hdfs-native-client/pom.xml )

> branch-2 build broken by incorrect version in 
> hadoop-hdfs-native-client/pom.xml 
> 
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Payne
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml was incorrect, which 
> broke the branch-2 build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9211) branch-2 broken by incorrect version in hadoop-hdfs-native-client/pom.xml

2015-10-07 Thread Eric Payne (JIRA)
Eric Payne created HDFS-9211:


 Summary: branch-2 broken by incorrect version in 
hadoop-hdfs-native-client/pom.xml 
 Key: HDFS-9211
 URL: https://issues.apache.org/jira/browse/HDFS-9211
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eric Payne


When HDFS-9170 was backported to branch-2, the version in 
hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml was incorrect, which 
broke the branch-2 build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9205:
--
Summary: Do not schedule corrupt blocks for replication  (was: Do not 
sehedule corrupted blocks for replication)

> Do not schedule corrupt blocks for replication
> --
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch, h9205_20151007b.patch
>
>
> Corrupted blocks are, by definition, blocks that cannot be read. As a 
> consequence, they cannot be replicated. In UnderReplicatedBlocks, there is a 
> queue for QUEUE_WITH_CORRUPT_BLOCKS and chooseUnderReplicatedBlocks may choose 
> blocks from it. It seems that scheduling corrupted blocks for replication 
> wastes resources and potentially slows down replication of the higher-priority 
> blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9188) Make block corruption related tests FsDataset-agnostic.

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947751#comment-14947751
 ] 

Hadoop QA commented on HDFS-9188:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   8m  7s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 12 new or modified test files. |
| {color:green}+1{color} | javac |   7m 56s | There were no new javac warning 
messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  1s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 33s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m  4s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 191m  4s | Tests failed in hadoop-hdfs. |
| | | 214m  7s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles |
|   | hadoop.hdfs.server.namenode.TestProcessCorruptBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765096/HDFS-9188.003.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 99e5204 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12840/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12840/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12840/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12840/console |


This message was automatically generated.

> Make block corruption related tests FsDataset-agnostic. 
> 
>
> Key: HDFS-9188
> URL: https://issues.apache.org/jira/browse/HDFS-9188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: HDFS, test
>Affects Versions: 2.7.1
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-9188.000.patch, HDFS-9188.001.patch, 
> HDFS-9188.002.patch, HDFS-9188.003.patch
>
>
> Currently, HDFS does block corruption tests by directly accessing the files 
> stored in the storage directories, which assumes {{FsDatasetImpl}} is the 
> dataset implementation. However, with work like OZone (HDFS-7240) and 
> HDFS-8679, there will be different FsDataset implementations. 
> So we need a general way to run whitebox tests like corrupting blocks and crc 
> files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9204) DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated

2015-10-07 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947531#comment-14947531
 ] 

Jing Zhao commented on HDFS-9204:
-

The patch looks good to me. A couple of minor comments:
# We can add an assertion here to make sure there is exactly one element in 
srcNodes.
{code}
getSrcNodes()[0].incrementPendingReplicationWithoutTargets();
{code}
# We can use Whitebox to access {{pendingReplicationWithoutTargets}} instead of 
changing its access modifier.

+1 after addressing the comments.
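
For illustration, a minimal sketch of the two suggestions above, assuming 
Mockito's {{Whitebox}} reflection helper is available on the test classpath; 
the fake datanode class below is an illustrative stand-in, not the real 
DatanodeDescriptor:
{code}
import org.junit.Assert;
import org.mockito.internal.util.reflection.Whitebox;

public class PendingReplicationReviewSketch {

  static class FakeDatanode {
    private int pendingReplicationWithoutTargets;
    void incrementPendingReplicationWithoutTargets() {
      pendingReplicationWithoutTargets++;
    }
  }

  public static void main(String[] args) {
    FakeDatanode[] srcNodes = { new FakeDatanode() };

    // Suggestion 1: assert there is exactly one element in srcNodes before the call.
    assert srcNodes.length == 1 : "expected exactly one source node";
    srcNodes[0].incrementPendingReplicationWithoutTargets();

    // Suggestion 2: read the private counter via Whitebox instead of widening
    // its access modifier for the test.
    int pending = (Integer) Whitebox.getInternalState(srcNodes[0],
        "pendingReplicationWithoutTargets");
    Assert.assertTrue("counter should be non-negative", pending >= 0);
  }
}
{code}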

> DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated
> -
>
> Key: HDFS-9204
> URL: https://issues.apache.org/jira/browse/HDFS-9204
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9204.000.patch
>
>
> This seems to be a regression caused by the merge of the EC feature branch. 
> {{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which was 
> added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
> when creating ReplicationWork.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-07 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-9210:


 Summary: Fix some misuse of %n in VolumeScanner#printStats
 Key: HDFS-9210
 URL: https://issues.apache.org/jira/browse/HDFS-9210
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor


Found 2 extra "%n" strings in the VolumeScanner report, and the lines below are 
not well formatted. This JIRA is opened to fix the format issue.

{code}

Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
with base path /hadoop/hdfs/data%nBytes verified in last hour   :   
  136882014
Blocks scanned in current period  : 
5
Blocks scanned since restart  : 
5
Block pool scans since restart: 
0
Block scan errors since restart   : 
0
Hours until next block pool scan  : 
  476.000
Last block scanned: 
BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
More blocks to scan in period : 
false
%n
{code}
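
For illustration, a minimal sketch of the underlying formatting bug, using 
illustrative values rather than the real VolumeScanner fields: "%n" only 
expands to a line separator inside a format string, so appending it to a plain 
String leaves a literal "%n" in the output.
{code}
public class PercentNSketch {
  public static void main(String[] args) {
    String volume = "DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7";
    String basePath = "/hadoop/hdfs/data";

    // Buggy pattern: the literal "%n" ends up in the report, gluing the next
    // line ("Bytes verified in last hour") onto the base path.
    String buggy = "Block scanner information for volume " + volume
        + " with base path " + basePath + "%n";

    // Fixed pattern: route the text through String.format (or append
    // System.lineSeparator()) so the separator is actually emitted.
    String fixed = String.format(
        "Block scanner information for volume %s with base path %s%n",
        volume, basePath);

    System.out.print(buggy);
    System.out.print(fixed);
  }
}
{code}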




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-07 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9210:
-
Affects Version/s: 2.7.1

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>
> Found 2 extra "%n" strings in the VolumeScanner report, and the lines below 
> are not well formatted. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-07 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9210:
-
Component/s: datanode

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>
> Found 2 extra "%n" strings in the VolumeScanner report, and the lines below 
> are not well formatted. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947809#comment-14947809
 ] 

Hudson commented on HDFS-9209:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #504 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/504/])
HDFS-9209. Erasure coding: Add apache license header in (jing9: rev 
fde729feeb67af18f7d9b1cd156750ec9e8d3304)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java


> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947657#comment-14947657
 ] 

Hadoop QA commented on HDFS-9209:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   8m 54s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   9m  4s | There were no new javac warning 
messages. |
| {color:red}-1{color} | release audit |   0m 19s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 38s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 45s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 50s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   1m 18s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 230m 46s | Tests failed in hadoop-hdfs. |
| | | 257m 14s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.TestGenericRefresh |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| Timed out tests | org.apache.hadoop.hdfs.TestDFSUpgradeFromImage |
|   | org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765423/HDFS-9209.patch |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 99e5204 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12836/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12836/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12836/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12836/console |


This message was automatically generated.

> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947692#comment-14947692
 ] 

Hudson commented on HDFS-9176:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #503 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/503/])
HDFS-9176. Fix TestDirectoryScanner#testThrottling often fails. (Daniel (lei: 
rev 6dd47d754cb11297c8710a5c318c034abea7a836)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java


> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-07 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947759#comment-14947759
 ] 

Zhe Zhang commented on HDFS-9205:
-

Thanks Nicholas for the work. A few comments:
# bq. As a consequence, they cannot be replicated
Just to clarify, do you mean that even without the patch, those blocks won't be 
re-replicated, even though {{chooseUnderReplicatedBlocks}} returns them? Or are 
they re-replicated in the current logic even though they should not be (IIUC 
that's the case)?
# I agree that corrupt blocks are unreadable by HDFS client. But is there a use 
case for an admin to list corrupt blocks and reason about them by accessing the 
local {{blk_}} (and metadata) files? For example, there's a chance (although 
very rare) that the replica is intact and only the metadata file is corrupt.
# If we do want to save the replication work for corrupt blocks, should we get 
rid of {{QUEUE_WITH_CORRUPT_BLOCKS}} altogether?

Nit:
# This line of comment should be updated:
{code}
// and 5 blocks from QUEUE_WITH_CORRUPT_BLOCKS.
{code}
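
For illustration, a minimal sketch of the behavior being proposed, assuming the 
usual five priority queues with the corrupt queue last; this is a simplified 
stand-in, not the actual UnderReplicatedBlocks code:
{code}
import java.util.ArrayList;
import java.util.List;

public class SkipCorruptQueueSketch {
  static final int LEVEL = 5;                       // number of priority queues
  static final int QUEUE_WITH_CORRUPT_BLOCKS = 4;   // lowest priority, corrupt blocks

  static List<String> chooseUnderReplicatedBlocks(
      List<List<String>> priorityQueues, int blocksToProcess) {
    List<String> chosen = new ArrayList<>();
    for (int priority = 0; priority < LEVEL && chosen.size() < blocksToProcess;
         priority++) {
      if (priority == QUEUE_WITH_CORRUPT_BLOCKS) {
        break;   // do not schedule corrupt blocks: there is no readable replica to copy
      }
      for (String block : priorityQueues.get(priority)) {
        if (chosen.size() >= blocksToProcess) {
          break;
        }
        chosen.add(block);
      }
    }
    return chosen;
  }
}
{code}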

> Do not schedule corrupt blocks for replication
> --
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch, h9205_20151007b.patch, 
> h9205_20151008.patch
>
>
> Corrupted blocks are, by definition, blocks that cannot be read. As a 
> consequence, they cannot be replicated. In UnderReplicatedBlocks, there is a 
> queue for QUEUE_WITH_CORRUPT_BLOCKS and chooseUnderReplicatedBlocks may choose 
> blocks from it. It seems that scheduling corrupted blocks for replication 
> wastes resources and potentially slows down replication of the higher-priority 
> blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9205) Do not sehedule corrupted blocks for replication

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947497#comment-14947497
 ] 

Hadoop QA commented on HDFS-9205:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m 19s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   8m 58s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 56s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 23s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 35s | The applied patch generated  4 
new checkstyle issues (total was 198, now 199). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 46s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 39s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 58s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 39s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 232m 18s | Tests failed in hadoop-hdfs. |
| | | 284m 38s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.datanode.TestReadOnlySharedStorage |
|   | hadoop.hdfs.TestDataTransferKeepalive |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765399/h9205_20151007b.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 99e5204 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12835/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12835/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12835/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12835/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12835/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12835/console |


This message was automatically generated.

> Do not sehedule corrupted blocks for replication
> 
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch, h9205_20151007b.patch
>
>
> Corrupted blocks are, by definition, blocks that cannot be read. As a 
> consequence, they cannot be replicated. In UnderReplicatedBlocks, there is a 
> queue for QUEUE_WITH_CORRUPT_BLOCKS and chooseUnderReplicatedBlocks may choose 
> blocks from it. It seems that scheduling corrupted blocks for replication 
> wastes resources and potentially slows down replication of the higher-priority 
> blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9205) Do not sehedule corrupted blocks for replication

2015-10-07 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947566#comment-14947566
 ] 

Jing Zhao commented on HDFS-9205:
-

I think it makes sense to skip scheduling replication for corrupted blocks. For 
the patch, one comment: the for loop in {{next}} should actually be in the 
{{hasNext}} method in the following code. Other than that, the patch looks good 
to me.
{code}
  @Override
  public BlockInfo next() {
for(; !b.hasNext() && q.hasNext(); ) {
  b = q.next().iterator();
}
return b.next();
  }

  @Override
  public boolean hasNext() {
return b.hasNext();
  }
{code}
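
For illustration, a minimal sketch of the suggested fix, with the advancing 
loop moved into {{hasNext()}}; the class is a simplified stand-in for the 
nested iterator in the patch:
{code}
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

class NestedBlockIteratorSketch<T> implements Iterator<T> {
  private final Iterator<List<T>> q;   // iterator over the priority queues
  private Iterator<T> b;               // iterator within the current queue

  NestedBlockIteratorSketch(Iterator<List<T>> queues) {
    this.q = queues;
    this.b = Collections.<T>emptyList().iterator();
  }

  @Override
  public boolean hasNext() {
    // Advance to the next non-empty queue here, so hasNext() cannot report
    // false while a later queue still has elements.
    while (!b.hasNext() && q.hasNext()) {
      b = q.next().iterator();
    }
    return b.hasNext();
  }

  @Override
  public T next() {
    hasNext();          // make sure b points at the next element
    return b.next();
  }
}
{code}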

> Do not sehedule corrupted blocks for replication
> 
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch, h9205_20151007b.patch
>
>
> Corrupted blocks are, by definition, blocks that cannot be read. As a 
> consequence, they cannot be replicated. In UnderReplicatedBlocks, there is a 
> queue for QUEUE_WITH_CORRUPT_BLOCKS and chooseUnderReplicatedBlocks may choose 
> blocks from it. It seems that scheduling corrupted blocks for replication 
> wastes resources and potentially slows down replication of the higher-priority 
> blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9170) Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client

2015-10-07 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947690#comment-14947690
 ] 

Eric Payne commented on HDFS-9170:
--

[~wheat9], the backport of this patch to branch-2 broke the build due to an 
incorrect version in hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml. 
Please see HDFS-9211.

> Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client
> ---
>
> Key: HDFS-9170
> URL: https://issues.apache.org/jira/browse/HDFS-9170
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.8.0
>
> Attachments: HDFS-9170.000.patch, HDFS-9170.001.patch, 
> HDFS-9170.002.patch, HDFS-9170.003.patch, HDFS-9170.004.patch
>
>
> After HDFS-6200, the Java implementation of hdfs-client has been moved to a 
> separate hadoop-hdfs-client module.
> libhdfs, fuse-dfs and libwebhdfs still reside in the hadoop-hdfs module. 
> Ideally these modules should reside in hadoop-hdfs-client. However, to 
> write unit tests for these components, it is often necessary to run 
> MiniDFSCluster, which resides in the hadoop-hdfs module.
> This jira is to discuss how these native modules should be laid out after 
> HDFS-6200.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9211) branch-2 build broken by incorrect version in hadoop-hdfs-native-client/pom.xml

2015-10-07 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne reassigned HDFS-9211:


Assignee: Eric Payne

> branch-2 build broken by incorrect version in 
> hadoop-hdfs-native-client/pom.xml 
> 
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Payne
>Assignee: Eric Payne
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml was incorrect, which 
> broke the branch-2 build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9137) DeadLock between DataNode#refreshVolumes and BPOfferService#registrationSucceeded

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947735#comment-14947735
 ] 

Hadoop QA commented on HDFS-9137:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 23s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  1s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 53s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 35s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 17s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 23s | The applied patch generated  1 
new checkstyle issues (total was 142, now 142). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 31s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 189m 28s | Tests failed in hadoop-hdfs. |
| | | 235m 50s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestRecoverStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765432/HDFSS-9137.02.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 99e5204 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12838/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12838/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12838/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12838/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12838/console |


This message was automatically generated.

> DeadLock between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded 
> --
>
> Key: HDFS-9137
> URL: https://issues.apache.org/jira/browse/HDFS-9137
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-9137.00.patch, 
> HDFS-9137.01-WithPreservingRootExceptions.patch, HDFSS-9137.02.patch
>
>
> I can see that the code flow between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded could cause a deadlock.
> In practice the situation may be rare, as it requires a user to call 
> refreshVolumes at the time of DN registration with the NN, but the issue can 
> happen.
>  Reason for the deadlock:
>   1) refreshVolumes is called with the DN lock held, and at the end it also 
> triggers a block report. In the block report call, 
> BPServiceActor#triggerBlockReport calls toString on bpos, which takes the 
> read lock on bpos.
>  So: DN lock, then bpos lock.
> 2) BPOfferService#registrationSucceeded takes the write lock on bpos and 
>  calls dn.bpRegistrationSucceeded, which is again a synchronized call on the DN.
> So: bpos lock, then DN lock.
> This can clearly create a deadlock.
> I think a simple fix could be to move the triggerBlockReport call outside the 
> DN lock; I feel that call may not really be needed inside the DN lock.
> Thoughts?
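
For illustration, a minimal sketch of the two lock orders described above; the 
classes and lock objects are simplified stand-ins for the DataNode monitor and 
the BPOfferService read/write lock, not the real code:
{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockOrderSketch {
  static final Object dnLock = new Object();                        // DataNode monitor
  static final ReentrantReadWriteLock bposLock = new ReentrantReadWriteLock();

  // Path 1 (refreshVolumes): DN lock, then bpos read lock.
  static void refreshVolumesPath() {
    synchronized (dnLock) {
      bposLock.readLock().lock();    // e.g. bpos.toString() in triggerBlockReport
      try {
        // ... build the block report trigger ...
      } finally {
        bposLock.readLock().unlock();
      }
    }
  }

  // Path 2 (registrationSucceeded): bpos write lock, then DN lock.
  static void registrationSucceededPath() {
    bposLock.writeLock().lock();
    try {
      synchronized (dnLock) {        // dn.bpRegistrationSucceeded is synchronized on DN
        // ... record the successful registration ...
      }
    } finally {
      bposLock.writeLock().unlock();
    }
  }
  // If the two paths run concurrently, each can hold its first lock while
  // waiting for the other's: the classic lock-ordering deadlock. The suggested
  // fix is to trigger the block report outside the DN lock.
}
{code}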



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9208) Disabling atime may fail clients like distCp

2015-10-07 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947503#comment-14947503
 ] 

Kihwal Lee commented on HDFS-9208:
--

I also like option #3.

> Disabling atime may fail clients like distCp
> 
>
> Key: HDFS-9208
> URL: https://issues.apache.org/jira/browse/HDFS-9208
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Mingliang Liu
>
> When atime is disabled, {{setTimes()}} throws an exception if the passed-in 
> atime is not -1.  But since atime is 0, distCp fails when it tries to set the 
> mtime and atime. 
> There are several options:
> 1) make distCp check for 0 atime and call {{setTimes()}} with -1. I am not 
> very enthusiastic about it.
> 2) make NN also accept 0 atime in addition to -1, when the atime support is 
> disabled.
> 3) support setting mtime & atime regardless of the atime support.  The main 
> reason why atime is disabled is to avoid edit logging/syncing during 
> {{getBlockLocations()}} read calls. Explicit setting can be allowed.
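
For illustration, a minimal sketch of the NameNode-side check and how options 2 
and 3 would relax it; the class and method names are illustrative, not the 
actual FSNamesystem code:
{code}
public class SetTimesPolicySketch {
  private final boolean atimeSupportEnabled;   // e.g. access-time precision > 0

  SetTimesPolicySketch(boolean atimeSupportEnabled) {
    this.atimeSupportEnabled = atimeSupportEnabled;
  }

  // Current behavior: only -1 ("leave atime alone") is accepted when the atime
  // support is disabled, so distCp's atime of 0 fails.
  void setTimesCurrent(long mtime, long atime) {
    if (!atimeSupportEnabled && atime != -1) {
      throw new UnsupportedOperationException("Access time is not configured");
    }
    applyTimes(mtime, atime);
  }

  // Option 2: additionally accept 0 when atime support is disabled.
  void setTimesOption2(long mtime, long atime) {
    if (!atimeSupportEnabled && atime != -1 && atime != 0) {
      throw new UnsupportedOperationException("Access time is not configured");
    }
    applyTimes(mtime, atime);
  }

  // Option 3: accept any explicit atime, since the cost being avoided is the
  // edit logging on getBlockLocations() reads, not on explicit setTimes().
  void setTimesOption3(long mtime, long atime) {
    applyTimes(mtime, atime);
  }

  private void applyTimes(long mtime, long atime) {
    // ... persist mtime/atime on the inode ...
  }
}
{code}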



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-07 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-9176:

   Resolution: Fixed
Fix Version/s: 2.8.0
   3.0.0
   Status: Resolved  (was: Patch Available)

+1.  Thanks for the bug fix, [~templedf].

> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9211) branch-2 build broken by incorrect version in hadoop-hdfs-native-client/pom.xml

2015-10-07 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-9211:
-
Status: Patch Available  (was: Open)

> branch-2 build broken by incorrect version in 
> hadoop-hdfs-native-client/pom.xml 
> 
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml was incorrect, which 
> broke the branch-2 build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9209:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

The failed tests and release audit warning are unrelated. I've committed the 
patch to trunk. Thanks for the contribution, [~surendrasingh]!

> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947719#comment-14947719
 ] 

Hudson commented on HDFS-9209:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8590 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8590/])
HDFS-9209. Erasure coding: Add apache license header in (jing9: rev 
fde729feeb67af18f7d9b1cd156750ec9e8d3304)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947661#comment-14947661
 ] 

Hudson commented on HDFS-9176:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8589 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8589/])
HDFS-9176. Fix TestDirectoryScanner#testThrottling often fails. (Daniel (lei: 
rev 6dd47d754cb11297c8710a5c318c034abea7a836)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java


> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9211) branch-2 build broken by incorrect version in hadoop-hdfs-native-client/pom.xml

2015-10-07 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-9211:
-
Attachment: HDFS-9211-branch-2.001.patch

The error is as follows:
{noformat}
The project org.apache.hadoop:hadoop-hdfs-native-client:3.0.0-SNAPSHOT 
(.../hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml) has 1 error
{noformat}

> branch-2 build broken by incorrect version in 
> hadoop-hdfs-native-client/pom.xml 
> 
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-07 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-9205:
--
Attachment: h9205_20151008.patch

h9205_20151008.patch: treats blocks with read-only replicas but no normal 
replicas as the highest priority for replication.

> Do not schedule corrupt blocks for replication
> --
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch, h9205_20151007b.patch, 
> h9205_20151008.patch
>
>
> Corrupted blocks by definition are blocks that cannot be read. As a 
> consequence, they cannot be replicated.  In UnderReplicatedBlocks, there is a 
> queue for QUEUE_WITH_CORRUPT_BLOCKS, and chooseUnderReplicatedBlocks may 
> choose blocks from it.  It seems that scheduling corrupted blocks for 
> replication wastes resources and potentially slows down replication of the 
> higher-priority blocks.
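For illustration only, below is a minimal, self-contained sketch of the 
scheduling idea described above: when picking under-replicated blocks, the 
iteration over the priority queues stops before the corrupt-block queue, so 
blocks with only unreadable replicas are never handed out as replication work. 
The class, queue layout and method names are simplified placeholders, not the 
real UnderReplicatedBlocks code or the attached patch.

{code}
import java.util.ArrayList;
import java.util.List;

public class CorruptQueueSkipSketch {
  // Queue 0 = highest priority ... last queue = blocks with only corrupt replicas.
  static final int QUEUE_WITH_CORRUPT_BLOCKS = 4;

  private final List<List<String>> queues = new ArrayList<>();

  public CorruptQueueSkipSketch() {
    for (int i = 0; i <= QUEUE_WITH_CORRUPT_BLOCKS; i++) {
      queues.add(new ArrayList<String>());
    }
  }

  void add(int priority, String blockId) {
    queues.get(priority).add(blockId);
  }

  /** Pick up to n blocks to replicate, never from the corrupt-block queue. */
  List<String> chooseUnderReplicatedBlocks(int n) {
    List<String> chosen = new ArrayList<>();
    for (int prio = 0; prio < QUEUE_WITH_CORRUPT_BLOCKS && chosen.size() < n; prio++) {
      for (String block : queues.get(prio)) {
        if (chosen.size() >= n) {
          break;
        }
        chosen.add(block);
      }
    }
    return chosen;
  }

  public static void main(String[] args) {
    CorruptQueueSkipSketch q = new CorruptQueueSkipSketch();
    q.add(0, "blk_1");                          // highest priority block
    q.add(QUEUE_WITH_CORRUPT_BLOCKS, "blk_2");  // block with only corrupt replicas
    System.out.println(q.chooseUnderReplicatedBlocks(10)); // prints [blk_1]
  }
}
{code}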



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9206) Inconsistent default value of dfs.datanode.stripedread.buffer.size

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946412#comment-14946412
 ] 

Hudson commented on HDFS-9206:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8584 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8584/])
HDFS-9206. Inconsistent default value of (jing9: rev 
8e53311ca20547cdd6658a77f3cdf05e6212855a)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Inconsistent default value of dfs.datanode.stripedread.buffer.size
> --
>
> Key: HDFS-9206
> URL: https://issues.apache.org/jira/browse/HDFS-9206
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-9206.patch
>
>
> {noformat}
> DFS_DATANODE_STRIPED_READ_BUFFER_SIZE_DEFAULT = 64 * 1024;
> 
>   dfs.datanode.stripedread.buffer.size
>   262144
>   Datanode striped read buffer size.
>   
> 
> {noformat}
> We previously used a 256k cellSize; now we have changed the default value to 64k.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9170) Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client

2015-10-07 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9170:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks Jing for the reviews.

> Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client
> ---
>
> Key: HDFS-9170
> URL: https://issues.apache.org/jira/browse/HDFS-9170
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.8.0
>
> Attachments: HDFS-9170.000.patch, HDFS-9170.001.patch, 
> HDFS-9170.002.patch, HDFS-9170.003.patch, HDFS-9170.004.patch
>
>
> After HDFS-6200 the Java implementation of hdfs-client has been moved to a 
> separate hadoop-hdfs-client module.
> libhdfs, fuse-dfs and libwebhdfs still reside in the hadoop-hdfs module. 
> Ideally these modules should reside in hadoop-hdfs-client. However, to 
> write unit tests for these components, it is often necessary to run 
> MiniDFSCluster, which resides in the hadoop-hdfs module.
> This jira is to discuss how these native modules should be laid out after 
> HDFS-6200.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9205) Do not schedule corrupted blocks for replication

2015-10-07 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-9205:
--
Attachment: h9205_20151007.patch

h9205_20151007.patch: do not choose corrupted blocks.

> Do not schedule corrupted blocks for replication
> 
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch
>
>
> Corrupted blocks by definition are blocks that cannot be read. As a 
> consequence, they cannot be replicated.  In UnderReplicatedBlocks, there is a 
> queue for QUEUE_WITH_CORRUPT_BLOCKS, and chooseUnderReplicatedBlocks may 
> choose blocks from it.  It seems that scheduling corrupted blocks for 
> replication wastes resources and potentially slows down replication of the 
> higher-priority blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9206) Inconsistent default value of dfs.datanode.stripedread.buffer.size

2015-10-07 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9206:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

The failed tests are unrelated. I've committed this to trunk. Thanks for the 
contribution, Walter! 

> Inconsistent default value of dfs.datanode.stripedread.buffer.size
> --
>
> Key: HDFS-9206
> URL: https://issues.apache.org/jira/browse/HDFS-9206
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HDFS-9206.patch
>
>
> {noformat}
> DFS_DATANODE_STRIPED_READ_BUFFER_SIZE_DEFAULT = 64 * 1024;
> 
>   dfs.datanode.stripedread.buffer.size
>   262144
>   Datanode striped read buffer size.
>   
> 
> {noformat}
> We previously used a 256k cellSize; now we have changed the default value to 64k.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9206) Inconsistent default value of dfs.datanode.stripedread.buffer.size

2015-10-07 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-9206:

Priority: Minor  (was: Trivial)

> Inconsistent default value of dfs.datanode.stripedread.buffer.size
> --
>
> Key: HDFS-9206
> URL: https://issues.apache.org/jira/browse/HDFS-9206
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-9206.patch
>
>
> {noformat}
> DFS_DATANODE_STRIPED_READ_BUFFER_SIZE_DEFAULT = 64 * 1024;
> 
>   dfs.datanode.stripedread.buffer.size
>   262144
>   Datanode striped read buffer size.
>   
> 
> {noformat}
> We previously used a 256k cellSize; now we have changed the default value to 64k.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2015-10-07 Thread Ajith S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajith S updated HDFS-4167:
--
Status: Open  (was: Patch Available)

> Add support for restoring/rolling back to a snapshot
> 
>
> Key: HDFS-4167
> URL: https://issues.apache.org/jira/browse/HDFS-4167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Suresh Srinivas
>Assignee: Ajith S
> Attachments: HDFS-4167.000.patch, HDFS-4167.001.patch, 
> HDFS-4167.002.patch, HDFS-4167.003.patch, HDFS-4167.004.patch, 
> HDFS-4167.05.patch, HDFS-4167.06.patch
>
>
> This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9182) Cleanup the findbugs and other issues after HDFS EC merged to trunk.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946391#comment-14946391
 ] 

Hudson commented on HDFS-9182:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #1228 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1228/])
HDFS-9182. Cleanup the findbugs and other issues after HDFS EC merged to 
(umamahesh: rev 8b7339312cb06b7e021f8f9ea6e3a20ebf009af3)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml


> Cleanup the findbugs and other issues after HDFS EC merged to trunk.
> 
>
> Key: HDFS-9182
> URL: https://issues.apache.org/jira/browse/HDFS-9182
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Uma Maheswara Rao G
>Priority: Critical
> Attachments: HDFSS-9182.00.patch, HDFSS-9182.01.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/patchReleaseAuditProblems.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8802) dfs.checksum.type is not described in hdfs-default.xml

2015-10-07 Thread Gururaj Shetty (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946424#comment-14946424
 ] 

Gururaj Shetty commented on HDFS-8802:
--

Hi [~ozawa]

Kindly review the patch and let me know if any changes need to be done.

Regards,
Gururaj

> dfs.checksum.type is not described in hdfs-default.xml
> --
>
> Key: HDFS-8802
> URL: https://issues.apache.org/jira/browse/HDFS-8802
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Tsuyoshi Ozawa
>Assignee: Gururaj Shetty
> Attachments: HDFS-8802.patch, HDFS-8802_01.patch, HDFS-8802_02.patch
>
>
> It's a good time to check the other configurations in hdfs-default.xml here as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9205) Do not schedule corrupted blocks for replication

2015-10-07 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-9205:
--
Status: Patch Available  (was: Open)

> Do not schedule corrupted blocks for replication
> 
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: h9205_20151007.patch
>
>
> Corrupted blocks by definition are blocks that cannot be read. As a 
> consequence, they cannot be replicated.  In UnderReplicatedBlocks, there is a 
> queue for QUEUE_WITH_CORRUPT_BLOCKS, and chooseUnderReplicatedBlocks may 
> choose blocks from it.  It seems that scheduling corrupted blocks for 
> replication wastes resources and potentially slows down replication of the 
> higher-priority blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9182) Cleanup the findbugs and other issues after HDFS EC merged to trunk.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946371#comment-14946371
 ] 

Hudson commented on HDFS-9182:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #491 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/491/])
HDFS-9182. Cleanup the findbugs and other issues after HDFS EC merged to 
(umamahesh: rev 8b7339312cb06b7e021f8f9ea6e3a20ebf009af3)
* hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java


> Cleanup the findbugs and other issues after HDFS EC merged to trunk.
> 
>
> Key: HDFS-9182
> URL: https://issues.apache.org/jira/browse/HDFS-9182
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Uma Maheswara Rao G
>Priority: Critical
> Attachments: HDFSS-9182.00.patch, HDFSS-9182.01.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/patchReleaseAuditProblems.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9182) Cleanup the findbugs and other issues after HDFS EC merged to trunk.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946450#comment-14946450
 ] 

Hudson commented on HDFS-9182:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #499 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/499/])
HDFS-9182. Cleanup the findbugs and other issues after HDFS EC merged to 
(umamahesh: rev 8b7339312cb06b7e021f8f9ea6e3a20ebf009af3)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
* hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java


> Cleanup the findbugs and other issues after HDFS EC merged to trunk.
> 
>
> Key: HDFS-9182
> URL: https://issues.apache.org/jira/browse/HDFS-9182
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Uma Maheswara Rao G
>Priority: Critical
> Attachments: HDFSS-9182.00.patch, HDFSS-9182.01.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/patchReleaseAuditProblems.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946418#comment-14946418
 ] 

Rakesh R commented on HDFS-8632:


HDFS-9182 has modified some of the files, so I'm attaching a new patch rebased 
on the latest trunk code.

cc: [~zhz], [~andrew.wang]

> Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8632-05-rebase.patch, HDFS-8632-05.patch, 
> HDFS-8632-HDFS-7285-00.patch, HDFS-8632-HDFS-7285-01.patch, 
> HDFS-8632-HDFS-7285-02.patch, HDFS-8632-HDFS-7285-03.patch, 
> HDFS-8632-HDFS-7285-04.patch
>
>
> I've noticed that some of the erasure coding classes are missing the 
> {{@InterfaceAudience}} annotation. It would be good to identify those classes 
> and add the proper annotation.
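For illustration, a hedged example of the annotation being applied; the class 
below is a made-up placeholder, and only the annotations (from hadoop-common's 
org.apache.hadoop.classification package) are the point:

{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Marks the class as internal to Hadoop so downstream projects do not treat
// it as a public API. ExampleErasureCodingHelper is a hypothetical name used
// only for this example.
@InterfaceAudience.Private
@InterfaceStability.Evolving
public class ExampleErasureCodingHelper {
  // ... implementation elided ...
}
{code}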



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9095) RPC client should fail gracefully when the connection is timed out or reset

2015-10-07 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9095:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-8707
Target Version/s: HDFS-8707
  Status: Resolved  (was: Patch Available)

Committed to the HDFS-8707 branch. Thanks for the reviews.

> RPC client should fail gracefully when the connection is timed out or reset
> ---
>
> Key: HDFS-9095
> URL: https://issues.apache.org/jira/browse/HDFS-9095
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: HDFS-8707
>
> Attachments: HDFS-9095.000.patch, HDFS-9095.001.patch
>
>
> The RPC client should fail gracefully, instead of bailing out, when the 
> connection is timed out or reset. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9170) Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946500#comment-14946500
 ] 

Hudson commented on HDFS-9170:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2435 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2435/])
HDFS-9170. Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client. (wheat9: rev 
3112f263688be6bf830c8386040f000be18f95da)
* hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/expect.h
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/os/windows/thread_local_storage.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_chmod.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs_test.h
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_options.h
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_init.h
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/resources/FindJansson.cmake
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/thread.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_create.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_options.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/exception.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/test/fuse_workload.c
* hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/test_libwebhdfs_write.c
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_libhdfs_threaded.c
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_ops.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_write.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/tree.h
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/common/htable.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/os/windows/inttypes.h
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_getattr.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/exception.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs_test.h
* hadoop-hdfs-project/hadoop-hdfs-native-client/src/config.h.cmake
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_users.h
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/test/test_libhdfs_zerocopy.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/test_native_mini_dfs.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_htable.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_release.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_access.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_truncate.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_flush.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_native_mini_dfs.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_statfs.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/os/thread.h
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_threaded.c
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/thread_local_storage.h
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/CMakeLists.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_create.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/test_libwebhdfs_threaded.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/util/tree.h
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_symlink.c
* hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls.h
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/common/htable.h
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_mknod.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.c
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/test_libhdfs_zerocopy.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/test_libwebhdfs_read.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/util/posix_util.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_write.c
* 

[jira] [Commented] (HDFS-9196) Fix TestWebHdfsContentLength

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946498#comment-14946498
 ] 

Hudson commented on HDFS-9196:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2435 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2435/])
HDFS-9196. Fix TestWebHdfsContentLength. Contributed by Masatake (jing9: rev 
239d119c6707e58c9a5e0099c6d65fe956e95140)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix TestWebHdfsContentLength
> 
>
> Key: HDFS-9196
> URL: https://issues.apache.org/jira/browse/HDFS-9196
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-9196.001.patch
>
>
> {quote}
> Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 181.278 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
> testPutOp(org.apache.hadoop.hdfs.web.TestWebHdfsContentLength)  Time elapsed: 
> 60.05 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.web.TestWebHdfsContentLength.testPutOp(TestWebHdfsContentLength.java:116)
> testPutOpWithRedirect(org.apache.hadoop.hdfs.web.TestWebHdfsContentLength)  
> Time elapsed: 0.01 sec  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<[chunked]> but was:<[0]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.web.TestWebHdfsContentLength.testPutOpWithRedirect(TestWebHdfsContentLength.java:130)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9207) Move the implementation to the hdfs-native-client module

2015-10-07 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-9207:


 Summary: Move the implementation to the hdfs-native-client module
 Key: HDFS-9207
 URL: https://issues.apache.org/jira/browse/HDFS-9207
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai


The implementation of libhdfspp should be moved to the new hdfs-native-client 
module as HDFS-9170 has landed in trunk and branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9206) Inconsistent default value of dfs.datanode.stripedread.buffer.size

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946499#comment-14946499
 ] 

Hudson commented on HDFS-9206:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2435 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2435/])
HDFS-9206. Inconsistent default value of (jing9: rev 
8e53311ca20547cdd6658a77f3cdf05e6212855a)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


> Inconsistent default value of dfs.datanode.stripedread.buffer.size
> --
>
> Key: HDFS-9206
> URL: https://issues.apache.org/jira/browse/HDFS-9206
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Walter Su
>Assignee: Walter Su
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-9206.patch
>
>
> {noformat}
> DFS_DATANODE_STRIPED_READ_BUFFER_SIZE_DEFAULT = 64 * 1024;
> 
>   dfs.datanode.stripedread.buffer.size
>   262144
>   Datanode striped read buffer size.
>   
> 
> {noformat}
> We previously used a 256k cellSize; now we have changed the default value to 64k.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9182) Cleanup the findbugs and other issues after HDFS EC merged to trunk.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946496#comment-14946496
 ] 

Hudson commented on HDFS-9182:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2435 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2435/])
HDFS-9182. Cleanup the findbugs and other issues after HDFS EC merged to 
(umamahesh: rev 8b7339312cb06b7e021f8f9ea6e3a20ebf009af3)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java


> Cleanup the findbugs and other issues after HDFS EC merged to trunk.
> 
>
> Key: HDFS-9182
> URL: https://issues.apache.org/jira/browse/HDFS-9182
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Uma Maheswara Rao G
>Priority: Critical
> Attachments: HDFSS-9182.00.patch, HDFSS-9182.01.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs-client.html
> https://builds.apache.org/job/PreCommit-HDFS-Build/12754/artifact/patchprocess/patchReleaseAuditProblems.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9196) Fix TestWebHdfsContentLength

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946411#comment-14946411
 ] 

Hudson commented on HDFS-9196:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8584 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8584/])
HDFS-9196. Fix TestWebHdfsContentLength. Contributed by Masatake (jing9: rev 
239d119c6707e58c9a5e0099c6d65fe956e95140)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java


> Fix TestWebHdfsContentLength
> 
>
> Key: HDFS-9196
> URL: https://issues.apache.org/jira/browse/HDFS-9196
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HDFS-9196.001.patch
>
>
> {quote}
> Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 181.278 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
> testPutOp(org.apache.hadoop.hdfs.web.TestWebHdfsContentLength)  Time elapsed: 
> 60.05 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.web.TestWebHdfsContentLength.testPutOp(TestWebHdfsContentLength.java:116)
> testPutOpWithRedirect(org.apache.hadoop.hdfs.web.TestWebHdfsContentLength)  
> Time elapsed: 0.01 sec  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<[chunked]> but was:<[0]>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.web.TestWebHdfsContentLength.testPutOpWithRedirect(TestWebHdfsContentLength.java:130)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8632) Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8632:
---
Attachment: HDFS-8632-05-rebase.patch

> Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8632-05-rebase.patch, HDFS-8632-05.patch, 
> HDFS-8632-HDFS-7285-00.patch, HDFS-8632-HDFS-7285-01.patch, 
> HDFS-8632-HDFS-7285-02.patch, HDFS-8632-HDFS-7285-03.patch, 
> HDFS-8632-HDFS-7285-04.patch
>
>
> I've noticed that some of the erasure coding classes are missing the 
> {{@InterfaceAudience}} annotation. It would be good to identify those classes 
> and add the proper annotation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9170) Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946483#comment-14946483
 ] 

Hudson commented on HDFS-9170:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8585 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8585/])
HDFS-9170. Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client. (wheat9: rev 
3112f263688be6bf830c8386040f000be18f95da)
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_flush.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_access.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/tree.h
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/posix_util.h
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/test/fuse_workload.h
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_users.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/platform.h
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_open.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/posix_util.c
* hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_stat_struct.h
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_rename.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_flush.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_trash.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/thread.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_stat_struct.h
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.h
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/os/windows/thread.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/exception.h
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_native_mini_dfs.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_query.h
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_query.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/test/test_libhdfs_zerocopy.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_users.h
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/test/test_fuse_dfs.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_json_parser.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_symlink.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_create.c
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_threaded.c
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/CMakeLists.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls.h
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/thread_local_storage.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_statfs.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_web.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/contrib/libwebhdfs/src/hdfs_http_client.h
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_stat_struct.c
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_libhdfs_threaded.c
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_file_handle.h
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_dfs.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.h
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_create.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_impls_mknod.c
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_impls_readdir.c
* hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.h
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/common/htable.c
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/windows/thread_local_storage.c
* 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_init.c
* hadoop-hdfs-project/hadoop-hdfs-native-client/src/config.h.cmake
* hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/platform.h
* hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
* 

[jira] [Commented] (HDFS-9159) [OIV] : return value of the command is not correct if invalid value specified in "-p (processor)" option

2015-10-07 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946524#comment-14946524
 ] 

Vinayakumar B commented on HDFS-9159:
-

Latest patch looks good to me.
+1

> [OIV] : return value of the command is not correct if invalid value specified 
> in "-p (processor)" option
> 
>
> Key: HDFS-9159
> URL: https://issues.apache.org/jira/browse/HDFS-9159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-9159_01.patch, HDFS-9159_02.patch, 
> HDFS-9159_03.patch
>
>
> The return value of the OIV command is not correct if an invalid value is 
> specified in the "-p (processor)" option; this needs to return an error to 
> the user.
> The code change will be in the switch statement of:
> {code}
>  try (PrintStream out = outputFile.equals("-") ?
> System.out : new PrintStream(outputFile, "UTF-8")) {
>   switch (processor) {
> {code}
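A standalone sketch of the fix idea follows: an unrecognized processor value 
falls into a default case that prints an error and makes the command return a 
non-zero value instead of silently succeeding. The processor names and the 
run/main wrapper are illustrative only and are not the actual 
OfflineImageViewer code or the attached patches.

{code}
import java.io.PrintStream;

public class ProcessorOptionSketch {
  static int run(String processor, String outputFile) throws Exception {
    try (PrintStream out = outputFile.equals("-")
        ? System.out : new PrintStream(outputFile, "UTF-8")) {
      switch (processor) {
        case "XML":
          out.println("... run XML processor ...");
          return 0;
        case "FileDistribution":
          out.println("... run FileDistribution processor ...");
          return 0;
        default:
          // Invalid processor: report it and propagate a failure return value.
          System.err.println("Invalid processor specified: " + processor);
          return -1;
      }
    }
  }

  public static void main(String[] args) throws Exception {
    System.exit(run("bogus", "-"));  // prints an error and exits non-zero
  }
}
{code}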



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9181) Better handling of exceptions thrown during upgrade shutdown

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946362#comment-14946362
 ] 

Hadoop QA commented on HDFS-9181:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 48s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 54s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 33s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 27s | The applied patch generated  2 
new checkstyle issues (total was 142, now 142). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 33s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 201m  1s | Tests failed in hadoop-hdfs. |
| | | 247m 53s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHdfsContentLength |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765309/HDFS-9181.003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 1bca1bb |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12824/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12824/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12824/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12824/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12824/console |


This message was automatically generated.

> Better handling of exceptions thrown during upgrade shutdown
> 
>
> Key: HDFS-9181
> URL: https://issues.apache.org/jira/browse/HDFS-9181
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-9181.002.patch, HDFS-9181.003.patch
>
>
> Previously in HDFS-7533, a bug was fixed by suppressing exceptions during 
> upgrade shutdown. That may be appropriate as a temporary fix, but it would be 
> better if the exception were handled in some other way.
> One way to handle it is by emitting a warning message. There could be 
> other ways to handle it. This jira is created to discuss how to handle this 
> case better.
> Thanks to [~templedf] for bringing this up.
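As a rough sketch of the warning-message option (the Stoppable interface and 
the shutdown call below are placeholders, assuming nothing about the actual 
datanode shutdown API), the shutdown path could catch the exception and log it 
instead of dropping it silently:

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class UpgradeShutdownSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(UpgradeShutdownSketch.class);

  /** Placeholder for whatever component is being stopped during the upgrade. */
  interface Stoppable {
    void stop() throws Exception;
  }

  static void shutdownForUpgrade(Stoppable service) {
    try {
      service.stop();
    } catch (Exception e) {
      // Do not fail the upgrade path, but make the problem visible.
      LOG.warn("Exception while shutting down for upgrade; continuing", e);
    }
  }

  public static void main(String[] args) {
    shutdownForUpgrade(() -> {
      throw new IllegalStateException("simulated shutdown failure");
    });
  }
}
{code}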



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9170) Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client

2015-10-07 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946380#comment-14946380
 ] 

Jing Zhao commented on HDFS-9170:
-

The test failure should be unrelated. +1.

> Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client
> ---
>
> Key: HDFS-9170
> URL: https://issues.apache.org/jira/browse/HDFS-9170
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-9170.000.patch, HDFS-9170.001.patch, 
> HDFS-9170.002.patch, HDFS-9170.003.patch, HDFS-9170.004.patch
>
>
> After HDFS-6200 the Java implementation of hdfs-client has been moved to a 
> separate hadoop-hdfs-client module.
> libhdfs, fuse-dfs and libwebhdfs still reside in the hadoop-hdfs module. 
> Ideally these modules should reside in hadoop-hdfs-client. However, to 
> write unit tests for these components, it is often necessary to run 
> MiniDFSCluster, which resides in the hadoop-hdfs module.
> This jira is to discuss how these native modules should be laid out after 
> HDFS-6200.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8449) Add tasks count metrics to datanode for ECWorker

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14946521#comment-14946521
 ] 

Hadoop QA commented on HDFS-8449:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 40s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 56s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 27s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 16s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 27s | The applied patch generated  2 
new checkstyle issues (total was 91, now 93). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 204m 16s | Tests failed in hadoop-hdfs. |
| | | 250m 55s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.web.TestWebHdfsContentLength |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12764382/HDFS-8449-003.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 9156fc6 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12827/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12827/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12827/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12827/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12827/console |


This message was automatically generated.

> Add tasks count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8449
> URL: https://issues.apache.org/jira/browse/HDFS-8449
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8449-000.patch, HDFS-8449-001.patch, 
> HDFS-8449-002.patch, HDFS-8449-003.patch
>
>
> This sub-task tries to record the EC recovery tasks that a datanode has done, 
> including total tasks, failed tasks and successful tasks.
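For illustration, a hedged sketch of what such counters could look like with 
the Hadoop metrics2 library; the metric and class names are made up for this 
example and are not necessarily those used in the attached patches.

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Hypothetical metrics source counting EC recovery tasks on a datanode.
@Metrics(name = "ECWorkerTaskMetricsSketch",
    about = "EC recovery task counters (illustrative)", context = "dfs")
public class ECWorkerTaskMetricsSketch {
  @Metric("Total EC recovery tasks received")
  MutableCounterLong ecRecoveryTasks;
  @Metric("EC recovery tasks that failed")
  MutableCounterLong ecFailedRecoveryTasks;
  @Metric("EC recovery tasks that succeeded")
  MutableCounterLong ecSuccessfulRecoveryTasks;

  /** Register the source so the @Metric fields are instantiated. */
  public static ECWorkerTaskMetricsSketch create() {
    return DefaultMetricsSystem.instance().register(
        "ECWorkerTaskMetricsSketch", "EC recovery task counters (illustrative)",
        new ECWorkerTaskMetricsSketch());
  }

  void onTaskReceived()  { ecRecoveryTasks.incr(); }
  void onTaskFailed()    { ecFailedRecoveryTasks.incr(); }
  void onTaskSucceeded() { ecSuccessfulRecoveryTasks.incr(); }
}
{code}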



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9159) [OIV] : return value of the command is not correct if invalid value specified in "-p (processor)" option

2015-10-07 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-9159:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.
Thanks [~nijel] for the contribution.

> [OIV] : return value of the command is not correct if invalid value specified 
> in "-p (processor)" option
> 
>
> Key: HDFS-9159
> URL: https://issues.apache.org/jira/browse/HDFS-9159
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: nijel
>Assignee: nijel
> Fix For: 2.8.0
>
> Attachments: HDFS-9159_01.patch, HDFS-9159_02.patch, 
> HDFS-9159_03.patch
>
>
> The return value of the OIV command is not correct if an invalid value is 
> specified in the "-p (processor)" option; this needs to return an error to 
> the user.
> The code change will be in the switch statement of:
> {code}
>  try (PrintStream out = outputFile.equals("-") ?
> System.out : new PrintStream(outputFile, "UTF-8")) {
>   switch (processor) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9110) Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency

2015-10-07 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947896#comment-14947896
 ] 

Andrew Wang commented on HDFS-9110:
---

Also I just renamed the JIRA summary to be more descriptive :)

> Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, HDFS-9110.05.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little bit 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest I'm not 
> sure how many files may be processed. However, as HDFS-8480 alludes to, it 
> appears that this number could be significantly large. 
> The current implementation is basically collect-and-process: all files are 
> first examined and put into a collection, and after that processed. 
> HDFS-8480 could simply be further enhanced by employing a single iteration, 
> without creating an intermediary collection of filenames, by using a file 
> walker (Files.walkFileTree).
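A minimal sketch of the single-pass idea, using placeholder directory names 
rather than the actual NNUpgradeUtil#doPreUpgrade paths: each file is processed 
(here, hard-linked) as soon as it is visited, with no intermediate collection.

{code}
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class WalkFileTreeSketch {
  public static void main(String[] args) throws IOException {
    // Placeholder directories; the real change targets NN storage directories.
    Path source = Paths.get(args.length > 0 ? args[0] : "current");
    Path target = Paths.get(args.length > 1 ? args[1] : "previous.tmp");
    Files.createDirectories(target);

    Files.walkFileTree(source, new SimpleFileVisitor<Path>() {
      @Override
      public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
          throws IOException {
        // Process each file as soon as it is seen; no intermediate list.
        Files.createLink(target.resolve(file.getFileName()), file);
        return FileVisitResult.CONTINUE;
      }
    });
  }
}
{code}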



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9211) branch-2 build broken by incorrect version in hadoop-hdfs-native-client/pom.xml

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9211:
--
Affects Version/s: 2.8.0

> branch-2 build broken by incorrect version in 
> hadoop-hdfs-native-client/pom.xml 
> 
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947913#comment-14947913
 ] 

Hudson commented on HDFS-9176:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2406 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2406/])
HDFS-9176. Fix TestDirectoryScanner#testThrottling often fails. (Daniel (lei: 
rev 6dd47d754cb11297c8710a5c318c034abea7a836)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java


> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9204) DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947955#comment-14947955
 ] 

Hadoop QA commented on HDFS-9204:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 52s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m 12s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 31s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 16s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 27s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 37s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 23s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 222m  1s | Tests failed in hadoop-hdfs. |
| | | 268m 33s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
| Timed out tests | org.apache.hadoop.hdfs.TestDecommission |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765462/HDFS-9204.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7fbf69b |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12842/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12842/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12842/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12842/console |


This message was automatically generated.

> DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated
> -
>
> Key: HDFS-9204
> URL: https://issues.apache.org/jira/browse/HDFS-9204
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9204.000.patch, HDFS-9204.001.patch
>
>
> This seems to be a regression caused by the merge of the EC feature branch. 
> {{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which was 
> added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
> when creating ReplicationWork.
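For illustration, a simplified sketch of the accounting this issue restores: 
the source datanode's counter goes up when replication work is created before 
targets are chosen, and comes back down once targets have been assigned. The 
classes and methods below are placeholders, not the real 
BlockManager/DatanodeDescriptor code.

{code}
public class PendingReplicationSketch {
  static class DatanodeDescriptorSketch {
    private int pendingReplicationWithoutTargets;

    void incrementPendingReplicationWithoutTargets() {
      pendingReplicationWithoutTargets++;
    }

    void decrementPendingReplicationWithoutTargets() {
      pendingReplicationWithoutTargets--;
    }

    int getPendingReplicationWithoutTargets() {
      return pendingReplicationWithoutTargets;
    }
  }

  static void scheduleReplicationWork(DatanodeDescriptorSketch source) {
    // Work is created first, before targets are known.
    source.incrementPendingReplicationWithoutTargets();
    // ... later, once targets have been chosen for the work ...
    source.decrementPendingReplicationWithoutTargets();
  }

  public static void main(String[] args) {
    DatanodeDescriptorSketch dn = new DatanodeDescriptorSketch();
    scheduleReplicationWork(dn);
    System.out.println(dn.getPendingReplicationWithoutTargets()); // 0 after both steps
  }
}
{code}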



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947957#comment-14947957
 ] 

Hudson commented on HDFS-8632:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8591 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8591/])
HDFS-8632. Add InterfaceAudience annotation to the erasure coding (wang: rev 
66e2cfa1a0285f2b4f62a4ffb4d5c1ee54f76156)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/XORErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/grouper/BlockGrouper.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GaloisField.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawErasureCoderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/RSUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/ErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/RSErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCli.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureEncodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureEncoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicies.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlockGroup.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/AbstractErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawDecoder.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
* 

[jira] [Commented] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948002#comment-14948002
 ] 

Hudson commented on HDFS-8632:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #497 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/497/])
HDFS-8632. Add InterfaceAudience annotation to the erasure coding (wang: rev 
66e2cfa1a0285f2b4f62a4ffb4d5c1ee54f76156)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/DumpUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/ErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/AbstractErasureCodec.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlockGroup.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureDecodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/XORErasureCodec.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicies.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/RSErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/grouper/BlockGrouper.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ErasureCodingPolicy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCli.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GaloisField.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureEncodingStep.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 

[jira] [Commented] (HDFS-9137) DeadLock between DataNode#refreshVolumes and BPOfferService#registrationSucceeded

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948003#comment-14948003
 ] 

Hudson commented on HDFS-9137:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #497 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/497/])
HDFS-9137. DeadLock between DataNode#refreshVolumes and (yliu: rev 
35affec38e17e3f9c21d36be71476072c03f)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


> DeadLock between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded 
> --
>
> Key: HDFS-9137
> URL: https://issues.apache.org/jira/browse/HDFS-9137
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Fix For: 2.8.0
>
> Attachments: HDFS-9137.00.patch, 
> HDFS-9137.01-WithPreservingRootExceptions.patch, HDFSS-9137.02.patch
>
>
> I can see that the code flows between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded could cause a deadlock.
> In practice the situation may be rare, since it needs a user to call refreshVolumes at the time 
> of DN registration with the NN. But it seems the issue can happen.
>  Reason for deadLock:
>   1) refreshVolumes will be called with the DN lock held, and at the end it will 
> also trigger a block report. In the block report call, 
> BPServiceActor#triggerBlockReport calls toString on bpos. Here it takes 
> readLock on bpos.
>  DN lock then bpos lock
> 2) The BPOfferService#registrationSucceeded call is taking the writeLock on bpos and 
>  calling dn.bpRegistrationSucceeded, which is again a synchronized call on DN.
> bpos lock and then DN lock.
> So, this can clearly create a deadlock.
> I think a simple fix could be to move the triggerBlockReport call outside of the DN 
> lock, and I feel that call may not really be needed inside the DN lock.
> Thoughts?
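
For illustration, here is a minimal, self-contained sketch of the two lock-acquisition orders described above. The class, field, and method names are hypothetical stand-ins, not the actual DataNode/BPOfferService code; it only shows why running the two paths concurrently can deadlock.

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the inverted lock ordering: path 1 takes the DN monitor and then the
// bpos lock, path 2 takes the bpos lock and then the DN monitor.
public class LockOrderSketch {
    private final Object dnLock = new Object();  // plays the role of the DataNode monitor
    private final ReentrantReadWriteLock bposLock = new ReentrantReadWriteLock();  // plays the role of the bpos lock

    // Path 1: refreshVolumes -> triggerBlockReport (DN lock first, then bpos read lock).
    void refreshVolumesPath() {
        synchronized (dnLock) {             // DN lock held for the whole refresh
            bposLock.readLock().lock();     // block-report path reads bpos state
            try {
                // build and send the block report ...
            } finally {
                bposLock.readLock().unlock();
            }
        }
    }

    // Path 2: registrationSucceeded (bpos write lock first, then DN lock).
    void registrationSucceededPath() {
        bposLock.writeLock().lock();        // bpos write lock held while updating registration
        try {
            synchronized (dnLock) {         // the callback into the DataNode is synchronized on DN
                // update registration state ...
            }
        } finally {
            bposLock.writeLock().unlock();
        }
    }
}
{code}

If one thread enters refreshVolumesPath() while another enters registrationSucceededPath(), each can end up holding its first lock and waiting forever for the other's, which matches the report above; moving the block-report trigger outside the DN lock removes the inversion.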



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948009#comment-14948009
 ] 

Rakesh R commented on HDFS-8632:


Thank you [~andrew.wang], [~zhz], [~walter.k.su] for the help in resolving this!

> Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Fix For: 3.0.0
>
> Attachments: HDFS-8632-05-rebase.patch, HDFS-8632-05.patch, 
> HDFS-8632-HDFS-7285-00.patch, HDFS-8632-HDFS-7285-01.patch, 
> HDFS-8632-HDFS-7285-02.patch, HDFS-8632-HDFS-7285-03.patch, 
> HDFS-8632-HDFS-7285-04.patch
>
>
> I've noticed some of the erasure coding classes missing 
> {{@InterfaceAudience}} annotation. It would be good to identify the classes 
> and add proper annotation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948072#comment-14948072
 ] 

Hadoop QA commented on HDFS-9210:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 23s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 11s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 22s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 20s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 29s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 41s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 37s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 35s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 188m 27s | Tests failed in hadoop-hdfs. |
| | | 235m 25s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765487/HDFS-9210.00.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / fde729f |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12846/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12846/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12846/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12846/console |


This message was automatically generated.

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HDFS-9210.00.patch
>
>
> Found 2 extra "%n" strings in the VolumeScanner report, and some lines are not well formatted, as shown 
> below. This JIRA is opened to fix the formatting issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}
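
As a side note on the "%n" conversion itself, here is a minimal, self-contained sketch (hypothetical class name and strings, not the actual VolumeScanner#printStats code) showing how a literal "%n" can end up in output like the report above: "%n" only turns into a line separator when the string goes through a formatter.

{code}
public class PercentNDemo {
    public static void main(String[] args) {
        // Correct: %n inside a format string becomes the platform line separator.
        String formatted = String.format("Bytes verified in last hour : %d%n", 136882014L);
        System.out.print(formatted);

        // Incorrect: appending "%n" to an already-built string leaves the two
        // characters '%' and 'n' in the output verbatim.
        StringBuilder report = new StringBuilder();
        report.append("with base path /hadoop/hdfs/data").append("%n");
        System.out.println(report);   // prints: with base path /hadoop/hdfs/data%n
    }
}
{code}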



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9114) NameNode and DataNode metric log file name should follow the other log file name format.

2015-10-07 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948095#comment-14948095
 ] 

Surendra Singh Lilhore commented on HDFS-9114:
--

Thanks [~aw] for initiating the discussion in the Dev group

HDFS-8880 and HDFS-8953 added separate log4j configuration for metrics logger
{code}
# NameNode metrics logging.
# The default is to retain two namenode-metrics.log files up to 64MB each.
#
namenode.metrics.logger=INFO,NullAppender
log4j.logger.NameNodeMetricsLog=${namenode.metrics.logger}
log4j.additivity.NameNodeMetricsLog=false
log4j.appender.NNMETRICSRFA=org.apache.log4j.RollingFileAppender
log4j.appender.NNMETRICSRFA.File=${hadoop.log.dir}/namenode-metrics.log
log4j.appender.NNMETRICSRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.NNMETRICSRFA.layout.ConversionPattern=%d{ISO8601}%m%n
log4j.appender.NNMETRICSRFA.MaxBackupIndex=1
log4j.appender.NNMETRICSRFA.MaxFileSize=64MB
#
# DataNode metrics logging.
# The default is to retain two datanode-metrics.log files up to 64MB each.
#
datanode.metrics.logger=INFO,NullAppender
log4j.logger.DataNodeMetricsLog=${datanode.metrics.logger}
log4j.additivity.DataNodeMetricsLog=false
log4j.appender.DNMETRICSRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DNMETRICSRFA.File=${hadoop.log.dir}/datanode-metrics.log
log4j.appender.DNMETRICSRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DNMETRICSRFA.layout.ConversionPattern=%d{ISO8601}%m%n
log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
{code}

In YARN there is also a jira for YARN metrics: YARN-4192.

Through this jira I just want to make a common configuration for all of these loggers.
For example:
{code}
# Metrics logging.
#
metrics.logger=INFO,NullAppender
hadoop.metrics.log.file=metrics.log
log4j.logger.MetricsLog=${metrics.logger}
log4j.additivity.MetricsLog=false
log4j.appender.METRICSRFA=org.apache.log4j.RollingFileAppender
log4j.appender.METRICSRFA.File=${hadoop.log.dir}/${hadoop.metrics.log.file}
log4j.appender.METRICSRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.METRICSRFA.layout.ConversionPattern=%d{ISO8601} %m%n
log4j.appender.METRICSRFA.MaxBackupIndex=1
log4j.appender.METRICSRFA.MaxFileSize=64MB
{code}

[~arpitagarwal], [~aw], [~vinayrpet]
If you feel it's not required then we can close this jira.

> NameNode and DataNode metric log file name should follow the other log file 
> name format.
> 
>
> Key: HDFS-9114
> URL: https://issues.apache.org/jira/browse/HDFS-9114
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-9114-branch-2.01.patch, 
> HDFS-9114-branch-2.02.patch, HDFS-9114-trunk.01.patch, 
> HDFS-9114-trunk.02.patch
>
>
> Currently the datanode and namenode metric log file names are 
> {{datanode-metrics.log}} and {{namenode-metrics.log}}.
> These file names should be like {{hadoop-hdfs-namenode-metric-host192.log}}, 
> the same as the namenode log file {{hadoop-hdfs-namenode-host192.log}}.
> This will help when we copy logs for issue analysis from different nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9110) Use Files.walkFileTree in doPreUpgrade for better efficiency

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9110:
--
Summary: Use Files.walkFileTree in doPreUpgrade for better efficiency  
(was: Improve upon HDFS-8480)

> Use Files.walkFileTree in doPreUpgrade for better efficiency
> 
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, HDFS-9110.05.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little bit 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 
> alludes to, it appears that this number could be significantly large. 
> The current implementation is basically collect-and-process, where all files 
> are first examined, put into a collection, and after that processed. 
> HDFS-8480 could simply be further enhanced by employing a single iteration, 
> without creating an intermediary collection of filenames, by using a FileWalker
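
To make the single-iteration idea concrete, here is a minimal, self-contained sketch using java.nio.file.Files.walkFileTree. The starting directory and the per-file action are hypothetical placeholders, not the actual NNUpgradeUtil#doPreUpgrade logic; it only shows processing each file as it is visited, with no intermediate collection.

{code}
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class WalkFileTreeSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical starting point; a real caller would pass the storage directory.
        Path startDir = Paths.get(args.length > 0 ? args[0] : ".");

        Files.walkFileTree(startDir, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                    throws IOException {
                // Process each file as soon as it is visited; no intermediate collection.
                System.out.println(file + " (" + attrs.size() + " bytes)");
                return FileVisitResult.CONTINUE;
            }
        });
    }
}
{code}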



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9110) Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9110:
--
Summary: Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better 
efficiency  (was: Use Files.walkFileTree in doPreUpgrade for better efficiency)

> Use Files.walkFileTree in NNUpgradeUtil#doPreUpgrade for better efficiency
> --
>
> Key: HDFS-9110
> URL: https://issues.apache.org/jira/browse/HDFS-9110
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Charlie Helin
>Assignee: Charlie Helin
>Priority: Minor
> Attachments: HDFS-9110.00.patch, HDFS-9110.01.patch, 
> HDFS-9110.02.patch, HDFS-9110.03.patch, HDFS-9110.04.patch, HDFS-9110.05.patch
>
>
> This is a request to do some cosmetic improvements on top of HDFS-8480. There 
> are a couple of File -> java.nio.file.Path conversions which are a little bit 
> distracting. 
> The second aspect is more around efficiency; to be perfectly honest, I'm not 
> sure how many files may be processed. However, as HDFS-8480 
> alludes to, it appears that this number could be significantly large. 
> The current implementation is basically collect-and-process, where all files 
> are first examined, put into a collection, and after that processed. 
> HDFS-8480 could simply be further enhanced by employing a single iteration, 
> without creating an intermediary collection of filenames, by using a FileWalker



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8632:
--
Summary: Add InterfaceAudience annotation to the erasure coding classes  
(was: Erasure Coding: Add InterfaceAudience annotation to the erasure coding 
classes)

> Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8632-05-rebase.patch, HDFS-8632-05.patch, 
> HDFS-8632-HDFS-7285-00.patch, HDFS-8632-HDFS-7285-01.patch, 
> HDFS-8632-HDFS-7285-02.patch, HDFS-8632-HDFS-7285-03.patch, 
> HDFS-8632-HDFS-7285-04.patch
>
>
> I've noticed some of the erasure coding classes missing 
> {{@InterfaceAudience}} annotation. It would be good to identify the classes 
> and add proper annotation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947907#comment-14947907
 ] 

Andrew Wang commented on HDFS-8632:
---

+1 LGTM will commit shortly, thanks Rakesh!

> Erasure Coding: Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8632-05-rebase.patch, HDFS-8632-05.patch, 
> HDFS-8632-HDFS-7285-00.patch, HDFS-8632-HDFS-7285-01.patch, 
> HDFS-8632-HDFS-7285-02.patch, HDFS-8632-HDFS-7285-03.patch, 
> HDFS-8632-HDFS-7285-04.patch
>
>
> I've noticed some of the erasure coding classes missing 
> {{@InterfaceAudience}} annotation. It would be good to identify the classes 
> and add proper annotation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947954#comment-14947954
 ] 

Hudson commented on HDFS-9209:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #496 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/496/])
HDFS-9209. Erasure coding: Add apache license header in (jing9: rev 
fde729feeb67af18f7d9b1cd156750ec9e8d3304)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java


> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947953#comment-14947953
 ] 

Hudson commented on HDFS-9176:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #496 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/496/])
HDFS-9176. Fix TestDirectoryScanner#testThrottling often fails. (Daniel (lei: 
rev 6dd47d754cb11297c8710a5c318c034abea7a836)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948014#comment-14948014
 ] 

Hudson commented on HDFS-8632:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #505 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/505/])
HDFS-8632. Add InterfaceAudience annotation to the erasure coding (wang: rev 
66e2cfa1a0285f2b4f62a4ffb4d5c1ee54f76156)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GaloisField.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/RSUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/RSErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureEncodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicies.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ErasureCodingPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/ErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/grouper/BlockGrouper.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureEncoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCli.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlockGroup.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/DumpUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
* 

[jira] [Updated] (HDFS-9167) Update pom.xml in other modules to depend on hdfs-client instead of hdfs

2015-10-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9167:

Attachment: HDFS-9167.000.patch

> Update pom.xml in other modules to depend on hdfs-client instead of hdfs
> 
>
> Key: HDFS-9167
> URL: https://issues.apache.org/jira/browse/HDFS-9167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-9167.000.patch
>
>
> Now that the implementation of the client has been moved to 
> hadoop-hdfs-client, we should update the poms of the other modules in Hadoop to 
> use hdfs-client instead of hdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9167) Update pom.xml in other modules to depend on hdfs-client instead of hdfs

2015-10-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9167:

Status: Patch Available  (was: Open)

> Update pom.xml in other modules to depend on hdfs-client instead of hdfs
> 
>
> Key: HDFS-9167
> URL: https://issues.apache.org/jira/browse/HDFS-9167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
> Attachments: HDFS-9167.000.patch
>
>
> Now that the implementation of the client has been moved to 
> hadoop-hdfs-client, we should update the poms of the other modules in Hadoop to 
> use hdfs-client instead of hdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948064#comment-14948064
 ] 

Hudson commented on HDFS-8632:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2440 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2440/])
HDFS-8632. Add InterfaceAudience annotation to the erasure coding (wang: rev 
66e2cfa1a0285f2b4f62a4ffb4d5c1ee54f76156)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GaloisField.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/AbstractErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/XORErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureDecoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCli.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/RSUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/RSErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/ErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureEncodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/DumpUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureDecodingStep.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicies.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlockGroup.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawErasureCoderFactory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureEncoder.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ErasureCodingPolicy.java
* 

[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947827#comment-14947827
 ] 

Hudson commented on HDFS-9209:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1232 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1232/])
HDFS-9209. Erasure coding: Add apache license header in (jing9: rev 
fde729feeb67af18f7d9b1cd156750ec9e8d3304)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9211) branch-2 build broken by incorrect version in hadoop-hdfs-native-client/pom.xml

2015-10-07 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947900#comment-14947900
 ] 

Andrew Wang commented on HDFS-9211:
---

+1 LGTM, will commit shortly, thanks Eric

> branch-2 build broken by incorrect version in 
> hadoop-hdfs-native-client/pom.xml 
> 
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9211) Fix incorrect version in hadoop-hdfs-native-client/pom.xml from HDFS-9170

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9211:
--
Summary: Fix incorrect version in hadoop-hdfs-native-client/pom.xml from 
HDFS-9170  (was: branch-2 build broken by incorrect version in 
hadoop-hdfs-native-client/pom.xml )

> Fix incorrect version in hadoop-hdfs-native-client/pom.xml from HDFS-9170
> -
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-8632:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, thx again Rakesh for the patch and Zhe for reviewing

> Add InterfaceAudience annotation to the erasure coding classes
> --
>
> Key: HDFS-8632
> URL: https://issues.apache.org/jira/browse/HDFS-8632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Fix For: 3.0.0
>
> Attachments: HDFS-8632-05-rebase.patch, HDFS-8632-05.patch, 
> HDFS-8632-HDFS-7285-00.patch, HDFS-8632-HDFS-7285-01.patch, 
> HDFS-8632-HDFS-7285-02.patch, HDFS-8632-HDFS-7285-03.patch, 
> HDFS-8632-HDFS-7285-04.patch
>
>
> I've noticed some of the erasure coding classes missing 
> {{@InterfaceAudience}} annotation. It would be good to identify the classes 
> and add proper annotation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947929#comment-14947929
 ] 

Hudson commented on HDFS-9176:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #468 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/468/])
HDFS-9176. Fix TestDirectoryScanner#testThrottling often fails. (Daniel (lei: 
rev 6dd47d754cb11297c8710a5c318c034abea7a836)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java


> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947925#comment-14947925
 ] 

Hudson commented on HDFS-9209:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2439 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2439/])
HDFS-9209. Erasure coding: Add apache license header in (jing9: rev 
fde729feeb67af18f7d9b1cd156750ec9e8d3304)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9137) DeadLock between DataNode#refreshVolumes and BPOfferService#registrationSucceeded

2015-10-07 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947963#comment-14947963
 ] 

Yi Liu commented on HDFS-9137:
--

+1, thanks Uma, Colin and Vinay.

> DeadLock between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded 
> --
>
> Key: HDFS-9137
> URL: https://issues.apache.org/jira/browse/HDFS-9137
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.7.1
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-9137.00.patch, 
> HDFS-9137.01-WithPreservingRootExceptions.patch, HDFSS-9137.02.patch
>
>
> I can see that the code flows between DataNode#refreshVolumes and 
> BPOfferService#registrationSucceeded could cause a deadlock.
> In practice the situation may be rare, since it needs a user to call refreshVolumes at the time 
> of DN registration with the NN. But it seems the issue can happen.
>  Reason for deadLock:
>   1) refreshVolumes will be called with the DN lock held, and at the end it will 
> also trigger a block report. In the block report call, 
> BPServiceActor#triggerBlockReport calls toString on bpos. Here it takes 
> readLock on bpos.
>  DN lock then bpos lock
> 2) The BPOfferService#registrationSucceeded call is taking the writeLock on bpos and 
>  calling dn.bpRegistrationSucceeded, which is again a synchronized call on DN.
> bpos lock and then DN lock.
> So, this can clearly create a deadlock.
> I think a simple fix could be to move the triggerBlockReport call outside of the DN 
> lock, and I feel that call may not really be needed inside the DN lock.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8941) DistributedFileSystem listCorruptFileBlocks API should resolve relative path

2015-10-07 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-8941:
---
Attachment: HDFS-8941-04.patch

> DistributedFileSystem listCorruptFileBlocks API should resolve relative path
> 
>
> Key: HDFS-8941
> URL: https://issues.apache.org/jira/browse/HDFS-8941
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-8941-00.patch, HDFS-8941-01.patch, 
> HDFS-8941-02.patch, HDFS-8941-03.patch, HDFS-8941-04.patch
>
>
> Presently the {{DFS#listCorruptFileBlocks(path)}} API does not resolve the given 
> path relative to the workingDir. This jira is to discuss and provide the 
> implementation of the same.
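
For readers unfamiliar with how a relative path is usually qualified against the working directory, here is a minimal sketch. The HDFS URI and the relative path are hypothetical, and this is only an assumed illustration of the resolution step the jira asks for, not the actual patch.

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RelativePathSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical cluster address, only for illustration.
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), conf);

        Path given = new Path("data/file1");  // relative path supplied by the caller
        // Qualify against the filesystem URI and the current working directory,
        // e.g. turning "data/file1" into "hdfs://localhost:9000/user/<user>/data/file1".
        Path resolved = given.makeQualified(fs.getUri(), fs.getWorkingDirectory());

        System.out.println("resolved path: " + resolved);
    }
}
{code}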



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9167) Update pom.xml in other modules to depend on hdfs-client instead of hdfs

2015-10-07 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9167:

Attachment: (was: HDFS-9167.000.patch)

> Update pom.xml in other modules to depend on hdfs-client instead of hdfs
> 
>
> Key: HDFS-9167
> URL: https://issues.apache.org/jira/browse/HDFS-9167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Mingliang Liu
>
> Now that the implementation of the client has been moved to 
> hadoop-hdfs-client, we should update the poms of the other modules in Hadoop to 
> use hdfs-client instead of hdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948063#comment-14948063
 ] 

Surendra Singh Lilhore commented on HDFS-9209:
--

Thanks [~jingzhao] for review and commit...
Thanks [~zhz] for review.

> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8632) Add InterfaceAudience annotation to the erasure coding classes

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948101#comment-14948101
 ] 

Hudson commented on HDFS-8632:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2407 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2407/])
HDFS-8632. Add InterfaceAudience annotation to the erasure coding (wang: rev 
66e2cfa1a0285f2b4f62a4ffb4d5c1ee54f76156)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicies.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/RSUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/ErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureDecoder.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/ErasureCodingWorker.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlockGroup.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/RSErasureCodec.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureDecodingStep.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCodingStep.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/erasurecode/ECCli.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ErasureCodingPolicy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/codec/AbstractErasureCodec.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/grouper/BlockGrouper.java
* 

[jira] [Commented] (HDFS-8442) Remove ServerLifecycleListener from kms/server.xml.

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948100#comment-14948100
 ] 

Hadoop QA commented on HDFS-8442:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 44s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m  3s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 24s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 19s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | common tests |   1m 38s | Tests passed in 
hadoop-kms. |
| | |  38m 15s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12734324/HDFS-8442-1.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / 35affec |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12851/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-kms test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12851/artifact/patchprocess/testrun_hadoop-kms.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12851/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12851/console |


This message was automatically generated.

> Remove ServerLifecycleListener from kms/server.xml.
> ---
>
> Key: HDFS-8442
> URL: https://issues.apache.org/jira/browse/HDFS-8442
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: nijel
>Assignee: nijel
> Attachments: HDFS-8442-1.patch
>
>
> Remove ServerLifecycleListener from kms/server.xml.
> Since Tomcat 7.0.9, support for ServerLifecycleListener has been removed.
> ref: https://tomcat.apache.org/tomcat-7.0-doc/changelog.html
> "Remove ServerLifecycleListener. This was already removed from server.xml and 
> with the Lifecycle re-factoring is no longer required. (markt)"
> So if the build environment uses a Tomcat release later than this, KMS startup fails:
> {code}
> SEVERE: Begin event threw exception
> java.lang.ClassNotFoundException: 
> org.apache.catalina.mbeans.ServerLifecycleListener
> {code}
> Can we remove this listener?
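
For context, the change amounts to deleting the listener declaration from the KMS server.xml template. As a hedged illustration (the exact attributes in the shipped file may differ), the offending entry looks like the following and can simply be removed, since Tomcat 7.0.9+ no longer ships the class:
{code}
<!-- Found in older Tomcat server.xml templates. The class no longer exists in
     Tomcat 7.0.9+, so Catalina throws ClassNotFoundException at startup.
     Deleting this line is sufficient. -->
<Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" debug="0"/>
{code}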



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9209) Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948102#comment-14948102
 ] 

Hudson commented on HDFS-9209:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2407 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2407/])
HDFS-9209. Erasure coding: Add apache license header in (jing9: rev 
fde729feeb67af18f7d9b1cd156750ec9e8d3304)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java


> Erasure coding: Add apache license header in TestFileStatusWithECPolicy.java
> 
>
> Key: HDFS-9209
> URL: https://issues.apache.org/jira/browse/HDFS-9209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 3.0.0
>
> Attachments: HDFS-9209.patch
>
>
> Release audit warnings
> https://builds.apache.org/job/PreCommit-HDFS-Build/12834/artifact/patchprocess/patchReleaseAuditProblems.txt
> {noformat}
> !? 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileStatusWithECPolicy.java
> Lines that start with ? in the release audit  report indicate files that 
> do not have an Apache license header.
> {noformat}
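
For reference, the fix is to prepend the standard ASF license header to the test file. A sketch of the usual Hadoop Java header follows; the wording is reproduced from memory, so treat it as illustrative rather than authoritative:
{code}
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
{code}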



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9176) TestDirectoryScanner#testThrottling often fails.

2015-10-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947826#comment-14947826
 ] 

Hudson commented on HDFS-9176:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1232 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1232/])
HDFS-9176. Fix TestDirectoryScanner#testThrottling often fails. (Daniel (lei: 
rev 6dd47d754cb11297c8710a5c318c034abea7a836)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> TestDirectoryScanner#testThrottling often fails.
> 
>
> Key: HDFS-9176
> URL: https://issues.apache.org/jira/browse/HDFS-9176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Yi Liu
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 3.0.0, 2.8.0
>
> Attachments: HDFS-9176.001.patch, HDFS-9176.002.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/12736/testReport/
> https://builds.apache.org/job/PreCommit-HADOOP-Build/7732/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9204) DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947868#comment-14947868
 ] 

Hadoop QA commented on HDFS-9204:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 28s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m  0s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 42s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 20s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 28s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 30s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 192m 48s | Tests passed in hadoop-hdfs. 
|
| | | 239m 33s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765448/HDFS-9204.000.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 99e5204 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12841/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12841/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12841/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12841/console |


This message was automatically generated.

> DatanodeDescriptor#PendingReplicationWithoutTargets is wrongly calculated
> -
>
> Key: HDFS-9204
> URL: https://issues.apache.org/jira/browse/HDFS-9204
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Mingliang Liu
> Attachments: HDFS-9204.000.patch, HDFS-9204.001.patch
>
>
> This seems to be a regression caused by the merge of EC feature branch. 
> {{DatanodeDescriptor#incrementPendingReplicationWithoutTargets}}, which is 
> added by HDFS-7128 to fix a bug during DN decommission, is no longer called 
> when creating ReplicationWork.
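
To make the regression concrete, here is a minimal, self-contained sketch of the counter behaviour HDFS-7128 introduced, using hypothetical stub classes rather than the real NameNode types: the source DataNode's pending-replication-without-targets counter is raised when replication work is created for it and released once targets are chosen. Dropping the increment call, as described above, leaves the reported value wrong during decommission.
{code}
// Hypothetical, simplified stand-ins for DatanodeDescriptor and ReplicationWork;
// only the counter handling is modelled here.
class DatanodeDescriptorStub {
  private int pendingReplicationWithoutTargets;

  void incrementPendingReplicationWithoutTargets() {
    pendingReplicationWithoutTargets++;
  }

  void decrementPendingReplicationWithoutTargets() {
    pendingReplicationWithoutTargets--;
  }

  int getPendingReplicationWithoutTargets() {
    return pendingReplicationWithoutTargets;
  }
}

class ReplicationWorkStub {
  private final DatanodeDescriptorStub srcNode;

  ReplicationWorkStub(DatanodeDescriptorStub srcNode) {
    this.srcNode = srcNode;
    // The call reported missing above: without it the per-DN counter is never
    // raised when replication work is scheduled from a decommissioning node.
    srcNode.incrementPendingReplicationWithoutTargets();
  }

  void targetsChosen() {
    // Once targets exist the block is tracked elsewhere, so the
    // "without targets" counter is released here.
    srcNode.decrementPendingReplicationWithoutTargets();
  }
}
{code}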



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9211) Fix incorrect version in hadoop-hdfs-native-client/pom.xml from HDFS-9170 branch-2 backport

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9211:
--
Summary: Fix incorrect version in hadoop-hdfs-native-client/pom.xml from 
HDFS-9170 branch-2 backport  (was: Fix incorrect version in 
hadoop-hdfs-native-client/pom.xml from HDFS-9170)

> Fix incorrect version in hadoop-hdfs-native-client/pom.xml from HDFS-9170 
> branch-2 backport
> ---
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml
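
The description above is truncated in this archive. As a hedged illustration of the kind of mismatch a branch-2 backport can leave behind (the exact version strings below are assumptions, not taken from the patch), the module pom can end up carrying the trunk version instead of the branch version:
{code}
<!-- hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml, illustrative only -->
<!-- After the backport the module still carried the trunk version string: -->
<version>3.0.0-SNAPSHOT</version>
<!-- whereas on branch-2 it should track the branch version, e.g.: -->
<version>2.8.0-SNAPSHOT</version>
{code}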



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9211) Fix incorrect version in hadoop-hdfs-native-client/pom.xml from HDFS-9170 branch-2 backport

2015-10-07 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9211:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to just branch-2, thanks Eric.

Any interest in fixing up the RAT issue too? It looks related to the 
hdfs-native-client refactor.

> Fix incorrect version in hadoop-hdfs-native-client/pom.xml from HDFS-9170 
> branch-2 backport
> ---
>
> Key: HDFS-9211
> URL: https://issues.apache.org/jira/browse/HDFS-9211
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 2.8.0
>
> Attachments: HDFS-9211-branch-2.001.patch
>
>
> When HDFS-9170 was backported to branch-2, the version in 
> hadoop-hdfs-project/hadoop-hdfs-native-client/pom.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

