[jira] [Commented] (HDFS-10342) BlockManager#createLocatedBlocks should not check corrupt replicas if none are corrupt

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343701#comment-15343701
 ] 

Hadoop QA commented on HDFS-10342:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 21s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 29s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812380/HDFS-10342.002.patch |
| JIRA Issue | HDFS-10342 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fd6ec635432a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d433b16 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15864/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15864/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15864/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15864/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



[jira] [Work started] (HDFS-10557) Fix handling of the -fs Generic option

2016-06-21 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-10557 started by Arpit Agarwal.

> Fix handling of the -fs Generic option
> --
>
> Key: HDFS-10557
> URL: https://issues.apache.org/jira/browse/HDFS-10557
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer
>Affects Versions: HDFS-1312
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> A recent change to DiskBalancer replaced the -uri option with -fs. However,
> -fs is a [generic
> option|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#Generic_Options],
> so it is consumed by the GenericOptionsParser.
> We can update this option handling to make it similar to other hdfs commands.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10557) Fix handling of the -fs Generic option

2016-06-21 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-10557:


 Summary: Fix handling of the -fs Generic option
 Key: HDFS-10557
 URL: https://issues.apache.org/jira/browse/HDFS-10557
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: diskbalancer
Affects Versions: HDFS-1312
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


A recent change to DiskBalancer replaced the -uri option with -fs. However, -fs
is a [generic
option|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#Generic_Options],
so it is consumed by the GenericOptionsParser.

We can update this option handling to make it similar to other hdfs commands.
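For illustration, here is a minimal, self-contained sketch of how GenericOptionsParser consumes -fs before a tool ever sees its arguments (standard Hadoop API usage, not the DiskBalancer code itself):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.util.GenericOptionsParser;

public class GenericFsOptionDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // GenericOptionsParser recognizes -fs and folds it into the
    // configuration (fs.defaultFS); the tool never sees the option.
    GenericOptionsParser parser = new GenericOptionsParser(conf, args);
    String[] remaining = parser.getRemainingArgs();
    System.out.println("args left for the tool: " + remaining.length);
    // A command should therefore read the target from the configuration:
    System.out.println("default FS: " + FileSystem.get(conf).getUri());
  }
}
{code}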



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10555) Unable to loadFSEdits due to a failure in readCachePoolInfo

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343691#comment-15343691
 ] 

Hadoop QA commented on HDFS-10555:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 17s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
28s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 20s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.balancer.TestBalancer |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812361/HDFS-10555-00.patch |
| JIRA Issue | HDFS-10555 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b149f3a34d5d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d433b16 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15862/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15862/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15862/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15862/console |
| Powered by | 

[jira] [Commented] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343680#comment-15343680
 ] 

Hadoop QA commented on HDFS-10473:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 19 
new + 81 unchanged - 0 fixed = 100 total (was 81) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 28s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 26s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812359/HDFS-10473-05.patch |
| JIRA Issue | HDFS-10473 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 074f4fbc854a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d433b16 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15863/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15863/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15863/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-10460) Erasure Coding: Recompute block checksum for a particular range less than file size on the fly by reconstructing missed block

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343673#comment-15343673
 ] 

Hadoop QA commented on HDFS-10460:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 6s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s 
{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 13s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 102m 3s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812355/HDFS-10460-03.patch |
| JIRA Issue | HDFS-10460 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux c8e210882e08 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d433b16 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15861/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  

[jira] [Commented] (HDFS-10551) o.a.h.h.s.diskbalancer.command.Command does not actually verify options as expected.

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343655#comment-15343655
 ] 

Hadoop QA commented on HDFS-10551:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
33s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 16s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 52s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.blockmanagement.TestBlockReportRateLimiting |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:c88012f |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812354/HDFS-10551-HDFS-1312.002.patch
 |
| JIRA Issue | HDFS-10551 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 53681db5860d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-1312 / 62e4dcd |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15860/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15860/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |

[jira] [Updated] (HDFS-10342) BlockManager#createLocatedBlocks should not check corrupt replicas if none are corrupt

2016-06-21 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated HDFS-10342:
---
Attachment: HDFS-10342.002.patch

Updated the patch for checkstyle issues. The test failures, except 
TestDFSUpgradeWithHA, are seen both with and without the patch and seem to 
stem from EditLog failures. I am investigating those and giving this patch 
another chance.

> BlockManager#createLocatedBlocks should not check corrupt replicas if none 
> are corrupt
> --
>
> Key: HDFS-10342
> URL: https://issues.apache.org/jira/browse/HDFS-10342
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Kuhu Shukla
> Attachments: HDFS-10342.001.patch, HDFS-10342.002.patch
>
>
> {{corruptReplicas#isReplicaCorrupt(block, node)}} is called for every node 
> while populating the machines array.  There's no need to invoke the method if 
> {{corruptReplicas#numCorruptReplicas(block)}} returned 0.
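For illustration, a minimal sketch of the proposed short-circuit. The names here ({{corruptReplicas}}, {{blocksMap}}, {{DatanodeStorageInfo}}, the {{machines}} list) only approximate the BlockManager internals; this is not the actual patch:

{code}
// Sketch: fetch the corrupt count once, and skip the per-node map
// lookup entirely when no replica of the block is corrupt.
int numCorrupt = corruptReplicas.numCorruptReplicas(block);
List<DatanodeStorageInfo> machines = new ArrayList<>();
for (DatanodeStorageInfo storage : blocksMap.getStorages(block)) {
  boolean corrupt = numCorrupt > 0
      && corruptReplicas.isReplicaCorrupt(block,
             storage.getDatanodeDescriptor());
  if (!corrupt) {
    machines.add(storage);
  }
}
{code}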



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10555) Unable to loadFSEdits due to a failure in readCachePoolInfo

2016-06-21 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-10555:
---
Status: Patch Available  (was: Open)

> Unable to loadFSEdits due to a failure in readCachePoolInfo
> ---
>
> Key: HDFS-10555
> URL: https://issues.apache.org/jira/browse/HDFS-10555
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Critical
> Attachments: HDFS-10555-00.patch
>
>
> Recently some tests are failing, unable to loadFSEdits due to a failure in 
> readCachePoolInfo.
> The problem is in the following code in
> FSImageSerialization.java:
> {code}
>   }
> if ((flags & ~0x2F) != 0) {
>   throw new IOException("Unknown flag in CachePoolInfo: " + flags);
> }
> {code}
> When every CachePoolInfo field is set, {{flags}} is 0x3F, so {{flags & ~0x2F}} 
> evaluates to a non-zero value and this check throws. The mask was widened 
> from ~0x1F to ~0x2F when the 0x20 flag was added, but it should have become 
> ~0x3F (0x1F | 0x20 = 0x3F).
> To fix this issue, we can change the mask to ~0x3F.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10555) Unable to loadFSEdits due to a failure in readCachePoolInfo

2016-06-21 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-10555:
---
Attachment: HDFS-10555-00.patch

I have just attached a simple patch to fix this.
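For reference, the corrected check simply widens the mask to cover all six flag bits (a sketch following the suggestion in the quoted issue description below, not necessarily the attached patch):

{code}
// Six boolean CachePoolInfo fields map to flag bits 0x01..0x20, so the
// set of valid bits is 0x3F; anything outside that mask is unknown.
if ((flags & ~0x3F) != 0) {
  throw new IOException("Unknown flag in CachePoolInfo: " + flags);
}
{code}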

> Unable to loadFSEdits due to a failure in readCachePoolInfo
> ---
>
> Key: HDFS-10555
> URL: https://issues.apache.org/jira/browse/HDFS-10555
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Critical
> Attachments: HDFS-10555-00.patch
>
>
> Recently some tests are failing, unable to loadFSEdits due to a failure in 
> readCachePoolInfo.
> The problem is in the following code in
> FSImageSerialization.java:
> {code}
>   }
> if ((flags & ~0x2F) != 0) {
>   throw new IOException("Unknown flag in CachePoolInfo: " + flags);
> }
> {code}
> When every CachePoolInfo field is set, {{flags}} is 0x3F, so {{flags & ~0x2F}} 
> evaluates to a non-zero value and this check throws. The mask was widened 
> from ~0x1F to ~0x2F when the 0x20 flag was added, but it should have become 
> ~0x3F (0x1F | 0x20 = 0x3F).
> To fix this issue, we can change the mask to ~0x3F.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-21 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-10473:
---
Attachment: HDFS-10473-05.patch

Thanks a lot, [~jingzhao], for the quick review. Yup, as the policy is mostly 
set on a high-level directory, it makes sense to reduce the log to debug 
level. I have just changed it. Please check.

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch, HDFS-10473-02.patch, 
> HDFS-10473-03.patch, HDFS-10473-04.patch, HDFS-10473-05.patch
>
>
> Currently some of the existing storage policies are not suitable for 
> striped-layout files.
> This JIRA proposes to reject setting a storage policy on striped files.
> Another thought is to allow only suitable storage policies, like ALL_SSD.
> Since the major use case of EC is cold data, this may not be of high 
> importance. So, I am ok with rejecting setting a storage policy on striped 
> files at this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for the offline discussion on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10460) Erasure Coding: Recompute block checksum for a particular range less than file size on the fly by reconstructing missed block

2016-06-21 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10460:

Attachment: HDFS-10460-03.patch

Attached a new patch fixing the checkstyle warning.

> Erasure Coding: Recompute block checksum for a particular range less than 
> file size on the fly by reconstructing missed block
> -
>
> Key: HDFS-10460
> URL: https://issues.apache.org/jira/browse/HDFS-10460
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10460-00.patch, HDFS-10460-01.patch, 
> HDFS-10460-02.patch, HDFS-10460-03.patch
>
>
> This jira is a HDFS-9833 follow-on task to address reconstructing a block and 
> then recalculating the block checksum for a particular range query.
> For example,
> {code}
> // create a file 'stripedFile1' with fileSize = cellSize * numDataBlocks = 
> 65536 * 6 = 393216
> FileChecksum stripedFileChecksum = getFileChecksum(stripedFile1, 10, true);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7597) DelegationTokenIdentifier should cache the TokenIdentifier to UGI mapping

2016-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343202#comment-15343202
 ] 

Hudson commented on HDFS-7597:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9998 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9998/])
HDFS-7597. DelegationTokenIdentifier should cache the TokenIdentifier to 
(aajisaka: rev d433b16ce6d74f1a44bc29446c74b1cb5f8a10fa)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/DataNodeUGIProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/TestDataNodeUGIProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenIdentifier.java


> DelegationTokenIdentifier should cache the TokenIdentifier to UGI mapping
> -
>
> Key: HDFS-7597
> URL: https://issues.apache.org/jira/browse/HDFS-7597
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HDFS-7597.01.patch, HDFS-7597.patch, HDFS-7597.patch, 
> HDFS-7597.patch
>
>
> Webhdfs seeks involve closing the current connection, and reissuing a new 
> open request with the new offset.  The RPC layer caches connections so the DN 
> keeps a lingering connection open to the NN.  Connection caching is in part 
> based on UGI.  Although the client used the same token for the new offset 
> request, the UGI is different which forces the DN to open another unnecessary 
> connection to the NN.
> A job that performs many seeks will easily crash the NN due to fd exhaustion.
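For illustration, a minimal sketch of the kind of token-identifier-to-UGI cache the summary calls for, using a Guava cache. The class and field names here are hypothetical, not the committed code:

{code}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.apache.hadoop.security.UserGroupInformation;

public class UgiCacheSketch {
  // Keyed on the serialized token identifier, so repeated requests
  // carrying the same token resolve to the same UGI and can reuse the
  // same cached RPC connection to the NN.
  private final Cache<String, UserGroupInformation> ugiCache =
      CacheBuilder.newBuilder()
          .expireAfterAccess(10, TimeUnit.MINUTES)
          .build();

  public UserGroupInformation getUgi(String tokenId,
      Callable<UserGroupInformation> ugiLoader) throws ExecutionException {
    return ugiCache.get(tokenId, ugiLoader);
  }
}
{code}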



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7597) DelegationTokenIdentifier should cache the TokenIdentifier to UGI mapping

2016-06-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-7597:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to branch-2.8 and above. Thanks to all who contributed to this 
issue!

> DelegationTokenIdentifier should cache the TokenIdentifier to UGI mapping
> -
>
> Key: HDFS-7597
> URL: https://issues.apache.org/jira/browse/HDFS-7597
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HDFS-7597.01.patch, HDFS-7597.patch, HDFS-7597.patch, 
> HDFS-7597.patch
>
>
> Webhdfs seeks involve closing the current connection, and reissuing a new 
> open request with the new offset.  The RPC layer caches connections so the DN 
> keeps a lingering connection open to the NN.  Connection caching is in part 
> based on UGI.  Although the client used the same token for the new offset 
> request, the UGI is different which forces the DN to open another unnecessary 
> connection to the NN.
> A job that performs many seeks will easily crash the NN due to fd exhaustion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10551) o.a.h.h.s.diskbalancer.command.Command does not actually verify options as expected.

2016-06-21 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10551:

Attachment: HDFS-10551-HDFS-1312.002.patch

Along with HDFS-10550, this patch fixes all unit test failures.

> o.a.h.h.s.diskbalancer.command.Command does not actually verify options as 
> expected.
> 
>
> Key: HDFS-10551
> URL: https://issues.apache.org/jira/browse/HDFS-10551
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10551-HDFS-1312.001.patch, 
> HDFS-10551-HDFS-1312.002.patch
>
>
> In {{diskbalancer.command.Command#verifyCommandOptions}}, the following code 
> does not do what it is expected to do:
> {code}
> if (!validArgs.containsKey(opt.getArgName())) {
> {code}
> {{opt.getArgName()}} always returns "arg" instead of, e.g., {{report}} or 
> {{uri}}, which is the expected parameter to check.
> It should use {{opt.getLongOpt()}} to get the option names. The check can 
> pass on the branch because {{opt.getArgName()}} always returns {{"arg"}}, 
> which is accidentally in {{validArgs}}. However, I don't think that is the 
> intention of this function.
> Additionally, in the following code
> {code}
> validArguments.append("Valid arguments are : %n");
> {code}
> this {{%n}} is never interpreted, since {{StringBuilder#append}} does not 
> apply format specifiers.
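For illustration, a sketch of the corrected lookup using Commons CLI types, assuming {{validArgs}} is keyed by long option names as the description suggests, and using {{String.format}} so the {{%n}} is actually interpreted:

{code}
// Sketch: key the check on the option's long name; getArgName() only
// returns the display name of the option's argument ("arg"), never
// the option itself.
for (Option opt : cmd.getOptions()) {
  if (!validArgs.containsKey(opt.getLongOpt())) {
    throw new IllegalArgumentException(String.format(
        "%s is an invalid option.%nValid arguments are: %s",
        opt.getLongOpt(), validArgs.keySet()));
  }
}
{code}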



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7597) DelegationTokenIdentifier should cache the TokenIdentifier to UGI mapping

2016-06-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-7597:

 Labels:   (was: BB2015-05-TBR)
Summary: DelegationTokenIdentifier should cache the TokenIdentifier to UGI 
mapping  (was: DNs should not open new NN connections when webhdfs clients seek)

> DelegationTokenIdentifier should cache the TokenIdentifier to UGI mapping
> -
>
> Key: HDFS-7597
> URL: https://issues.apache.org/jira/browse/HDFS-7597
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-7597.01.patch, HDFS-7597.patch, HDFS-7597.patch, 
> HDFS-7597.patch
>
>
> Webhdfs seeks involve closing the current connection, and reissuing a new 
> open request with the new offset.  The RPC layer caches connections so the DN 
> keeps a lingering connection open to the NN.  Connection caching is in part 
> based on UGI.  Although the client used the same token for the new offset 
> request, the UGI is different which forces the DN to open another unnecessary 
> connection to the NN.
> A job that performs many seeks will easily crash the NN due to fd exhaustion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile

2016-06-21 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343193#comment-15343193
 ] 

Kai Sasaki commented on HDFS-10534:
---

[~zhz] Since several configuration fields in FSNamesystem are {{final}}, it is 
difficult to extract a method with a single clear purpose, because the 
FSNamesystem constructor mainly sets final configuration values. I think 
keeping it as it is can be an option; what do you think?

> NameNode WebUI should display DataNode usage rate with a certain percentile
> ---
>
> Key: HDFS-10534
> URL: https://issues.apache.org/jira/browse/HDFS-10534
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Zhe Zhang
>Assignee: Kai Sasaki
> Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, 
> HDFS-10534.03.patch, HDFS-10534.04.patch
>
>
> In addition to *Min/Median/Max*, another meaningful metric for cluster 
> balance is the DN usage rate at a certain percentile (e.g. 90 or 95). We 
> should add a config option, and another field on the NN WebUI, to display 
> this.
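For reference, a minimal sketch of the percentile computation itself (nearest-rank method over DN usage rates; illustrative only, not the patch):

{code}
import java.util.Arrays;

public class UsagePercentileSketch {
  /** Nearest-rank percentile of DataNode usage rates, for p in (0, 100]. */
  static double percentile(double[] usageRates, double p) {
    double[] sorted = usageRates.clone();
    Arrays.sort(sorted);
    int rank = (int) Math.ceil(p / 100.0 * sorted.length);
    return sorted[Math.max(0, rank - 1)];
  }

  public static void main(String[] args) {
    double[] usage = {0.10, 0.35, 0.42, 0.55, 0.71, 0.90};
    System.out.println("p90 usage: " + percentile(usage, 90)); // 0.9
  }
}
{code}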



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343186#comment-15343186
 ] 

Jing Zhao commented on HDFS-10473:
--

Thanks for updating the patch, Uma. The latest patch looks pretty good to me. 
One minor comment: it may be better to change the following log level to 
DEBUG, considering that the policy may be set on a high-level directory and 
{{getStoragePolicyID}} is called widely. +1 once this is addressed.
{code}
if (isStriped() && id != BLOCK_STORAGE_POLICY_ID_UNSPECIFIED
    && !ErasureCodingPolicyManager
        .checkStoragePolicySuitableForECStripedMode(id)) {
  id = HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED;
  LOG.warn("The current effective storage policy id : " + id
      + " is not suitable for striped mode EC file : " + getName()
      + ". So, just returning unspecified storage policy id");
}
{code}
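For illustration, the suggested change would look roughly like this. As an assumption on my part, the sketch also logs before {{id}} is overwritten, so the original policy id is the one reported:

{code}
if (isStriped() && id != BLOCK_STORAGE_POLICY_ID_UNSPECIFIED
    && !ErasureCodingPolicyManager
        .checkStoragePolicySuitableForECStripedMode(id)) {
  if (LOG.isDebugEnabled()) {
    // Log the original id before it is reset to UNSPECIFIED.
    LOG.debug("The current effective storage policy id : " + id
        + " is not suitable for striped mode EC file : " + getName()
        + ". So, just returning unspecified storage policy id");
  }
  id = HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED;
}
{code}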

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch, HDFS-10473-02.patch, 
> HDFS-10473-03.patch, HDFS-10473-04.patch
>
>
> Currently some of the existing storage policies are not suitable for 
> striped-layout files.
> This JIRA proposes to reject setting a storage policy on striped files.
> Another thought is to allow only suitable storage policies, like ALL_SSD.
> Since the major use case of EC is cold data, this may not be of high 
> importance. So, I am ok with rejecting setting a storage policy on striped 
> files at this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for the offline discussion on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile

2016-06-21 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343157#comment-15343157
 ] 

Kai Sasaki commented on HDFS-10534:
---

Though checkstyle seems to have been broken before this patch was attached, 
I'll extract a new method to reduce the method's line count.

> NameNode WebUI should display DataNode usage rate with a certain percentile
> ---
>
> Key: HDFS-10534
> URL: https://issues.apache.org/jira/browse/HDFS-10534
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Zhe Zhang
>Assignee: Kai Sasaki
> Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, 
> HDFS-10534.03.patch, HDFS-10534.04.patch
>
>
> In addition to *Min/Median/Max*, another meaningful metric for cluster 
> balance is the DN usage rate at a certain percentile (e.g. 90 or 95). We 
> should add a config option, and another field on the NN WebUI, to display 
> this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9343) Empty caller context considered invalid

2016-06-21 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9343:

Issue Type: Sub-task  (was: Task)
Parent: HDFS-9184

> Empty caller context considered invalid
> ---
>
> Key: HDFS-9343
> URL: https://issues.apache.org/jira/browse/HDFS-9343
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9343.000.patch, HDFS-9343.001.patch, 
> HDFS-9343.002.patch, HDFS-9343.003.patch, HDFS-9343.004.patch
>
>
> The caller context with an empty context string is considered invalid, and 
> it should not appear in the audit log.
> Meanwhile, a signature that is too long will not be written to the audit log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10556) DistCpOptions should be validated automatically

2016-06-21 Thread Mingliang Liu (JIRA)
Mingliang Liu created HDFS-10556:


 Summary: DistCpOptions should be validated automatically
 Key: HDFS-10556
 URL: https://issues.apache.org/jira/browse/HDFS-10556
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Mingliang Liu
Assignee: Mingliang Liu


{{DistCpOptions}} can be set from the command line or may be set manually. In 
[HDFS-10397], we refactored the validation to make it simpler and more 
efficient. However, the newly added {{validate()}} method may not be 
automatically invoked. This is the major concern for existing downstream 
applications that create the {{DistCpOptions}} manually instead of via the 
parser, and have conflicting options.

This jira is to make the validation happen automatically. A simple fix is to 
validate in the individual setters, as sketched below. This is a fix for 
{{branch-2}}. As a long-term fix, in [HDFS-10533], we're making 
{{DistCpOptions}} immutable so that it will be hard, if not impossible, for 
downstream applications to use it wrongly. However, that code will only go to 
the {{trunk}} branch as it breaks backwards compatibility.
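For illustration, a minimal sketch of the setter-level validation approach. The class and field names here are hypothetical; the real {{DistCpOptions}} has many more interacting options:

{code}
// Sketch: validate eagerly in each setter, so options built manually by
// downstream code fail fast exactly like options built by the parser.
public class OptionsSketch {
  private boolean syncFolder;    // corresponds to -update
  private boolean deleteMissing; // corresponds to -delete

  public void setSyncFolder(boolean syncFolder) {
    this.syncFolder = syncFolder;
  }

  public void setDeleteMissing(boolean deleteMissing) {
    // -delete is only meaningful together with -update (or -overwrite).
    if (deleteMissing && !syncFolder) {
      throw new IllegalArgumentException(
          "Delete missing is applicable only with update or overwrite options");
    }
    this.deleteMissing = deleteMissing;
  }
}
{code}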



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-21 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343118#comment-15343118
 ] 

Uma Maheswara Rao G edited comment on HDFS-10473 at 6/22/16 12:46 AM:
--

Failures seem to be unrelated, and I have just filed a JIRA for them: 
HDFS-10555


was (Author: umamaheswararao):
I have just filed a JIRA for failures: HDFS-10555

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch, HDFS-10473-02.patch, 
> HDFS-10473-03.patch, HDFS-10473-04.patch
>
>
> Currently some of the existing storage policies are not suitable for 
> striped-layout files.
> This JIRA proposes to reject setting a storage policy on striped files.
> Another thought is to allow only suitable storage policies, like ALL_SSD.
> Since the major use case of EC is cold data, this may not be of high 
> importance. So, I am ok with rejecting setting a storage policy on striped 
> files at this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for the offline discussion on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-21 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343118#comment-15343118
 ] 

Uma Maheswara Rao G commented on HDFS-10473:


I have just filed a JIRA for failures: HDFS-10555

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch, HDFS-10473-02.patch, 
> HDFS-10473-03.patch, HDFS-10473-04.patch
>
>
> Currently some of the existing storage policies are not suitable for 
> striped-layout files.
> This JIRA proposes to reject setting a storage policy on striped files.
> Another thought is to allow only suitable storage policies, like ALL_SSD.
> Since the major use case of EC is cold data, this may not be of high 
> importance. So, I am ok with rejecting setting a storage policy on striped 
> files at this stage. Please suggest if others have some thoughts on this.
> Thanks [~zhz] for the offline discussion on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10555) Unable to loadFSEdits due to a failure in readCachePoolInfo

2016-06-21 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343114#comment-15343114
 ] 

Uma Maheswara Rao G commented on HDFS-10555:


{noformat}
testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1/current/edits_001-094;
 failing over to edit log 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2/current/edits_001-094
java.io.IOException: Unknown flag in CachePoolInfo: 63
at 
org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.readCachePoolInfo(FSImageSerialization.java:687)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$AddCachePoolOp.readFields(FSEditLogOp.java:3974)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$LengthPrefixedReader.decodeOp(FSEditLogOp.java:4747)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.readOp(FSEditLogOp.java:4607)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:202)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:249)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:189)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:196)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:149)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:837)
{noformat}

> Unable to loadFSEdits due to a failure in readCachePoolInfo
> ---
>
> Key: HDFS-10555
> URL: https://issues.apache.org/jira/browse/HDFS-10555
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Critical
>
> Recently some tests are failing, unable to loadFSEdits due to a failure in 
> readCachePoolInfo.
> The problem is in the following code in
> FSImageSerialization.java:
> {code}
>   }
> if ((flags & ~0x2F) != 0) {
>   throw new IOException("Unknown flag in CachePoolInfo: " + flags);
> }
> {code}
> When every CachePoolInfo field is set, {{flags}} is 0x3F, so {{flags & ~0x2F}} 
> evaluates to a non-zero value and this check throws. The mask was widened 
> from ~0x1F to ~0x2F when the 0x20 flag was added, but it should have become 
> ~0x3F (0x1F | 0x20 = 0x3F).
> To fix this issue, we can change the mask to ~0x3F.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10555) Unable to loadFSEdits due to a failure in readCachePoolInfo

2016-06-21 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343114#comment-15343114
 ] 

Uma Maheswara Rao G edited comment on HDFS-10555 at 6/22/16 12:44 AM:
--

{noformat}
testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1/current/edits_001-094;
 failing over to edit log 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2/current/edits_001-094
java.io.IOException: Unknown flag in CachePoolInfo: 63
at 
org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.readCachePoolInfo(FSImageSerialization.java:687)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$AddCachePoolOp.readFields(FSEditLogOp.java:3974)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$LengthPrefixedReader.decodeOp(FSEditLogOp.java:4747)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.readOp(FSEditLogOp.java:4607)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:202)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:249)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:189)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:196)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:149)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:837)

{noformat}


was (Author: umamaheswararao):
{noformat}
testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1/current/edits_001-094;
 failing over to edit log 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2/current/edits_001-094
java.io.IOException: Unknown flag in CachePoolInfo: 63
at 
org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.readCachePoolInfo(FSImageSerialization.java:687)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$AddCachePoolOp.readFields(FSEditLogOp.java:3974)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$LengthPrefixedReader.decodeOp(FSEditLogOp.java:4747)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.readOp(FSEditLogOp.java:4607)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:202)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:249)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:189)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:196)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:149)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:837)
{noformat}

> Unable to loadFSEdits due to a failure in readCachePoolInfo
> ---
>
> Key: HDFS-10555
> URL: https://issues.apache.org/jira/browse/HDFS-10555
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Critical
>
> Recently some tests have been failing because loadFSEdits fails in 
> readCachePoolInfo.
> Here is the relevant code in
> FSImageSerialization.java:
> {code}
>   }
> if ((flags & ~0x2F) != 0) {
>   throw new IOException("Unknown flag in CachePoolInfo: " + flags);
> }
> {code}
> When all the CachePool fields are set to true, flags & ~0x2F evaluates to a 
> non-zero value, so this check fails. The regression comes from adding the 
> 0x20 flag while changing the mask only from ~0x1F to ~0x2F.
> To fix this issue, we could change the mask value to ~0x3F.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10555) Unable to loadFSEdits due to a failure in readCachePoolInfo

2016-06-21 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HDFS-10555:
--

 Summary: Unable to loadFSEdits due to a failure in 
readCachePoolInfo
 Key: HDFS-10555
 URL: https://issues.apache.org/jira/browse/HDFS-10555
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
Priority: Critical


Recently some tests have been failing because loadFSEdits fails in 
readCachePoolInfo.
Here is the relevant code in
FSImageSerialization.java:
{code}
  }
if ((flags & ~0x2F) != 0) {
  throw new IOException("Unknown flag in CachePoolInfo: " + flags);
}
{code}

When all the CachePool fields are set to true, flags & ~0x2F evaluates to a 
non-zero value, so this check fails. The regression comes from adding the 
0x20 flag while changing the mask only from ~0x1F to ~0x2F.
To fix this issue, we could change the mask value to ~0x3F.
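As a standalone check of the arithmetic, here is a hedged sketch (not the 
actual FSImageSerialization code; the six-bit flag layout is inferred from the 
description): 0x2F is 0b101111, so its complement still keeps bit 0x10 set, and 
a fully populated flags value of 0x3F (63, matching the "Unknown flag in 
CachePoolInfo: 63" message in the stack trace) is rejected, while a ~0x3F mask 
accepts it.
{code}
// Standalone sketch: why the ~0x2F mask rejects a fully populated
// CachePoolInfo. Six boolean fields -> six flag bits -> flags == 0x3F (63).
public class CachePoolMaskSketch {
  public static void main(String[] args) {
    int flags = 0x3F;                          // all six fields present
    System.out.println((flags & ~0x2F) != 0);  // true  -> IOException thrown
    System.out.println((flags & ~0x3F) != 0);  // false -> widened mask passes
  }
}
{code}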



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10550) DiskBalancer: fix issue of order dependency in iteration in ReportCommand test

2016-06-21 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10550:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~xiaobingo] Thanks for the contribution. [~eddyxu] Thank you for the code 
reviews. I have committed this to the feature branch.

> DiskBalancer: fix issue of order dependency in iteration in ReportCommand test
> --
>
> Key: HDFS-10550
> URL: https://issues.apache.org/jira/browse/HDFS-10550
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10550-HDFS-1312.000.patch, 
> HDFS-10550-HDFS-1312.001.patch, HDFS-10550-HDFS-1312.002.patch
>
>
> TestDiskBalancerCommand#testReportNode assumed an order for the result 
> entries returned; however, DiskBalancerDataNode#volumeSets and 
> DiskBalancerVolumeSet#volumes make no guarantee on order, since they are 
> backed by HashMap. The test should be fixed so it does not assume an order.
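For illustration, the usual way to remove such an order dependency is to 
compare sets rather than positional entries; a hedged sketch follows (the names 
are illustrative, not the actual test code):
{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of an order-independent assertion: collect the report entries into
// a Set and compare against the expected Set, so the HashMap iteration order
// of volumeSets/volumes no longer matters.
public class OrderIndependentCheckSketch {
  public static void main(String[] args) {
    List<String> reported = Arrays.asList("volume-b", "volume-a"); // any order
    Set<String> expected = new HashSet<>(Arrays.asList("volume-a", "volume-b"));
    if (!new HashSet<>(reported).equals(expected)) {
      throw new AssertionError("unexpected report entries: " + reported);
    }
    System.out.println("order-independent check passed");
  }
}
{code}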



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10544) Balancer doesn't work with IPFailoverProxyProvider

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343109#comment-15343109
 ] 

Hadoop QA commented on HDFS-10544:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 
new + 135 unchanged - 0 fixed = 137 total (was 135) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 9s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 18s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812321/HDFS-10544.03.patch |
| JIRA Issue | HDFS-10544 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f8f3003d51e2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d8107fc |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15859/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15859/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15859/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-10550) DiskBalancer: fix issue of order dependency in iteration in ReportCommand test

2016-06-21 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343108#comment-15343108
 ] 

Anu Engineer commented on HDFS-10550:
-

+1, for the v2 patch. I will commit this shortly.

> DiskBalancer: fix issue of order dependency in iteration in ReportCommand test
> --
>
> Key: HDFS-10550
> URL: https://issues.apache.org/jira/browse/HDFS-10550
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10550-HDFS-1312.000.patch, 
> HDFS-10550-HDFS-1312.001.patch, HDFS-10550-HDFS-1312.002.patch
>
>
> TestDiskBalancerCommand#testReportNode assumed an order for the result 
> entries returned; however, DiskBalancerDataNode#volumeSets and 
> DiskBalancerVolumeSet#volumes make no guarantee on order, since they are 
> backed by HashMap. The test should be fixed so it does not assume an order.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10551) o.a.h.h.s.diskbalancer.command.Command does not actually verify options as expected.

2016-06-21 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342936#comment-15342936
 ] 

Lei (Eddy) Xu edited comment on HDFS-10551 at 6/21/16 11:22 PM:


Ok, please upload the merged diff patch to this JIRA and let it run through 
Jenkins.

Thanks a lot



was (Author: eddyxu):
Ok, please do so.

> o.a.h.h.s.diskbalancer.command.Command does not actually verify options as 
> expected.
> 
>
> Key: HDFS-10551
> URL: https://issues.apache.org/jira/browse/HDFS-10551
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10551-HDFS-1312.001.patch
>
>
> In {{diskbalancer.command.Command#verifyCommandOptions}}, the following code 
> does not do what it is expected to do:
> {code}
> if (!validArgs.containsKey(opt.getArgName())) {
> {code}
> {{opt.getArgName()}} always returns "arg" instead of, e.g., {{report}} or 
> {{uri}}, which is the parameter the check is meant to verify.
> It should use {{opt.getLongOpt()}} to get the option names. The check passes 
> on the branch only because {{opt.getArgName()}} always returns {{"arg"}}, 
> which happens, accidentally, to be in {{validArgs}}. However, I don't think 
> that is the intention of this function.
> Additionally, in the following code
> {code}
> validArguments.append("Valid arguments are : %n");
> {code}
> the {{%n}} is never used.
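To see the getArgName/getLongOpt distinction concretely, here is a hedged 
sketch using the Apache Commons CLI builder API (assuming commons-cli 1.3+; 
the option shown is illustrative, not the DiskBalancer code):
{code}
import org.apache.commons.cli.Option;

// getArgName() is only the display placeholder for the option's argument,
// while getLongOpt() is the option's actual name -- the key that
// verifyCommandOptions should look up in validArgs.
public class OptionNameSketch {
  public static void main(String[] args) {
    Option report = Option.builder()
        .longOpt("report")   // the real option name
        .hasArg()
        .argName("arg")      // placeholder shown in help output
        .build();
    System.out.println(report.getLongOpt()); // "report" -> correct lookup key
    System.out.println(report.getArgName()); // "arg"    -> wrong lookup key
  }
}
{code}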



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10552) DiskBalancer "-query" results in NPE if no plan for the node

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15343006#comment-15343006
 ] 

Hadoop QA commented on HDFS-10552:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
44s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 14s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 42s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
|   | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:c88012f |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812309/HDFS-10552-HDFS-1312.004.patch
 |
| JIRA Issue | HDFS-10552 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 692a60398e2c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-1312 / 3a0a329 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15858/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15858/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15858/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15858/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer "-query" results in NPE if no plan for the node
> 
>
> Key: HDFS-10552
>   

[jira] [Commented] (HDFS-10469) Add number of active xceivers to datanode metrics

2016-06-21 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342988#comment-15342988
 ] 

Xiaoyu Yao commented on HDFS-10469:
---

Thanks [~hanishakoneru] for updating the patch. The V4 patch looks good to me, 
and the unit test failures don't seem related to this patch. 
+1, and I will rerun the failed tests and commit it if everything passes 
locally. 

> Add number of active xceivers to datanode metrics
> -
>
> Key: HDFS-10469
> URL: https://issues.apache.org/jira/browse/HDFS-10469
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-10469.000.patch, HDFS-10469.001.patch, 
> HDFS-10469.002.patch, HDFS-10469.003.patch
>
>
> Number of active xceivers is exposed via JMX, but not in Datanode metrics. We 
> should add it to Datanode metrics for monitoring the load on Datanodes.
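For background, gauges in Hadoop are usually published through the metrics2 
annotations; below is a hedged sketch of that general pattern (the class, 
field, and method names are illustrative, not the actual HDFS-10469 patch):
{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableGaugeInt;

// Sketch of the metrics2 pattern: the metrics system injects the gauge when
// the annotated source is registered (e.g. with DefaultMetricsSystem).
@Metrics(about = "Example DataNode metrics", context = "dfs")
public class ExampleXceiverMetrics {
  @Metric("Count of active dataXceiver threads")
  MutableGaugeInt activeXceiversCount;

  void xceiverStarted() { activeXceiversCount.incr(); }
  void xceiverFinished() { activeXceiversCount.decr(); }
}
{code}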



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10469) Add number of active xceivers to datanode metrics

2016-06-21 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342988#comment-15342988
 ] 

Xiaoyu Yao edited comment on HDFS-10469 at 6/21/16 11:02 PM:
-

Thanks [~hanishakoneru] for updating the patch. The V3 patch looks good to me, 
and the unit test failures don't seem related to this patch. 
+1, and I will rerun the failed tests and commit it if everything passes 
locally. 


was (Author: xyao):
Thanks [~hanishakoneru] for updating the patch. The V4 patch looks good to me, 
and the unit test failures don't seem related to this patch. 
+1, and I will rerun the failed tests and commit it if everything passes 
locally. 

> Add number of active xceivers to datanode metrics
> -
>
> Key: HDFS-10469
> URL: https://issues.apache.org/jira/browse/HDFS-10469
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-10469.000.patch, HDFS-10469.001.patch, 
> HDFS-10469.002.patch, HDFS-10469.003.patch
>
>
> Number of active xceivers is exposed via JMX, but not in Datanode metrics. We 
> should add it to Datanode metrics for monitoring the load on Datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10551) o.a.h.h.s.diskbalancer.command.Command does not actually verify options as expected.

2016-06-21 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342936#comment-15342936
 ] 

Lei (Eddy) Xu commented on HDFS-10551:
--

Ok, please do so.

> o.a.h.h.s.diskbalancer.command.Command does not actually verify options as 
> expected.
> 
>
> Key: HDFS-10551
> URL: https://issues.apache.org/jira/browse/HDFS-10551
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10551-HDFS-1312.001.patch
>
>
> In {{diskbalancer.command.Command#verifyCommandOptions}}, the following code 
> does not do what it is expected to do:
> {code}
> if (!validArgs.containsKey(opt.getArgName())) {
> {code}
> {{opt.getArgName()}} always returns "arg" instead of, e.g., {{report}} or 
> {{uri}}, which is the parameter the check is meant to verify.
> It should use {{opt.getLongOpt()}} to get the option names. The check passes 
> on the branch only because {{opt.getArgName()}} always returns {{"arg"}}, 
> which happens, accidentally, to be in {{validArgs}}. However, I don't think 
> that is the intention of this function.
> Additionally, in the following code
> {code}
> validArguments.append("Valid arguments are : %n");
> {code}
> the {{%n}} is never used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10551) o.a.h.h.s.diskbalancer.command.Command does not actually verify options as expected.

2016-06-21 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342922#comment-15342922
 ] 

Anu Engineer commented on HDFS-10551:
-

[~eddyxu] If you don't have any issues with the code, I will commit HDFS-10552, 
HDFS-10551, and HDFS-10550. I just verified this in a private branch. I will 
have to rebase HDFS-10550, since these 3 patches have conflicting changes in 
{{TestDiskBalancerCommand.java}}. Then I will create a diff patch against 
trunk, which should make it easy to port DiskBalancer to older branches.

> o.a.h.h.s.diskbalancer.command.Command does not actually verify options as 
> expected.
> 
>
> Key: HDFS-10551
> URL: https://issues.apache.org/jira/browse/HDFS-10551
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10551-HDFS-1312.001.patch
>
>
> In {{diskbalancer.command.Command#verifyCommandOptions}}, the following code 
> does not do what it is expected to do:
> {code}
> if (!validArgs.containsKey(opt.getArgName())) {
> {code}
> {{opt.getArgName()}} always returns "arg" instead of, e.g., {{report}} or 
> {{uri}}, which is the parameter the check is meant to verify.
> It should use {{opt.getLongOpt()}} to get the option names. The check passes 
> on the branch only because {{opt.getArgName()}} always returns {{"arg"}}, 
> which happens, accidentally, to be in {{validArgs}}. However, I don't think 
> that is the intention of this function.
> Additionally, in the following code
> {code}
> validArguments.append("Valid arguments are : %n");
> {code}
> the {{%n}} is never used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10544) Balancer doesn't work with IPFailoverProxyProvider

2016-06-21 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10544:
-
Attachment: HDFS-10544.03.patch

Thought more about the two options. I think we should use the simpler approach 
of throwing {{IllegalArgumentException}} before checking whether the proxy 
provider works with logical URIs. This is because, regardless of whether the 
config uses {{ConfiguredFailoverProxyProvider}} or {{IPFailoverProxyProvider}}, 
a legal URI is needed by the constructor. So it is actually a valid requirement 
that {{nsId}} can form a legal URI.

Also updating the unit test {{TestDFSUtil}}. Basically, two of the tests were 
using logical URIs without specifying a {{FailoverProxyProvider}}. With such a 
config, the correct behavior of {{DFSUtil#getInternalNameServices}} is to try 
to resolve the logical URI instead of waiting for the proxy provider to do so. 
But in those two tests, the config doesn't have the correct 
{{dfs.namenode.servicerpc-address}} entries to resolve the logical URIs.

> Balancer doesn't work with IPFailoverProxyProvider
> --
>
> Key: HDFS-10544
> URL: https://issues.apache.org/jira/browse/HDFS-10544
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-10544.00.patch, HDFS-10544.01.patch, 
> HDFS-10544.02.patch, HDFS-10544.03.patch
>
>
> Right now {{Balancer}} gets the NN URIs through 
> {{DFSUtil#getNameServiceUris}}, which returns logical URIs when HA is 
> enabled. If {{IPFailoverProxyProvider}} is used, {{Balancer}} will not be 
> able to start.
> I think the bug is in {{DFSUtil#getNameServiceUris}}:
> {code}
> for (String nsId : getNameServiceIds(conf)) {
>   if (HAUtil.isHAEnabled(conf, nsId)) {
> // Add the logical URI of the nameservice.
> try {
>   ret.add(new URI(HdfsConstants.HDFS_URI_SCHEME + "://" + nsId));
> {code}
> The {{if}} clause should also consider whether the {{FailoverProxyProvider}} 
> has {{useLogicalURI}} enabled. If not, {{getNameServiceUris}} should try to 
> resolve the physical URI for this nsId.
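A self-contained sketch of the proposed control flow follows; the helper 
methods are hypothetical stand-ins for the real config lookups, not the actual 
patch:
{code}
import java.net.URI;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch: add the logical URI only when the failover proxy provider uses
// logical URIs; otherwise resolve a physical address for the nameservice.
public class NameServiceUriSketch {
  static boolean isHAEnabled(String nsId) { return true; }
  static boolean providerUsesLogicalUri(String nsId) {
    return false; // e.g. IPFailoverProxyProvider does not use logical URIs
  }
  static URI resolvePhysicalUri(String nsId) {
    return URI.create("hdfs://nn1.example.com:8020"); // hypothetical address
  }

  public static void main(String[] args) {
    List<URI> ret = new ArrayList<>();
    for (String nsId : Arrays.asList("mycluster")) {
      if (isHAEnabled(nsId) && providerUsesLogicalUri(nsId)) {
        ret.add(URI.create("hdfs://" + nsId));  // logical URI
      } else {
        ret.add(resolvePhysicalUri(nsId));      // physical URI
      }
    }
    System.out.println(ret);
  }
}
{code}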



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10550) DiskBalancer: fix issue of order dependency in iteration in ReportCommand test

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342897#comment-15342897
 ] 

Hadoop QA commented on HDFS-10550:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
34s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 58s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 16s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:c88012f |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812296/HDFS-10550-HDFS-1312.002.patch
 |
| JIRA Issue | HDFS-10550 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f405bc9c9713 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-1312 / 3a0a329 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15857/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15857/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15857/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15857/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer: fix issue of order dependency in iteration in ReportCommand test
> --
>
> Key: HDFS-10550
> URL: 

[jira] [Commented] (HDFS-10551) o.a.h.h.s.diskbalancer.command.Command does not actually verify options as expected.

2016-06-21 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342889#comment-15342889
 ] 

Anu Engineer commented on HDFS-10551:
-

That is correct; the fix for that failure is part of HDFS-10550, so the 
failure is not related to this patch. As soon as we commit that patch, the 
failure will go away. Please see the Jenkins run on that JIRA.

> o.a.h.h.s.diskbalancer.command.Command does not actually verify options as 
> expected.
> 
>
> Key: HDFS-10551
> URL: https://issues.apache.org/jira/browse/HDFS-10551
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10551-HDFS-1312.001.patch
>
>
> In {{diskbalancer.command.Command#verifyCommandOptions}}, the following code 
> does not do what it is expected to do:
> {code}
> if (!validArgs.containsKey(opt.getArgName())) {
> {code}
> {{opt.getArgName()}} always returns "arg" instead of, e.g., {{report}} or 
> {{uri}}, which is the parameter the check is meant to verify.
> It should use {{opt.getLongOpt()}} to get the option names. The check passes 
> on the branch only because {{opt.getArgName()}} always returns {{"arg"}}, 
> which happens, accidentally, to be in {{validArgs}}. However, I don't think 
> that is the intention of this function.
> Additionally, in the following code
> {code}
> validArguments.append("Valid arguments are : %n");
> {code}
> the {{%n}} is never used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10551) o.a.h.h.s.diskbalancer.command.Command does not actually verify options as expected.

2016-06-21 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342880#comment-15342880
 ] 

Lei (Eddy) Xu edited comment on HDFS-10551 at 6/21/16 10:20 PM:


Hi, [~anu]

It seems that TestDiskBalancerCommand fails on jenkins here.


was (Author: eddyxu):
Hi, [~anu]

It seems that TestDiskBalancerCommand fails on trunk here.

> o.a.h.h.s.diskbalancer.command.Command does not actually verify options as 
> expected.
> 
>
> Key: HDFS-10551
> URL: https://issues.apache.org/jira/browse/HDFS-10551
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10551-HDFS-1312.001.patch
>
>
> In {{diskbalancer.command.Command#verifyCommandOptions}}, the following code 
> does not do what it is expected to do:
> {code}
> if (!validArgs.containsKey(opt.getArgName())) {
> {code}
> {{opt.getArgName()}} always returns "arg" instead of, e.g., {{report}} or 
> {{uri}}, which is the parameter the check is meant to verify.
> It should use {{opt.getLongOpt()}} to get the option names. The check passes 
> on the branch only because {{opt.getArgName()}} always returns {{"arg"}}, 
> which happens, accidentally, to be in {{validArgs}}. However, I don't think 
> that is the intention of this function.
> Additionally, in the following code
> {code}
> validArguments.append("Valid arguments are : %n");
> {code}
> the {{%n}} is never used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10551) o.a.h.h.s.diskbalancer.command.Command does not actually verify options as expected.

2016-06-21 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342880#comment-15342880
 ] 

Lei (Eddy) Xu commented on HDFS-10551:
--

Hi, [~anu]

It seems that TestDiskBalancerCommand fails on trunk here.

> o.a.h.h.s.diskbalancer.command.Command does not actually verify options as 
> expected.
> 
>
> Key: HDFS-10551
> URL: https://issues.apache.org/jira/browse/HDFS-10551
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10551-HDFS-1312.001.patch
>
>
> In {{diskbalancer.command.Command#verifyCommandOptions}}, the following code 
> does not do what it is expected to do:
> {code}
> if (!validArgs.containsKey(opt.getArgName())) {
> {code}
> {{opt.getArgName()}} always returns "arg" instead of, e.g., {{report}} or 
> {{uri}}, which is the parameter the check is meant to verify.
> It should use {{opt.getLongOpt()}} to get the option names. The check passes 
> on the branch only because {{opt.getArgName()}} always returns {{"arg"}}, 
> which happens, accidentally, to be in {{validArgs}}. However, I don't think 
> that is the intention of this function.
> Additionally, in the following code
> {code}
> validArguments.append("Valid arguments are : %n");
> {code}
> the {{%n}} is never used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10552) DiskBalancer "-query" results in NPE if no plan for the node

2016-06-21 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342873#comment-15342873
 ] 

Anu Engineer commented on HDFS-10552:
-

I just applied HDFS-10552 followed by HDFS-10551 on a branch that is based on 
HDFS-1312. It worked. Can you please let me know what failure you are seeing? 

> DiskBalancer "-query" results in NPE if no plan for the node
> 
>
> Key: HDFS-10552
> URL: https://issues.apache.org/jira/browse/HDFS-10552
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10552-HDFS-1312.001.patch, 
> HDFS-10552-HDFS-1312.002.patch, HDFS-10552-HDFS-1312.003.patch, 
> HDFS-10552-HDFS-1312.004.patch
>
>
> {code}
> 16/06/20 11:50:16 INFO command.Command: Executing "query plan" command.
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$QueryPlanStatusResponseProto$Builder.setPlanID(ClientDatanodeProtocolProtos.java:12782)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.queryDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:340)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17513)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {code}
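The NPE comes from handing a null plan ID to the generated protobuf builder, 
whose setters reject null. A hedged, standalone sketch of the null-guard 
pattern follows (illustrative, not the actual HDFS-10552 patch); it matches 
the empty "Plan ID:" / NO_PLAN output reported later in this thread:
{code}
// Protobuf builders throw NullPointerException from setX(null), so substitute
// an empty string when the datanode has no plan (the NO_PLAN case).
public class QueryPlanGuardSketch {
  static String planIdForResponse(String planIdOrNull) {
    return planIdOrNull == null ? "" : planIdOrNull;
  }

  public static void main(String[] args) {
    System.out.println("Plan ID: " + planIdForResponse(null));     // empty
    System.out.println("Plan ID: " + planIdForResponse("abc123"));
  }
}
{code}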



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10552) DiskBalancer "-query" results in NPE if no plan for the node

2016-06-21 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342863#comment-15342863
 ] 

Anu Engineer commented on HDFS-10552:
-

https://issues.apache.org/jira/browse/HDFS-10551 seems to have passed on 
HDFS-1312. Let me commit all these patches, and I will post a master patch 
against trunk. 

> DiskBalancer "-query" results in NPE if no plan for the node
> 
>
> Key: HDFS-10552
> URL: https://issues.apache.org/jira/browse/HDFS-10552
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10552-HDFS-1312.001.patch, 
> HDFS-10552-HDFS-1312.002.patch, HDFS-10552-HDFS-1312.003.patch, 
> HDFS-10552-HDFS-1312.004.patch
>
>
> {code}
> 16/06/20 11:50:16 INFO command.Command: Executing "query plan" command.
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$QueryPlanStatusResponseProto$Builder.setPlanID(ClientDatanodeProtocolProtos.java:12782)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.queryDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:340)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17513)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10552) DiskBalancer "-query" results in NPE if no plan for the node

2016-06-21 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342853#comment-15342853
 ] 

Lei (Eddy) Xu commented on HDFS-10552:
--

Looks good. Thanks.

However, HDFS-10551 has failed on trunk and my local branch. 

> DiskBalancer "-query" results in NPE if no plan for the node
> 
>
> Key: HDFS-10552
> URL: https://issues.apache.org/jira/browse/HDFS-10552
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10552-HDFS-1312.001.patch, 
> HDFS-10552-HDFS-1312.002.patch, HDFS-10552-HDFS-1312.003.patch, 
> HDFS-10552-HDFS-1312.004.patch
>
>
> {code}
> 16/06/20 11:50:16 INFO command.Command: Executing "query plan" command.
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$QueryPlanStatusResponseProto$Builder.setPlanID(ClientDatanodeProtocolProtos.java:12782)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.queryDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:340)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17513)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10552) DiskBalancer "-query" results in NPE if no plan for the node

2016-06-21 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342843#comment-15342843
 ] 

Anu Engineer commented on HDFS-10552:
-

Results of testing on Physical cluster with patch 4. 
{noformat}
[hdfs@y124 aengineer]$ hdfs diskbalancer -query `hostname`
16/06/21 22:01:09 INFO command.Command: Executing "query plan" command.
16/06/21 22:01:09 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Plan ID:
Result: NO_PLAN
{noformat}

> DiskBalancer "-query" results in NPE if no plan for the node
> 
>
> Key: HDFS-10552
> URL: https://issues.apache.org/jira/browse/HDFS-10552
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10552-HDFS-1312.001.patch, 
> HDFS-10552-HDFS-1312.002.patch, HDFS-10552-HDFS-1312.003.patch, 
> HDFS-10552-HDFS-1312.004.patch
>
>
> {code}
> 16/06/20 11:50:16 INFO command.Command: Executing "query plan" command.
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$QueryPlanStatusResponseProto$Builder.setPlanID(ClientDatanodeProtocolProtos.java:12782)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.queryDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:340)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17513)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10342) BlockManager#createLocatedBlocks should not check corrupt replicas if none are corrupt

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342822#comment-15342822
 ] 

Hadoop QA commented on HDFS-10342:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 26s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 9 
new + 184 unchanged - 0 fixed = 193 total (was 184) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 24s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 42s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812286/HDFS-10342.001.patch |
| JIRA Issue | HDFS-10342 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9fb43497bdcc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b2c596c |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15856/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15856/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15856/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15856/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Created] (HDFS-10554) libhdfs++: signed to unsigned conversions are breaking things and compiler isn't issuing expected warnings

2016-06-21 Thread James Clampffer (JIRA)
James Clampffer created HDFS-10554:
--

 Summary: libhdfs++: signed to unsigned conversions are breaking 
things and compiler isn't issuing expected warnings
 Key: HDFS-10554
 URL: https://issues.apache.org/jira/browse/HDFS-10554
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer


There are at least two places where we use -1 to indicate unset/default values 
that end up getting cast to unsigned integers. The compiler should be smart 
enough to figure this out and issue a warning, but it isn't; we need to find 
out what's going on there. We also need to fix the places where this sort of 
thing has found its way into the code:

In URI
{code}
  // -1 if the port is undefined.
  optional get_port() const
  { return port; }
{code}

In Options (gets cast to uint64_t somewhere)
{code}
/**
 * Maximum number of retries for RPC operations
 **/
int max_rpc_retries;
static const int kNoRetry = -1;
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10552) DiskBalancer "-query" results in NPE if no plan for the node

2016-06-21 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10552:

Attachment: HDFS-10552-HDFS-1312.004.patch

Patch 4 removes the re-ordering of imports

> DiskBalancer "-query" results in NPE if no plan for the node
> 
>
> Key: HDFS-10552
> URL: https://issues.apache.org/jira/browse/HDFS-10552
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10552-HDFS-1312.001.patch, 
> HDFS-10552-HDFS-1312.002.patch, HDFS-10552-HDFS-1312.003.patch, 
> HDFS-10552-HDFS-1312.004.patch
>
>
> {code}
> 16/06/20 11:50:16 INFO command.Command: Executing "query plan" command.
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$QueryPlanStatusResponseProto$Builder.setPlanID(ClientDatanodeProtocolProtos.java:12782)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.queryDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:340)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17513)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10552) DiskBalancer "-query" results in NPE if no plan for the node

2016-06-21 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342691#comment-15342691
 ] 

Anu Engineer commented on HDFS-10552:
-

[~eddyxu] Since this is a completely new file, this change should apply to 
older branches without any conflicts. Can you please try to apply this patch 
to your branch? If you have not edited {{TestDiskBalancerCommand.java}} on 
your branch, I do not see where the conflict can come from. You are right that 
we certainly want each patch to be focused, but if your patch has 20 lines in 
total, does it matter? 

On second thought, I see where you are coming from. Have you applied the 
HDFS-10551 patch? That change can cause a conflict. It makes sense: without 
these five lines, both patches will apply on top of each other. 

If that is so, I will revert these five lines and make both patches work 
together. Sorry I missed that in your earlier, subtle comment. I will post 
patch 4 soon.



> DiskBalancer "-query" results in NPE if no plan for the node
> 
>
> Key: HDFS-10552
> URL: https://issues.apache.org/jira/browse/HDFS-10552
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10552-HDFS-1312.001.patch, 
> HDFS-10552-HDFS-1312.002.patch, HDFS-10552-HDFS-1312.003.patch
>
>
> {code}
> 16/06/20 11:50:16 INFO command.Command: Executing "query plan" command.
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$QueryPlanStatusResponseProto$Builder.setPlanID(ClientDatanodeProtocolProtos.java:12782)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.queryDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:340)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17513)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10441) libhdfs++: HA namenode support

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342686#comment-15342686
 ] 

Hadoop QA commented on HDFS-10441:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
27s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 22s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 24s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 23s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 35s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 3s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed CTEST tests | 
test_libhdfs_threaded_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812278/HDFS-10441.HDFS-8707.005.patch
 |
| JIRA Issue | HDFS-10441 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux e495f055b6a4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 71af408 |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15855/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_91.txt
 |
| CTEST logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15855/artifact/patchprocess/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_91.ctest
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15855/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_91.txt
 |
| JDK v1.7.0_101  Test Results | 

[jira] [Commented] (HDFS-10552) DiskBalancer "-query" results in NPE if no plan for the node

2016-06-21 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342658#comment-15342658
 ] 

Lei (Eddy) Xu commented on HDFS-10552:
--

bq. Did you mean do not ?

Yes, I meant "do not". Sorry for the typo.

bq. Since this is in the test file, I am not able to see the downside of 
cleaning up five lines.

It is just general good practice to keep a patch as small and clean as 
possible. A smaller patch is less error prone to review, because there is 
less to read, and it is easier to backport to branch-2, 2.7, 2.8, and so on.

It would be much appreciated if you could consider this. Thanks.

> DiskBalancer "-query" results in NPE if no plan for the node
> 
>
> Key: HDFS-10552
> URL: https://issues.apache.org/jira/browse/HDFS-10552
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10552-HDFS-1312.001.patch, 
> HDFS-10552-HDFS-1312.002.patch, HDFS-10552-HDFS-1312.003.patch
>
>
> {code}
> 16/06/20 11:50:16 INFO command.Command: Executing "query plan" command.
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$QueryPlanStatusResponseProto$Builder.setPlanID(ClientDatanodeProtocolProtos.java:12782)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.queryDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:340)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17513)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7959) WebHdfs logging is missing on Datanode

2016-06-21 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342650#comment-15342650
 ] 

Sangjin Lee commented on HDFS-7959:
---

Thanks for updating the patch, [~kihwal]! Just one more nit: can we use 
{{INTERNAL_SERVER_ERROR}} instead of the hard-coded numeral 500?

> WebHdfs logging is missing on Datanode
> --
>
> Key: HDFS-7959
> URL: https://issues.apache.org/jira/browse/HDFS-7959
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7959.1.branch-2.patch, HDFS-7959.1.trunk.patch, 
> HDFS-7959.2.branch-2.patch, HDFS-7959.2.trunk.patch, 
> HDFS-7959.branch-2.patch, HDFS-7959.patch, HDFS-7959.patch, HDFS-7959.patch, 
> HDFS-7959.trunk.patch
>
>
> After the conversion to netty, webhdfs requests are not logged on datanodes. 
> The existing jetty log only logs the non-webhdfs requests that come through 
> the internal proxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10543) hdfsRead read stops at block boundary

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342649#comment-15342649
 ] 

Hadoop QA commented on HDFS-10543:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
8s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 47s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 57s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 6m 1s {color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_91 with JDK 
v1.8.0_91 generated 1 new + 29 unchanged - 0 fixed = 30 total (was 29) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 59s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 5m 59s {color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_101 with JDK 
v1.7.0_101 generated 1 new + 29 unchanged - 0 fixed = 30 total (was 29) {color} 
|
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 40s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 21s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 28s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812277/HDFS-10543.HDFS-8707.003.patch
 |
| JIRA Issue | HDFS-10543 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux cbde2bd8e0c0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 71af408 |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| cc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15854/artifact/patchprocess/diff-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_91.txt
 |
| cc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15854/artifact/patchprocess/diff-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_101.txt
 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15854/testReport/ 

[jira] [Commented] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342646#comment-15342646
 ] 

Hadoop QA commented on HDFS-10473:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 19 
new + 81 unchanged - 0 fixed = 100 total (was 81) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 0s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 45s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.namenode.TestAddStripedBlocks |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812274/HDFS-10473-04.patch |
| JIRA Issue | HDFS-10473 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1c24042ab2a1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b2c596c |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15853/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15853/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15853/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15853/testReport/ |
| modules | C: 

[jira] [Commented] (HDFS-10552) DiskBalancer "-query" results in NPE if no plan for the node

2016-06-21 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342632#comment-15342632
 ] 

Anu Engineer commented on HDFS-10552:
-

bq. Please do move these lines around. Lets keep the patch small and focus on 
the fixes?

Did you mean do *not*?

The patch itself is a couple of lines (less than 10), so I took the 
opportunity to clean up before the merge. Since this is in the test file, I 
am not able to see the downside of cleaning up five lines. If this is 
causing an issue for you when backporting, or something else, let me know 
and I will revert the changes to these lines.

I did test this by hand on my cluster and was able to see a NO_PLAN output 
yesterday. I will rebuild with the latest patch and post the output here.


> DiskBalancer "-query" results in NPE if no plan for the node
> 
>
> Key: HDFS-10552
> URL: https://issues.apache.org/jira/browse/HDFS-10552
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10552-HDFS-1312.001.patch, 
> HDFS-10552-HDFS-1312.002.patch, HDFS-10552-HDFS-1312.003.patch
>
>
> {code}
> 16/06/20 11:50:16 INFO command.Command: Executing "query plan" command.
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$QueryPlanStatusResponseProto$Builder.setPlanID(ClientDatanodeProtocolProtos.java:12782)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.queryDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:340)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17513)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10511) libhdfs++: make error returning mechanism consistent across all hdfs operations

2016-06-21 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-10511:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> libhdfs++: make error returning mechanism consistent across all hdfs 
> operations
> ---
>
> Key: HDFS-10511
> URL: https://issues.apache.org/jira/browse/HDFS-10511
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10511.HDFS-8707.000.patch, 
> HDFS-10511.HDFS-8707.000.patch, HDFS-10511.HDFS-8707.001.patch, 
> HDFS-10511.HDFS-8707.002.patch, HDFS-10511.HDFS-8707.003.patch
>
>
> Errno should always be set.
> If function is returning a code on stack, it should be consistent with errno.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10515) libhdfs++: Implement mkdirs, rmdir, rename, and remove

2016-06-21 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-10515:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> libhdfs++: Implement mkdirs, rmdir, rename, and remove
> --
>
> Key: HDFS-10515
> URL: https://issues.apache.org/jira/browse/HDFS-10515
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10515.HDFS-8707.000.patch, 
> HDFS-10515.HDFS-8707.001.patch, HDFS-10515.HDFS-8707.002.patch, 
> HDFS-10515.HDFS-8707.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10524) libhdfs++: Implement chmod and chown

2016-06-21 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-10524:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> libhdfs++: Implement chmod and chown
> 
>
> Key: HDFS-10524
> URL: https://issues.apache.org/jira/browse/HDFS-10524
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Anatoli Shein
>Assignee: Anatoli Shein
> Attachments: HDFS-10524.HDFS-8707.000.patch, 
> HDFS-10524.HDFS-8707.001.patch, HDFS-10524.HDFS-8707.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10550) DiskBalancer: fix issue of order dependency in iteration in ReportCommand test

2016-06-21 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342628#comment-15342628
 ] 

Xiaobing Zhou commented on HDFS-10550:
--

[~eddyxu], thank you for the comments. I posted the v002 patch; it removes 
the duplicate.

> DiskBalancer: fix issue of order dependency in iteration in ReportCommand test
> --
>
> Key: HDFS-10550
> URL: https://issues.apache.org/jira/browse/HDFS-10550
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10550-HDFS-1312.000.patch, 
> HDFS-10550-HDFS-1312.001.patch, HDFS-10550-HDFS-1312.002.patch
>
>
> TestDiskBalancerCommand#testReportNode assumed order of result entries 
> returned, however, DiskBalancerDataNode#volumeSets and 
> DiskBalancerVolumeSet#volumes have no guarantee on the order since they are 
> initialized as HashMap. The test should be fixed by not assuming the order.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10550) DiskBalancer: fix issue of order dependency in iteration in ReportCommand test

2016-06-21 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10550:
-
Attachment: HDFS-10550-HDFS-1312.002.patch

> DiskBalancer: fix issue of order dependency in iteration in ReportCommand test
> --
>
> Key: HDFS-10550
> URL: https://issues.apache.org/jira/browse/HDFS-10550
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10550-HDFS-1312.000.patch, 
> HDFS-10550-HDFS-1312.001.patch, HDFS-10550-HDFS-1312.002.patch
>
>
> TestDiskBalancerCommand#testReportNode assumed order of result entries 
> returned, however, DiskBalancerDataNode#volumeSets and 
> DiskBalancerVolumeSet#volumes have no guarantee on the order since they are 
> initialized as HashMap. The test should be fixed by not assuming the order.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10460) Erasure Coding: Recompute block checksum for a particular range less than file size on the fly by reconstructing missed block

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342625#comment-15342625
 ] 

Hadoop QA commented on HDFS-10460:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 6s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 34s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 31s 
{color} | {color:red} hadoop-hdfs-project: The patch generated 1 new + 115 
unchanged - 0 fixed = 116 total (was 115) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s 
{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 27s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 7s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812267/HDFS-10460-02.patch |
| JIRA Issue | HDFS-10460 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 95690f97b567 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b2c596c |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Comment Edited] (HDFS-10552) DiskBalancer "-query" results in NPE if no plan for the node

2016-06-21 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342604#comment-15342604
 ] 

Lei (Eddy) Xu edited comment on HDFS-10552 at 6/21/16 8:32 PM:
---

Ok, thanks a lot, [~anu]

Just one small issue here: 
{code}
import static org.hamcrest.CoreMatchers.allOf;
import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThat;
{code}

Please do move these lines around. Lets keep the patch small and focus on the 
fixes?

Btw, can you also check that the stdout / stderr is what you expected from 
the CLI? Thanks!


was (Author: eddyxu):
Ok, thanks a lot, [~anu]

Just one small issue here: 
{code}
import static org.hamcrest.CoreMatchers.allOf;
import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThat;
{code}

Please do move these lines around. Lets keep the patch small and focus on the 
fixes?

> DiskBalancer "-query" results in NPE if no plan for the node
> 
>
> Key: HDFS-10552
> URL: https://issues.apache.org/jira/browse/HDFS-10552
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10552-HDFS-1312.001.patch, 
> HDFS-10552-HDFS-1312.002.patch, HDFS-10552-HDFS-1312.003.patch
>
>
> {code}
> 16/06/20 11:50:16 INFO command.Command: Executing "query plan" command.
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$QueryPlanStatusResponseProto$Builder.setPlanID(ClientDatanodeProtocolProtos.java:12782)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.queryDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:340)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17513)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10552) DiskBalancer "-query" results in NPE if no plan for the node

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342610#comment-15342610
 ] 

Hadoop QA commented on HDFS-10552:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
1s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 49s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 32s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:c88012f |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812271/HDFS-10552-HDFS-1312.003.patch
 |
| JIRA Issue | HDFS-10552 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e70021015c0b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-1312 / 3a0a329 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15851/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15851/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15851/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15851/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer "-query" results in NPE if no plan for the node
> 

[jira] [Commented] (HDFS-10552) DiskBalancer "-query" results in NPE if no plan for the node

2016-06-21 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342604#comment-15342604
 ] 

Lei (Eddy) Xu commented on HDFS-10552:
--

Ok, thanks a lot, [~anu]

Just one small issue here: 
{code}
import static org.hamcrest.CoreMatchers.allOf;
import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThat;
{code}

Please do move these lines around. Lets keep the patch small and focus on the 
fixes?

> DiskBalancer "-query" results in NPE if no plan for the node
> 
>
> Key: HDFS-10552
> URL: https://issues.apache.org/jira/browse/HDFS-10552
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10552-HDFS-1312.001.patch, 
> HDFS-10552-HDFS-1312.002.patch, HDFS-10552-HDFS-1312.003.patch
>
>
> {code}
> 16/06/20 11:50:16 INFO command.Command: Executing "query plan" command.
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$QueryPlanStatusResponseProto$Builder.setPlanID(ClientDatanodeProtocolProtos.java:12782)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.queryDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:340)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17513)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10550) DiskBalancer: fix issue of order dependency in iteration in ReportCommand test

2016-06-21 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342598#comment-15342598
 ] 

Lei (Eddy) Xu commented on HDFS-10550:
--

One small issue:

{code}
  containsString("DISK"),
  containsString("/tmp/disk/KmHefYNURo"),
{code}

the {{containsString("DISK")}} is duplicated from the previous line.

+1 pending the fix.

> DiskBalancer: fix issue of order dependency in iteration in ReportCommand test
> --
>
> Key: HDFS-10550
> URL: https://issues.apache.org/jira/browse/HDFS-10550
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10550-HDFS-1312.000.patch, 
> HDFS-10550-HDFS-1312.001.patch
>
>
> TestDiskBalancerCommand#testReportNode assumed order of result entries 
> returned, however, DiskBalancerDataNode#volumeSets and 
> DiskBalancerVolumeSet#volumes have no guarantee on the order since they are 
> initialized as HashMap. The test should be fixed by not assuming the order.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10342) BlockManager#createLocatedBlocks should not check corrupt replicas if none are corrupt

2016-06-21 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated HDFS-10342:
---
Status: Patch Available  (was: Open)

> BlockManager#createLocatedBlocks should not check corrupt replicas if none 
> are corrupt
> --
>
> Key: HDFS-10342
> URL: https://issues.apache.org/jira/browse/HDFS-10342
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Kuhu Shukla
> Attachments: HDFS-10342.001.patch
>
>
> {{corruptReplicas#isReplicaCorrupt(block, node)}} is called for every node 
> while populating the machines array.  There's no need to invoke the method if 
> {{corruptReplicas#numCorruptReplicas(block)}} returned 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10342) BlockManager#createLocatedBlocks should not check corrupt replicas if none are corrupt

2016-06-21 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated HDFS-10342:
---
Attachment: HDFS-10342.001.patch

The patch adds a check to ensure that {{isReplicaCorrupt}} is called only 
when {{numCorruptReplicas}} is non-zero.

> BlockManager#createLocatedBlocks should not check corrupt replicas if none 
> are corrupt
> --
>
> Key: HDFS-10342
> URL: https://issues.apache.org/jira/browse/HDFS-10342
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Kuhu Shukla
> Attachments: HDFS-10342.001.patch
>
>
> {{corruptReplicas#isReplicaCorrupt(block, node)}} is called for every node 
> while populating the machines array.  There's no need to invoke the method if 
> {{corruptReplicas#numCorruptReplicas(block)}} returned 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10550) DiskBalancer: fix issue of order dependency in iteration in ReportCommand test

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342581#comment-15342581
 ] 

Hadoop QA commented on HDFS-10550:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
20s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
11s {color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} HDFS-1312 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 9s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 21s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeMXBean |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestDecommissionWithStriped |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:c88012f |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812012/HDFS-10550-HDFS-1312.001.patch
 |
| JIRA Issue | HDFS-10550 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e6c9ef027c1f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-1312 / 3a0a329 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15849/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15849/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15849/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15849/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> DiskBalancer: fix issue of order dependency in iteration in ReportCommand test
> 

[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2016-06-21 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342578#comment-15342578
 ] 

Zhe Zhang commented on HDFS-7859:
-

Thanks!

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Xinwei Qin 
>  Labels: BB2015-05-TBR, hdfs-ec-3.0-must-do
> Attachments: HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.003.patch, 
> HDFS-7859.001.patch, HDFS-7859.002.patch, HDFS-7859.004.patch
>
>
> In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we 
> persist EC schemas in NameNode centrally and reliably, so that EC zones can 
> reference them by name efficiently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10441) libhdfs++: HA namenode support

2016-06-21 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-10441:
---
Attachment: HDFS-10441.HDFS-8707.005.patch

New patch:
-can take a nameservice in FileSystem::Connect
-got rid of distinct enum for server exception types
-prepend hdfs:// to namenode authorities to use URI class (see the sketch 
after this list)
-fix bug in default option: max retries defaults to -1, would get cast to 
uint64_t and get stuck doing retries
-rename FmtURI method
-added status strings to some error messages, still a few more to go
-follow the _locked naming convention
-read HA options from config and use them in the retry policy
-support for connection timeout during connection to an HA node: default to 
immediate failover
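
A minimal sketch of the "prepend hdfs://" item above, under the assumption 
that the URI parser rejects a bare authority such as nn1.example.com:8020; 
the helper name and shape here are illustrative, not the actual patch:
{code}
#include <string>

// Hypothetical helper: make a bare namenode authority parseable as a URI
// by prefixing the hdfs scheme when it is missing.
std::string NormalizeAuthority(const std::string &authority) {
  const std::string scheme = "hdfs://";
  if (authority.compare(0, scheme.size(), scheme) == 0) {
    return authority;           // already a full URI
  }
  return scheme + authority;    // e.g. "hdfs://nn1.example.com:8020"
}
{code}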

Outstanding work:
-still need to move stuff out of the hdfs_configuration class
-get rid of duplicate data structures in options (services, ha_enabled_, 
ha_namenodes_) and just keep Options::services.  This one is trickier than it 
sounds; to do it right we either need to resolve endpoints for the HA namenodes 
in filesystem, or pass the namenode/nameservices down to the RPC engine and do 
DNS lookup there (instead of twice).  It spans enough of the code that I spend 
a good amount of time resolving merge conflicts so I'd like to do these changes 
in another JIRA if others thing that is acceptable.  A proper refactor of this 
will fix a few issues:
1) better looking code
2) full support for multiple HA clusters
3) get rid of extra level of DNS lookup

-still some minor cleanup


> libhdfs++: HA namenode support
> --
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10441.HDFS-8707.000.patch, 
> HDFS-10441.HDFS-8707.002.patch, HDFS-10441.HDFS-8707.003.patch, 
> HDFS-10441.HDFS-8707.004.patch, HDFS-10441.HDFS-8707.005.patch, 
> HDFS-8707.HDFS-10441.001.patch
>
>
> If a cluster is HA enabled then do proper failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10543) hdfsRead read stops at block boundary

2016-06-21 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-10543:
---
Attachment: HDFS-10543.HDFS-8707.003.patch

Fix whitespace issue.

> hdfsRead read stops at block boundary
> -
>
> Key: HDFS-10543
> URL: https://issues.apache.org/jira/browse/HDFS-10543
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Xiaowei Zhu
>Assignee: James Clampffer
> Attachments: HDFS-10543.HDFS-8707.000.patch, 
> HDFS-10543.HDFS-8707.001.patch, HDFS-10543.HDFS-8707.002.patch, 
> HDFS-10543.HDFS-8707.003.patch
>
>
> Reproducer:
> char *buf2 = new char[file_info->mSize];
>   memset(buf2, 0, (size_t)file_info->mSize);
>   int ret = hdfsRead(fs, file, buf2, file_info->mSize);
>   delete [] buf2;
>   if(ret != file_info->mSize) {
> std::stringstream ss;
> ss << "tried to read " << file_info->mSize << " bytes. but read " << 
> ret << " bytes";
> ReportError(ss.str());
> hdfsCloseFile(fs, file);
> continue;
>   }
> When it runs with a file ~1.4GB large, it will return an error like "tried to 
> read 146890 bytes. but read 134217728 bytes". The HDFS cluster it runs 
> against has a block size of 134217728 bytes, so it seems hdfsRead stops at a 
> block boundary. This looks like a regression. We should add a retry to 
> continue reading across blocks for files with multiple blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10543) hdfsRead read stops at block boundary

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342502#comment-15342502
 ] 

Hadoop QA commented on HDFS-10543:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
56s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 8s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 57s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 6m 5s {color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_91 with JDK 
v1.8.0_91 generated 1 new + 29 unchanged - 0 fixed = 30 total (was 29) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 6m 6s {color} | 
{color:red} hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_101 with JDK 
v1.7.0_101 generated 1 new + 29 unchanged - 0 fixed = 30 total (was 29) {color} 
|
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 39s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 9s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 35s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812260/HDFS-10543.HDFS-8707.002.patch
 |
| JIRA Issue | HDFS-10543 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux bf918c426b67 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 71af408 |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| cc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15848/artifact/patchprocess/diff-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_91.txt
 |
| cc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15848/artifact/patchprocess/diff-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_101.txt
 |
| whitespace | 

[jira] [Commented] (HDFS-10553) DiskBalancer: Rename Tools/DiskBalancer class to Tools/DiskBalancerCLI

2016-06-21 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342490#comment-15342490
 ] 

Anu Engineer commented on HDFS-10553:
-

From the mail from [~szetszwo] that points out this issue:

- There are a few TODOs in the code.
- Tried the help command "hdfs diskbalancer -help plan".  There is a typo 
"wetolerate" in --thresholdPercentage.  Also, we should mention the unit for 
--bandwidth.
- We should avoid using the same class name, such as DiskBalancer, which is 
defined in both the datanode and tools packages.  It may be better to call the 
one in tools DiskBalancerCli.
- I still think that it is better to use weighted mean and weighted variance 
in the calculation; a sketch follows below.
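
For the weighted statistics suggestion, a small sketch of what a weighted mean 
and weighted variance over per-volume usage could look like, assuming each 
volume is weighted by its capacity (the method and inputs are illustrative, 
not the actual DiskBalancer API):

{code}
public class WeightedStats {
  // Weighted mean/variance of per-volume used ratios, weighting each
  // volume by its share of the total capacity (capacities assumed > 0).
  static double[] weightedMeanAndVariance(long[] capacity, long[] used) {
    double totalCap = 0;
    for (long c : capacity) {
      totalCap += c;
    }
    double mean = 0;
    for (int i = 0; i < capacity.length; i++) {
      mean += (capacity[i] / totalCap) * ((double) used[i] / capacity[i]);
    }
    double variance = 0;
    for (int i = 0; i < capacity.length; i++) {
      double ratio = (double) used[i] / capacity[i];
      variance += (capacity[i] / totalCap) * (ratio - mean) * (ratio - mean);
    }
    return new double[] {mean, variance};
  }
}
{code}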

> DiskBalancer: Rename Tools/DiskBalancer class to Tools/DiskBalancerCLI
> --
>
> Key: HDFS-10553
> URL: https://issues.apache.org/jira/browse/HDFS-10553
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Fix For: HDFS-1312
>
>
> Rename the Tools/DiskBalancer class, since we already have the 
> server/DiskBalancer class; having two classes with the same name is confusing 
> when reading code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10473) Allow only suitable storage policies to be set on striped files

2016-06-21 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-10473:
---
Attachment: HDFS-10473-04.patch

[~jingzhao], I have updated the patch as per the comments. Please review it.

> Allow only suitable storage policies to be set on striped files
> ---
>
> Key: HDFS-10473
> URL: https://issues.apache.org/jira/browse/HDFS-10473
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10473-01.patch, HDFS-10473-02.patch, 
> HDFS-10473-03.patch, HDFS-10473-04.patch
>
>
> Currently some of the existing storage policies are not suitable for striped 
> layout files.
> This JIRA proposes to reject setting a storage policy on striped files.
> Another thought is to allow only suitable storage policies like ALL_SSD.
> Since the major use case of EC is cold data, this may not be of high 
> importance, so I am OK with rejecting storage policies on striped files at 
> this stage. Please suggest if others have thoughts on this.
> Thanks [~zhz] for the offline discussion on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-06-21 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated HDFS-9890:

Status: Patch Available  (was: Reopened)

> libhdfs++: Add test suite to simulate network issues
> 
>
> Key: HDFS-9890
> URL: https://issues.apache.org/jira/browse/HDFS-9890
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-9890.HDFS-8707.000.patch, 
> HDFS-9890.HDFS-8707.001.patch, HDFS-9890.HDFS-8707.002.patch, 
> HDFS-9890.HDFS-8707.003.patch, HDFS-9890.HDFS-8707.004.patch, 
> HDFS-9890.HDFS-8707.005.patch, HDFS-9890.HDFS-8707.006.patch, 
> HDFS-9890.HDFS-8707.007.patch, HDFS-9890.HDFS-8707.008.patch, 
> HDFS-9890.HDFS-8707.009.patch, hs_err_pid26832.log, hs_err_pid4944.log
>
>
> I propose adding a test suite to simulate various network issues/failures in 
> order to get good test coverage on some of the retry paths that aren't easy 
> to hit in mock unit tests.
> At the moment the only things that hit the retry paths are the gmock unit 
> tests.  The gmock tests are only as good as their mock implementations, which 
> do a great job of simulating protocol correctness but not more complex 
> interactions.  They also can't really simulate the types of lock contention 
> and subtle memory stomps that show up while doing hundreds or thousands of 
> concurrent reads.  We should add a new minidfscluster test that focuses on 
> heavy read/seek load and then randomly convert return codes from network 
> functions into errors.
> List of things to simulate (while heavily loaded), roughly in order of how 
> badly I think they need to be tested at the moment:
> -Rpc connection disconnect
> -Rpc connection slowed down enough to cause a timeout and trigger retry
> -DN connection disconnect



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10552) DiskBalancer "-query" results in NPE if no plan for the node

2016-06-21 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10552:

Attachment: HDFS-10552-HDFS-1312.003.patch

I have added a new test case that tests this code from 
the point of view of the CLI. We do already have a test for this 
case:

{code}
  @Test
  public void testQueryPlanWithoutSubmit() throws Exception {
RpcTestHelper rpcTestHelper = new RpcTestHelper().invoke();
DataNode dataNode = rpcTestHelper.getDataNode();

DiskBalancerWorkStatus status = dataNode.queryDiskBalancerPlan();
Assert.assertTrue(status.getResult() == NO_PLAN);
  }
  {code}

But unfortunately the above test did not catch the protobuf encoding error. 
I could have asserted that this string is not null in the same test, but that 
still would not be a full end-to-end test case, so I have opted to write a test 
case from the CLI point of view. It adds approximately 2 seconds to the test 
run, since we are starting up a new MiniDFSCluster.
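
For reference, a sketch of the in-process assertion mentioned above, assuming 
DiskBalancerWorkStatus exposes the plan ID through a getter (getPlanID() here 
is an assumption); the CLI test in the patch goes further and exercises the 
full RPC path:

{code}
@Test
public void testQueryPlanWithoutSubmitChecksPlanId() throws Exception {
  RpcTestHelper rpcTestHelper = new RpcTestHelper().invoke();
  DataNode dataNode = rpcTestHelper.getDataNode();

  DiskBalancerWorkStatus status = dataNode.queryDiskBalancerPlan();
  Assert.assertTrue(status.getResult() == NO_PLAN);
  // The check the original test was missing: a null plan ID is what
  // makes the protobuf setPlanID() call throw the NPE over RPC.
  Assert.assertNotNull(status.getPlanID());
}
{code}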
 
  



> DiskBalancer "-query" results in NPE if no plan for the node
> 
>
> Key: HDFS-10552
> URL: https://issues.apache.org/jira/browse/HDFS-10552
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10552-HDFS-1312.001.patch, 
> HDFS-10552-HDFS-1312.002.patch, HDFS-10552-HDFS-1312.003.patch
>
>
> {code}
> 16/06/20 11:50:16 INFO command.Command: Executing "query plan" command.
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$QueryPlanStatusResponseProto$Builder.setPlanID(ClientDatanodeProtocolProtos.java:12782)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.queryDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:340)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17513)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10550) DiskBalancer: fix issue of order dependency in iteration in ReportCommand test

2016-06-21 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342440#comment-15342440
 ] 

Anu Engineer commented on HDFS-10550:
-

I have requested a manual build for this:

https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-HDFS-Build/15849/console


> DiskBalancer: fix issue of order dependency in iteration in ReportCommand test
> --
>
> Key: HDFS-10550
> URL: https://issues.apache.org/jira/browse/HDFS-10550
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10550-HDFS-1312.000.patch, 
> HDFS-10550-HDFS-1312.001.patch
>
>
> TestDiskBalancerCommand#testReportNode assumed the order of the result 
> entries returned; however, DiskBalancerDataNode#volumeSets and 
> DiskBalancerVolumeSet#volumes give no guarantee on the order since they are 
> initialized as HashMaps. The test should be fixed to not assume the order.
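
For illustration, a minimal sketch of the order-independent style of check the 
fix implies (the volume names are made up):

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class OrderIndependentCheck {
  public static void main(String[] args) {
    // Entries backed by a HashMap can come back in any order, so
    // compare them as a set rather than position by position.
    List<String> reported = Arrays.asList("volume-b", "volume-a");
    Set<String> expected = new HashSet<>(Arrays.asList("volume-a", "volume-b"));
    System.out.println(new HashSet<>(reported).equals(expected)); // true
  }
}
{code}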



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10460) Erasure Coding: Recompute block checksum for a particular range less than file size on the fly by reconstructing missed block

2016-06-21 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10460:

Attachment: HDFS-10460-02.patch

> Erasure Coding: Recompute block checksum for a particular range less than 
> file size on the fly by reconstructing missed block
> -
>
> Key: HDFS-10460
> URL: https://issues.apache.org/jira/browse/HDFS-10460
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10460-00.patch, HDFS-10460-01.patch, 
> HDFS-10460-02.patch
>
>
> This jira is HDFS-9833 follow-on task to address reconstructing block and 
> then recalculating block checksum for a particular range query.
> For example,
> {code}
> // create a file 'stripedFile1' with fileSize = cellSize * numDataBlocks = 
> 65536 * 6 = 393216
> FileChecksum stripedFileChecksum = getFileChecksum(stripedFile1, 10, true);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10553) DiskBalancer: Rename Tools/DiskBalancer class to Tools/DiskBalancerCLI

2016-06-21 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-10553:
---

 Summary: DiskBalancer: Rename Tools/DiskBalancer class to 
Tools/DiskBalancerCLI
 Key: HDFS-10553
 URL: https://issues.apache.org/jira/browse/HDFS-10553
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer & mover
Affects Versions: HDFS-1312
Reporter: Anu Engineer
Assignee: Anu Engineer
Priority: Minor
 Fix For: HDFS-1312


Rename the Tools/DiskBalancer class, since we already have the 
server/DiskBalancer class; having two classes with the same name is confusing 
when reading code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10460) Erasure Coding: Recompute block checksum for a particular range less than file size on the fly by reconstructing missed block

2016-06-21 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342418#comment-15342418
 ] 

Rakesh R commented on HDFS-10460:
-

Makes sense to me. I've tried to address the comments and uploaded a new 
patch; kindly review it again.

> Erasure Coding: Recompute block checksum for a particular range less than 
> file size on the fly by reconstructing missed block
> -
>
> Key: HDFS-10460
> URL: https://issues.apache.org/jira/browse/HDFS-10460
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10460-00.patch, HDFS-10460-01.patch, 
> HDFS-10460-02.patch
>
>
> This jira is HDFS-9833 follow-on task to address reconstructing block and 
> then recalculating block checksum for a particular range query.
> For example,
> {code}
> // create a file 'stripedFile1' with fileSize = cellSize * numDataBlocks = 
> 65536 * 6 = 393216
> FileChecksum stripedFileChecksum = getFileChecksum(stripedFile1, 10, true);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10543) hdfsRead read stops at block boundary

2016-06-21 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-10543:
---
Attachment: HDFS-10543.HDFS-8707.002.patch

HDFS-10543.HDFS-8707.002.patch fixes the issue where, after moving the retry 
logic to PositionRead, only one block was read again.
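
For illustration, the required behavior is the usual read-until-satisfied 
loop; a minimal Java sketch of that pattern over a plain InputStream (the 
actual fix lives in the libhdfs++ C++ read path):

{code}
import java.io.IOException;
import java.io.InputStream;

public final class ReadFully {
  // Keep issuing reads until 'len' bytes have arrived or EOF is hit,
  // instead of returning after the first (possibly partial) read.
  static int readFully(InputStream in, byte[] buf, int off, int len)
      throws IOException {
    int total = 0;
    while (total < len) {
      int n = in.read(buf, off + total, len - total);
      if (n < 0) {
        break; // EOF before 'len' bytes were available
      }
      total += n;
    }
    return total;
  }
}
{code}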

> hdfsRead read stops at block boundary
> -
>
> Key: HDFS-10543
> URL: https://issues.apache.org/jira/browse/HDFS-10543
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Xiaowei Zhu
>Assignee: James Clampffer
> Attachments: HDFS-10543.HDFS-8707.000.patch, 
> HDFS-10543.HDFS-8707.001.patch, HDFS-10543.HDFS-8707.002.patch
>
>
> Reproducer:
> char *buf2 = new char[file_info->mSize];
>   memset(buf2, 0, (size_t)file_info->mSize);
>   int ret = hdfsRead(fs, file, buf2, file_info->mSize);
>   delete [] buf2;
>   if(ret != file_info->mSize) {
> std::stringstream ss;
> ss << "tried to read " << file_info->mSize << " bytes. but read " << 
> ret << " bytes";
> ReportError(ss.str());
> hdfsCloseFile(fs, file);
> continue;
>   }
> When it runs with a file ~1.4GB large, it will return an error like "tried to 
> read 146890 bytes. but read 134217728 bytes". The HDFS cluster it runs 
> against has a block size of 134217728 bytes, so it seems hdfsRead stops at a 
> block boundary. This looks like a regression. We should add a retry to 
> continue reading across blocks for files with multiple blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7597) DNs should not open new NN connections when webhdfs clients seek

2016-06-21 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-7597:

Assignee: Daryn Sharp

> DNs should not open new NN connections when webhdfs clients seek
> 
>
> Key: HDFS-7597
> URL: https://issues.apache.org/jira/browse/HDFS-7597
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7597.01.patch, HDFS-7597.patch, HDFS-7597.patch, 
> HDFS-7597.patch
>
>
> Webhdfs seeks involve closing the current connection, and reissuing a new 
> open request with the new offset.  The RPC layer caches connections so the DN 
> keeps a lingering connection open to the NN.  Connection caching is in part 
> based on UGI.  Although the client used the same token for the new offset 
> request, the UGI is different, which forces the DN to open another unnecessary 
> connection to the NN.
> A job that performs many seeks will easily crash the NN due to fd exhaustion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7959) WebHdfs logging is missing on Datanode

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342211#comment-15342211
 ] 

Hadoop QA commented on HDFS-7959:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
19s {color} | {color:green} root: The patch generated 0 new + 13 unchanged - 1 
fixed = 13 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 51s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 8s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 105m 4s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812210/HDFS-7959.2.trunk.patch
 |
| JIRA Issue | HDFS-7959 |
| Optional Tests |  asflicense  mvnsite  unit  compile  javac  javadoc  
mvninstall  findbugs  checkstyle  |
| uname | Linux 1e5c2e9133b2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e15cd43 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 

[jira] [Commented] (HDFS-8668) Erasure Coding: revisit buffer used for encoding and decoding.

2016-06-21 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342195#comment-15342195
 ] 

Kai Zheng commented on HDFS-8668:
-

I got it. Yeah, using the native coder at the full performance provided by 
HADOOP-11540 will need this work and also some other related tasks. These 
should be in for the 3.0 release; otherwise the performance is badly impacted. 
Will see if it's doable to move on this even before the ISA-L coder is in.
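
For illustration, the heap-versus-direct distinction at issue, in a minimal 
sketch; a direct buffer lets native code such as an ISA-L coder read the 
memory in place instead of copying it out of a Java byte[]:

{code}
import java.nio.ByteBuffer;

public class BufferChoice {
  public static void main(String[] args) {
    // Heap buffer: backed by a Java byte[]; a JNI coder must copy it out.
    ByteBuffer heap = ByteBuffer.allocate(64 * 1024);
    // Direct buffer: off-heap memory a native coder can use in place.
    ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024);
    System.out.println(heap.isDirect());   // false
    System.out.println(direct.isDirect()); // true
  }
}
{code}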

> Erasure Coding: revisit buffer used for encoding and decoding.
> --
>
> Key: HDFS-8668
> URL: https://issues.apache.org/jira/browse/HDFS-8668
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Yi Liu
>Assignee: Kai Zheng
> Attachments: HDFS-8668-v1.patch, HDFS-8668-v2.patch
>
>
> For encoding and decoding buffers, currently some places use a Java heap 
> ByteBuffer, some use a direct ByteBuffer, and some use a Java byte array.  If 
> the coder implementation is native, we should use a direct ByteBuffer. This 
> jira is to revisit all encoding/decoding buffers and improve them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10552) DiskBalancer "-query" results in NPE if no plan for the node

2016-06-21 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342179#comment-15342179
 ] 

Lei (Eddy) Xu commented on HDFS-10552:
--

[~anu] Can we add a test to enforce the behavior ?

Thanks.

> DiskBalancer "-query" results in NPE if no plan for the node
> 
>
> Key: HDFS-10552
> URL: https://issues.apache.org/jira/browse/HDFS-10552
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10552-HDFS-1312.001.patch, 
> HDFS-10552-HDFS-1312.002.patch
>
>
> {code}
> 16/06/20 11:50:16 INFO command.Command: Executing "query plan" command.
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$QueryPlanStatusResponseProto$Builder.setPlanID(ClientDatanodeProtocolProtos.java:12782)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.queryDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:340)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17513)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10460) Erasure Coding: Recompute block checksum for a particular range less than file size on the fly by reconstructing missed block

2016-06-21 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342175#comment-15342175
 ] 

Kai Zheng commented on HDFS-10460:
--

bq. can continue setting block.setNumBytes(getRemaining()); logic in Replicated 
and Striped block. Then will consider reconstruction as a special case and will 
create reconBlockGroup object with actualNumBytes, like I'm doing in the 
current patch. 
Yeah, I understand your opinion. I thought both approaches should work because 
they're essentially the same; you're looking at this from another perspective 
and want this to be consistent with the replicated block, which sounds good. On 
the other hand, the checksum for a striped block group is a newly added 
function and protocol, and for it we need to pass the exact block group object 
to the datanode side for reconstruction, as we did for the erasure coding 
worker; to do the checksumming on the datanode side, we also need to pass the 
requestedLength. Doing this would make the related code and protocol look more 
natural and readable. IMO, {{requestedLength}} is easier to understand than 
{{actualNumBytes}}; for the latter you would have to explain it, and you would 
also need to set numBytes back and forth on the block group object.
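
To make the suggested shape concrete, a hypothetical sketch (the type and 
field names are illustrative, not the actual HDFS protocol):

{code}
// Hypothetical: keep the block group's actual length intact and carry the
// requested checksum range explicitly, instead of overloading setNumBytes
// for both meanings.
final class BlockGroupChecksumRequest {
  final long actualNumBytes;  // real block group length, for reconstruction
  final long requestedLength; // how many bytes the checksum should cover

  BlockGroupChecksumRequest(long actualNumBytes, long requestedLength) {
    this.actualNumBytes = actualNumBytes;
    this.requestedLength = requestedLength;
  }
}
{code}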

> Erasure Coding: Recompute block checksum for a particular range less than 
> file size on the fly by reconstructing missed block
> -
>
> Key: HDFS-10460
> URL: https://issues.apache.org/jira/browse/HDFS-10460
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10460-00.patch, HDFS-10460-01.patch
>
>
> This jira is HDFS-9833 follow-on task to address reconstructing block and 
> then recalculating block checksum for a particular range query.
> For example,
> {code}
> // create a file 'stripedFile1' with fileSize = cellSize * numDataBlocks = 
> 65536 * 6 = 393216
> FileChecksum stripedFileChecksum = getFileChecksum(stripedFile1, 10, true);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8901) Use ByteBuffer in striping positional read

2016-06-21 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8901:

Status: Patch Available  (was: Open)

> Use ByteBuffer in striping positional read
> --
>
> Key: HDFS-8901
> URL: https://issues.apache.org/jira/browse/HDFS-8901
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-8901-v10.patch, HDFS-8901-v2.patch, 
> HDFS-8901-v3.patch, HDFS-8901-v4.patch, HDFS-8901-v5.patch, 
> HDFS-8901-v6.patch, HDFS-8901-v7.patch, HDFS-8901-v8.patch, 
> HDFS-8901-v9.patch, initial-poc.patch
>
>
> The native erasure coder prefers a direct ByteBuffer for performance 
> reasons. To prepare for it, this change uses ByteBuffer throughout the code 
> implementing striped positional read. It also avoids unnecessary data copying 
> between striping read chunk buffers and decode input buffers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8901) Use ByteBuffer in striping positional read

2016-06-21 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8901:

Status: Open  (was: Patch Available)

> Use ByteBuffer in striping positional read
> --
>
> Key: HDFS-8901
> URL: https://issues.apache.org/jira/browse/HDFS-8901
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HDFS-8901-v10.patch, HDFS-8901-v2.patch, 
> HDFS-8901-v3.patch, HDFS-8901-v4.patch, HDFS-8901-v5.patch, 
> HDFS-8901-v6.patch, HDFS-8901-v7.patch, HDFS-8901-v8.patch, 
> HDFS-8901-v9.patch, initial-poc.patch
>
>
> The native erasure coder prefers a direct ByteBuffer for performance 
> reasons. To prepare for it, this change uses ByteBuffer throughout the code 
> implementing striped positional read. It also avoids unnecessary data copying 
> between striping read chunk buffers and decode input buffers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile

2016-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342084#comment-15342084
 ] 

Hadoop QA commented on HDFS-10534:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 34s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 
new + 585 unchanged - 2 fixed = 587 total (was 587) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 15s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 41s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812191/HDFS-10534.04.patch |
| JIRA Issue | HDFS-10534 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 4248bfa141a5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e15cd43 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15846/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15846/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15846/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  

[jira] [Commented] (HDFS-10552) DiskBalancer "-query" results in NPE if no plan for the node

2016-06-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15342066#comment-15342066
 ] 

Arpit Agarwal commented on HDFS-10552:
--

+1 for the v2 patch pending Jenkins.

> DiskBalancer "-query" results in NPE if no plan for the node
> 
>
> Key: HDFS-10552
> URL: https://issues.apache.org/jira/browse/HDFS-10552
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-1312
>Reporter: Lei (Eddy) Xu
>Assignee: Anu Engineer
>Priority: Critical
> Attachments: HDFS-10552-HDFS-1312.001.patch, 
> HDFS-10552-HDFS-1312.002.patch
>
>
> {code}
> 16/06/20 11:50:16 INFO command.Command: Executing "query plan" command.
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$QueryPlanStatusResponseProto$Builder.setPlanID(ClientDatanodeProtocolProtos.java:12782)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.queryDiskBalancerPlan(ClientDatanodeProtocolServerSideTranslatorPB.java:340)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:17513)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10460) Erasure Coding: Recompute block checksum for a particular range less than file size on the fly by reconstructing missed block

2016-06-21 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341960#comment-15341960
 ] 

Rakesh R edited comment on HDFS-10460 at 6/21/16 3:24 PM:
--

Thanks [~drankye] for the detailed explanation. I have analysed this approach; 
I can see the logic is a little tricky.

We have two cases:

case-1) Say all DNs are working fine with no failure. Calculating the checksum 
needs {{requestedNumBytes}}, which is used to build the exact block length from 
the {{blockGroup}}. At the beginning, {{block.setNumBytes(getRemaining())}} 
sets numBytes to the requestedNumBytes, which will in turn be passed to the 
logic below to construct the block with the required number of bytes. If we 
leave the numBytes unchanged, this logic will return the wrong number of bytes 
for reading the checksum data.
{code}
ExtendedBlock block = StripedBlockUtil.constructInternalBlock(
  blockGroup, ecPolicy.getCellSize(), numDataUnits, idx);
{code}

case-2) With a few DN failures. Reconstructing the block needs the 
{{actualNumBytes}}, after which the requestedNumBytes of checksum data are 
recalculated.
{code}
  ExtendedBlock reconBlockGroup = new ExtendedBlock(blockGroup);
  reconBlockGroup.setNumBytes(actualNumBytes);
{code}

What I'm trying to explain is:
- in case-1, it needs the {{blockGroup}} object with {{requestedNumBytes}}
- in case-2, it needs the {{reconBlockGroup}} object with {{actualNumBytes}}
So either way there is a need for a dummy object with either requestedNumBytes 
or actualNumBytes, right?

IMHO, we can keep the {{block.setNumBytes(getRemaining());}} logic in the 
Replicated and Striped blocks, then treat reconstruction as a special case and 
create the {{reconBlockGroup}} object with actualNumBytes, like I'm doing in 
the current patch. What's your opinion?


was (Author: rakeshr):
Thanks [~drankye] for the detailed explanation. I have analysed this approach; 
I can see the logic is a little tricky.

We have two cases:

case-1) Say all DNs are working fine with no failure. Calculating the checksum 
needs {{requestedNumBytes}}, which is used to build the exact block length from 
the {{blockGroup}}. At the beginning, {{block.setNumBytes(getRemaining())}} 
sets numBytes to the requestedNumBytes, which will in turn be passed to the 
logic below to construct the block with the required number of bytes. If we 
leave the numBytes unchanged, this logic will return the wrong number of bytes 
for reading the checksum data.
{code}
ExtendedBlock block = StripedBlockUtil.constructInternalBlock(
  blockGroup, ecPolicy.getCellSize(), numDataUnits, idx);
{code}

case-2) With a few DN failures. Reconstructing the block needs the 
{{actualNumBytes}}, after which the requestedNumBytes of checksum data are 
recalculated.
{code}
  ExtendedBlock reconBlockGroup = new ExtendedBlock(blockGroup);
  reconBlockGroup.setNumBytes(actualNumBytes);
{code}

What I'm trying to explain is:
- in case-1, it needs the {{blockGroup}} object with {{requestedNumBytes}}
- in case-2, it needs the {{reconBlockGroup}} object with {{requestedNumBytes}}
So either way there is a need for a dummy object with requestedNumBytes.

IMHO, we can keep the {{block.setNumBytes(getRemaining());}} logic in the 
Replicated and Striped blocks, then treat reconstruction as a special case and 
create the {{reconBlockGroup}} object with actualNumBytes, like I'm doing in 
the current patch. What's your opinion?

> Erasure Coding: Recompute block checksum for a particular range less than 
> file size on the fly by reconstructing missed block
> -
>
> Key: HDFS-10460
> URL: https://issues.apache.org/jira/browse/HDFS-10460
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10460-00.patch, HDFS-10460-01.patch
>
>
> This jira is HDFS-9833 follow-on task to address reconstructing block and 
> then recalculating block checksum for a particular range query.
> For example,
> {code}
> // create a file 'stripedFile1' with fileSize = cellSize * numDataBlocks = 
> 65536 * 6 = 393216
> FileChecksum stripedFileChecksum = getFileChecksum(stripedFile1, 10, true);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10460) Erasure Coding: Recompute block checksum for a particular range less than file size on the fly by reconstructing missed block

2016-06-21 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341965#comment-15341965
 ] 

Rakesh R commented on HDFS-10460:
-

bq. In the following change, requestedNumBytes should be actualNumBytes, right.
Oops, you are right. I will take care of it in the next patch.

> Erasure Coding: Recompute block checksum for a particular range less than 
> file size on the fly by reconstructing missed block
> -
>
> Key: HDFS-10460
> URL: https://issues.apache.org/jira/browse/HDFS-10460
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10460-00.patch, HDFS-10460-01.patch
>
>
> This jira is HDFS-9833 follow-on task to address reconstructing block and 
> then recalculating block checksum for a particular range query.
> For example,
> {code}
> // create a file 'stripedFile1' with fileSize = cellSize * numDataBlocks = 
> 65536 * 6 = 393216
> FileChecksum stripedFileChecksum = getFileChecksum(stripedFile1, 10, true);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-10460) Erasure Coding: Recompute block checksum for a particular range less than file size on the fly by reconstructing missed block

2016-06-21 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10460:

Comment: was deleted

(was: oops, you are right. Will take care in next patch.)

> Erasure Coding: Recompute block checksum for a particular range less than 
> file size on the fly by reconstructing missed block
> -
>
> Key: HDFS-10460
> URL: https://issues.apache.org/jira/browse/HDFS-10460
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10460-00.patch, HDFS-10460-01.patch
>
>
> This jira is HDFS-9833 follow-on task to address reconstructing block and 
> then recalculating block checksum for a particular range query.
> For example,
> {code}
> // create a file 'stripedFile1' with fileSize = cellSize * numDataBlocks = 
> 65536 * 6 = 393216
> FileChecksum stripedFileChecksum = getFileChecksum(stripedFile1, 10, true);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7959) WebHdfs logging is missing on Datanode

2016-06-21 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-7959:
-
Attachment: HDFS-7959.2.trunk.patch
HDFS-7959.2.branch-2.patch

The new patches address the review comments.

> WebHdfs logging is missing on Datanode
> --
>
> Key: HDFS-7959
> URL: https://issues.apache.org/jira/browse/HDFS-7959
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7959.1.branch-2.patch, HDFS-7959.1.trunk.patch, 
> HDFS-7959.2.branch-2.patch, HDFS-7959.2.trunk.patch, 
> HDFS-7959.branch-2.patch, HDFS-7959.patch, HDFS-7959.patch, HDFS-7959.patch, 
> HDFS-7959.trunk.patch
>
>
> After the conversion to Netty, webhdfs requests are not logged on datanodes. 
> The existing Jetty log only logs the non-webhdfs requests that come through 
> the internal proxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10460) Erasure Coding: Recompute block checksum for a particular range less than file size on the fly by reconstructing missed block

2016-06-21 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341962#comment-15341962
 ] 

Rakesh R commented on HDFS-10460:
-

Oops, you are right. I will take care of it in the next patch.

> Erasure Coding: Recompute block checksum for a particular range less than 
> file size on the fly by reconstructing missed block
> -
>
> Key: HDFS-10460
> URL: https://issues.apache.org/jira/browse/HDFS-10460
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10460-00.patch, HDFS-10460-01.patch
>
>
> This jira is HDFS-9833 follow-on task to address reconstructing block and 
> then recalculating block checksum for a particular range query.
> For example,
> {code}
> // create a file 'stripedFile1' with fileSize = cellSize * numDataBlocks = 
> 65536 * 6 = 393216
> FileChecksum stripedFileChecksum = getFileChecksum(stripedFile1, 10, true);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10460) Erasure Coding: Recompute block checksum for a particular range less than file size on the fly by reconstructing missed block

2016-06-21 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341960#comment-15341960
 ] 

Rakesh R commented on HDFS-10460:
-

Thanks [~drankye] for the detailed explanation. I have analysed this approach; 
I can see the logic is a little tricky.

We have two cases:

case-1) Say all DNs are working fine with no failure. Calculating the checksum 
needs {{requestedNumBytes}}, which is used to build the exact block length from 
the {{blockGroup}}. At the beginning, {{block.setNumBytes(getRemaining())}} 
sets numBytes to the requestedNumBytes, which will in turn be passed to the 
logic below to construct the block with the required number of bytes. If we 
leave the numBytes unchanged, this logic will return the wrong number of bytes 
for reading the checksum data.
{code}
ExtendedBlock block = StripedBlockUtil.constructInternalBlock(
  blockGroup, ecPolicy.getCellSize(), numDataUnits, idx);
{code}

case-2) With a few DN failures. Reconstructing the block needs the 
{{actualNumBytes}}, after which the requestedNumBytes of checksum data are 
recalculated.
{code}
  ExtendedBlock reconBlockGroup = new ExtendedBlock(blockGroup);
  reconBlockGroup.setNumBytes(actualNumBytes);
{code}

What I'm trying to explain is:
- in case-1, it needs the {{blockGroup}} object with {{requestedNumBytes}}
- in case-2, it needs the {{reconBlockGroup}} object with {{requestedNumBytes}}
So either way there is a need for a dummy object with requestedNumBytes.

IMHO, we can keep the {{block.setNumBytes(getRemaining());}} logic in the 
Replicated and Striped blocks, then treat reconstruction as a special case and 
create the {{reconBlockGroup}} object with actualNumBytes, like I'm doing in 
the current patch. What's your opinion?

> Erasure Coding: Recompute block checksum for a particular range less than 
> file size on the fly by reconstructing missed block
> -
>
> Key: HDFS-10460
> URL: https://issues.apache.org/jira/browse/HDFS-10460
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10460-00.patch, HDFS-10460-01.patch
>
>
> This jira is a follow-on task of HDFS-9833 to address reconstructing a block 
> and then recalculating the block checksum for a particular range query.
> For example,
> {code}
> // create a file 'stripedFile1' with fileSize = cellSize * numDataBlocks = 
> 65536 * 6 = 393216
> FileChecksum stripedFileChecksum = getFileChecksum(stripedFile1, 10, true);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-06-21 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-9890:
--
Attachment: HDFS-9890.HDFS-8707.009.patch

HDFS-9890.HDFS-8707.009.patch uses a new flag LIBHDFSPP_SIMULATE_ERROR_DISABLED.
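
For context, a minimal sketch of the kind of kill switch such a flag provides, 
gating random error injection (Java here purely for illustration, since 
libhdfs++ is C++; reading the flag from the environment, and every name except 
the flag itself, are assumptions):
{code}
import java.util.Random;

final class FaultInjector {
    // error simulation stays on unless the kill-switch flag is set
    private static final boolean DISABLED =
        System.getenv("LIBHDFSPP_SIMULATE_ERROR_DISABLED") != null;
    private static final Random RAND = new Random();

    /** Returns the real status, or -1 to simulate a failed network call. */
    static int maybeFail(int realStatus, double errorRate) {
        if (!DISABLED && realStatus == 0 && RAND.nextDouble() < errorRate) {
            return -1; // inject an error on an otherwise-successful call
        }
        return realStatus;
    }
}
{code}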

> libhdfs++: Add test suite to simulate network issues
> 
>
> Key: HDFS-9890
> URL: https://issues.apache.org/jira/browse/HDFS-9890
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-9890.HDFS-8707.000.patch, 
> HDFS-9890.HDFS-8707.001.patch, HDFS-9890.HDFS-8707.002.patch, 
> HDFS-9890.HDFS-8707.003.patch, HDFS-9890.HDFS-8707.004.patch, 
> HDFS-9890.HDFS-8707.005.patch, HDFS-9890.HDFS-8707.006.patch, 
> HDFS-9890.HDFS-8707.007.patch, HDFS-9890.HDFS-8707.008.patch, 
> HDFS-9890.HDFS-8707.009.patch, hs_err_pid26832.log, hs_err_pid4944.log
>
>
> I propose adding a test suite to simulate various network issues/failures in 
> order to get good test coverage on some of the retry paths that aren't easy 
> to hit in mock unit tests.
> At the moment the only things that hit the retry paths are the gmock unit 
> tests. The gmock tests are only as good as their mock implementations, which 
> do a great job of simulating protocol correctness but not more complex 
> interactions. They also can't really simulate the types of lock contention 
> and subtle memory stomps that show up while doing hundreds or thousands of 
> concurrent reads. We should add a new minidfscluster test that focuses on 
> heavy read/seek load and then randomly converts return codes from network 
> functions into errors.
> List of things to simulate (while heavily loaded), roughly in order of how 
> badly I think they need to be tested at the moment:
> - Rpc connection disconnect
> - Rpc connection slowed down enough to cause a timeout and trigger retry
> - DN connection disconnect



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10534) NameNode WebUI should display DataNode usage rate with a certain percentile

2016-06-21 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-10534:
--
Attachment: HDFS-10534.04.patch

> NameNode WebUI should display DataNode usage rate with a certain percentile
> ---
>
> Key: HDFS-10534
> URL: https://issues.apache.org/jira/browse/HDFS-10534
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, ui
>Reporter: Zhe Zhang
>Assignee: Kai Sasaki
> Attachments: HDFS-10534.01.patch, HDFS-10534.02.patch, 
> HDFS-10534.03.patch, HDFS-10534.04.patch
>
>
> In addition to *Min/Median/Max*, another meaningful metric for cluster 
> balance is the DN usage rate at a certain percentile (e.g. 90 or 95). We 
> should add a config option, and another field on the NN WebUI, to display 
> this (a sketch of the percentile computation follows).
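
For illustration, a self-contained sketch of the nearest-rank percentile 
computation this would involve (hypothetical names, not the patch code):
{code}
import java.util.Arrays;

final class UsagePercentile {
    /** Nearest-rank percentile of DN usage rates; p is in (0, 100]. */
    static double percentile(double[] usages, double p) {
        double[] sorted = usages.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(0, rank - 1)];
    }
}
// e.g. percentile(new double[]{10, 20, 80, 90, 95}, 90) returns 95
{code}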



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2016-06-21 Thread Xinwei Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341759#comment-15341759
 ] 

Xinwei Qin  commented on HDFS-7859:
---

[~zhz], it's better to have this in 3.0; rebasing and perfecting this will be 
done ASAP this week.

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Xinwei Qin 
>  Labels: BB2015-05-TBR, hdfs-ec-3.0-must-do
> Attachments: HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.003.patch, 
> HDFS-7859.001.patch, HDFS-7859.002.patch, HDFS-7859.004.patch
>
>
> In a meetup discussion with [~zhz] and [~jingzhao], it was suggested that we 
> persist EC schemas in the NameNode centrally and reliably, so that EC zones 
> can reference them by name efficiently (see the sketch below).
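
As a rough sketch of the name-keyed lookup this enables (illustrative only; 
actual persistence would go through the NameNode's edit log and fsimage, and 
these names are made up):
{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class ECPolicyRegistry {
    // schemas kept centrally in the NN; EC zones reference them by name
    private final Map<String, String> schemasByName = new ConcurrentHashMap<>();

    void addSchema(String name, String schema) {
        schemasByName.put(name, schema);
    }

    String lookup(String name) {
        return schemasByName.get(name); // cheap by-name reference for zones
    }
}
{code}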



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7597) DNs should not open new NN connections when webhdfs clients seek

2016-06-21 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341751#comment-15341751
 ] 

Daryn Sharp commented on HDFS-7597:
---

+1  Thanks for fixing the tests!

> DNs should not open new NN connections when webhdfs clients seek
> 
>
> Key: HDFS-7597
> URL: https://issues.apache.org/jira/browse/HDFS-7597
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7597.01.patch, HDFS-7597.patch, HDFS-7597.patch, 
> HDFS-7597.patch
>
>
> Webhdfs seeks involve closing the current connection and reissuing a new 
> open request with the new offset. The RPC layer caches connections, so the DN 
> keeps a lingering connection open to the NN. Connection caching is based in 
> part on the UGI. Although the client used the same token for the new offset 
> request, the UGI is different, which forces the DN to open another 
> unnecessary connection to the NN (see the sketch below).
> A job that performs many seeks will easily crash the NN due to fd exhaustion.
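
To illustrate the caching behavior, a self-contained model (not Hadoop's 
actual o.a.h.ipc.Client code, whose cache key also includes the server address 
and protocol; UserGroupInformation.equals() compares the underlying subjects 
by reference, which the IdentityHashMap mimics):
{code}
import java.util.IdentityHashMap;
import java.util.Map;

final class ConnCacheDemo {
    // simplified: one cached connection per UGI *instance*
    static final Map<Object, String> CACHE = new IdentityHashMap<>();

    static String getConnection(Object ugi) {
        return CACHE.computeIfAbsent(ugi, k -> "conn-" + CACHE.size());
    }

    public static void main(String[] args) {
        Object ugi1 = new Object(); // UGI from the original open
        Object ugi2 = new Object(); // same user/token, but a fresh UGI
        System.out.println(getConnection(ugi1)); // conn-0
        System.out.println(getConnection(ugi2)); // conn-1 -- cache miss
    }
}
{code}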



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9513) DataNodeManager#getDataNodeStorageInfos not backward compatibility

2016-06-21 Thread DENG FEI (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15341692#comment-15341692
 ] 

DENG FEI commented on HDFS-9513:


[~szetszwo], please do a review if you have time; it has been running on our 
cluster for some months. And normally datanodes will follow the namenode's 
version, so it remains compatible.

> DataNodeManager#getDataNodeStorageInfos not backward compatibility
> --
>
> Key: HDFS-9513
> URL: https://issues.apache.org/jira/browse/HDFS-9513
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, namenode
>Affects Versions: 2.2.0, 2.7.1
> Environment:  2.2.0 HDFS Client &2.7.1 HDFS Cluster
>Reporter: DENG FEI
>Assignee: DENG FEI
>Priority: Blocker
> Attachments: HDFS-9513-20160621.patch, patch.HDFS-9513.20151207, 
> patch.HDFS-9513.20151216-2.7.2
>
>
> We upgraded our HDFS cluster to 2.7.1, but our YARN cluster is still 2.2.0 
> (8000+ nodes; it's too hard to upgrade it as quickly as the HDFS cluster).
> The compatibility issue happens when the DataStreamer does pipeline recovery: 
> the NN needs the DNs' storageInfo to update the pipeline, and the storageIDs 
> are paired with the pipeline's DNs. But HDFS has only supported the storage 
> type feature since 2.3.0 
> [HDFS-2832|https://issues.apache.org/jira/browse/HDFS-2832]; older versions 
> do not have storageIDs. Although the protobuf serialization keeps the 
> protocol compatible, the client will get a remote exception wrapping an 
> ArrayIndexOutOfBoundsException.
> 
> The exception stack is below (a sketch of the needed guard follows it):
> {noformat}
> 2015-12-05 20:26:38,291 ERROR [Thread-4] org.apache.hadoop.hdfs.DFSClient: 
> Failed to close file XXX
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:513)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:6439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:6404)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:892)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updatePipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:997)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1066)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy10.updatePipeline(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updatePipeline(ClientNamenodeProtocolTranslatorPB.java:801)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy11.updatePipeline(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1047)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> {noformat}
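
A minimal, self-contained sketch of the kind of guard that avoids the failure 
(hypothetical method and names; the real fix belongs in 
DatanodeManager#getDatanodeStorageInfos):
{code}
/**
 * Maps each datanode to a storageID, tolerating the short (or empty)
 * storageIDs array sent by pre-2.3.0 clients that predate HDFS-2832.
 */
static String[] mapStorageIDs(String[] datanodeUuids, String[] storageIDs) {
    String[] result = new String[datanodeUuids.length];
    for (int i = 0; i < datanodeUuids.length; i++) {
        // indexing storageIDs[i] blindly is what throws
        // ArrayIndexOutOfBoundsException for old clients
        result[i] = i < storageIDs.length ? storageIDs[i] : null;
    }
    return result;
}
{code}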



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


