[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806828#comment-15806828
 ] 

Hadoop QA commented on HDFS-10899:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-hdfs-project: The patch generated 15 new 
+ 1699 unchanged - 3 fixed = 1714 total (was 1702) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
12s{color} | {color:green} The patch generated 0 new + 106 unchanged - 1 fixed 
= 106 total (was 107) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML files. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10899 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846150/HDFS-10899.05.patch |
| Optional Tests |  asflicense  

[jira] [Commented] (HDFS-11259) Update fsck to display maintenance state info

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806700#comment-15806700
 ] 

Hadoop QA commented on HDFS-11259:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 130 unchanged - 1 fixed = 133 total (was 131) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 64m  
3s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11259 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846145/HDFS-11259.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0f00ec9b4204 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0203164 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18101/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18101/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18101/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update fsck to display maintenance state info
> -
>
> Key: HDFS-11259
> URL: https://issues.apache.org/jira/browse/HDFS-11259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>

[jira] [Commented] (HDFS-11299) Support multiple Datanode File IO hooks

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806668#comment-15806668
 ] 

Hadoop QA commented on HDFS-11299:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 45s{color} | {color:orange} root: The patch generated 33 new + 578 unchanged 
- 1 fixed = 611 total (was 579) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 16 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 45s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 67m 
59s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestClusterTopology |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11299 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846143/HDFS-11299.001.patch |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  findbugs  checkstyle  |
| uname | Linux 9d62bb0af2ce 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71a4acf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18100/artifact/patchprocess/diff-checkstyle-root.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18100/artifact/patchprocess/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18100/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-11299) Support multiple Datanode File IO hooks

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806648#comment-15806648
 ] 

Hadoop QA commented on HDFS-11299:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 42s{color} | {color:orange} root: The patch generated 33 new + 579 unchanged 
- 0 fixed = 612 total (was 579) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 16 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 87m  
2s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11299 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846143/HDFS-11299.001.patch |
| Optional Tests |  asflicense  mvnsite  compile  javac  javadoc  mvninstall  
unit  findbugs  checkstyle  |
| uname | Linux 57cc8a608c24 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71a4acf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18098/artifact/patchprocess/diff-checkstyle-root.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18098/artifact/patchprocess/whitespace-tabs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18098/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18098/console |
| Powered by | Apache 

[jira] [Commented] (HDFS-11273) Move TransferFsImage#doGetUrl function to a Util class

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806605#comment-15806605
 ] 

Hadoop QA commented on HDFS-11273:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 129 unchanged - 4 fixed = 132 total (was 133) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11273 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846136/HDFS-11273.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1e91904b6f0e 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71a4acf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18099/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18099/artifact/patchprocess/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18099/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18099/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18099/console |
| Powered by | Apache Yetus 

[jira] [Commented] (HDFS-11301) Double wrapping over RandomAccessFile in LocalReplicaInPipeline#createStreams

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806573#comment-15806573
 ] 

Hadoop QA commented on HDFS-11301:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11301 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846107/HDFS-11301.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 40ba307d4ae8 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71a4acf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18097/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18097/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18097/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Double wrapping over RandomAccessFile in 

[jira] [Updated] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2017-01-06 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10899:
-
Attachment: HDFS-10899.05.patch

Thanks a lot for the prompt review, Dilaver.

The new patch addresses the comments. It also moves the reencryptionStatus 
variable from EZM to ReencryptionHandler, to keep the info more contained.

Fixed the unit test failures, and cleaned up TestEncryptionZones a little bit.

> Add functionality to re-encrypt EDEKs.
> --
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.wip.2.patch, HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.
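
[Editor's note] The description above refers to HDFS envelope encryption: each file's data encryption key (DEK) is wrapped by an encryption-zone (EZ) key version into an EDEK, so rolling the EZ key alone does not protect already-written files. The following is a minimal, self-contained Java sketch of the re-wrap idea only; it uses plain javax.crypto with simplified, hypothetical names (wrap/unwrap, ezKeyV1/ezKeyV2) and is not the HDFS/KMS API or the mechanism this patch implements.

{code:java}
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class EdekReencryptSketch {
  // Wrap (encrypt) a per-file DEK with a given EZ key version.
  static byte[] wrap(SecretKey ezKeyVersion, SecretKey dek) throws Exception {
    Cipher c = Cipher.getInstance("AES");
    c.init(Cipher.ENCRYPT_MODE, ezKeyVersion);
    return c.doFinal(dek.getEncoded());
  }

  // Unwrap an EDEK back into the DEK using the EZ key version that produced it.
  static SecretKey unwrap(SecretKey ezKeyVersion, byte[] edek) throws Exception {
    Cipher c = Cipher.getInstance("AES");
    c.init(Cipher.DECRYPT_MODE, ezKeyVersion);
    return new SecretKeySpec(c.doFinal(edek), "AES");
  }

  public static void main(String[] args) throws Exception {
    KeyGenerator kg = KeyGenerator.getInstance("AES");
    kg.init(128);
    SecretKey ezKeyV1 = kg.generateKey();   // EZ key version at file-creation time
    SecretKey dek = kg.generateKey();       // per-file DEK
    byte[] edek = wrap(ezKeyV1, dek);       // EDEK stored with the file's metadata

    // Rolling the EZ key creates a new version, but the existing EDEK still
    // depends on ezKeyV1, so the file remains exposed to a compromise of v1.
    SecretKey ezKeyV2 = kg.generateKey();

    // Re-encryption: unwrap with the old version, re-wrap with the new one.
    // The file data itself is untouched; only its EDEK changes.
    byte[] reencryptedEdek = wrap(ezKeyV2, unwrap(ezKeyV1, edek));
    System.out.println("EDEK re-encrypted, length=" + reencryptedEdek.length);
  }
}
{code}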



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9391) Update webUI/JMX to display maintenance state info

2017-01-06 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806430#comment-15806430
 ] 

Manoj Govindassamy commented on HDFS-9391:
--

TestDataNodeVolumeFailureReporting passes locally for me. The Jenkins test 
failure doesn't look related to this patch.

> Update webUI/JMX to display maintenance state info
> --
>
> Key: HDFS-9391
> URL: https://issues.apache.org/jira/browse/HDFS-9391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Ming Ma
>Assignee: Manoj Govindassamy
> Attachments: HDFS-9391-MaintenanceMode-WebUI.pdf, HDFS-9391.01.patch, 
> HDFS-9391.02.patch, HDFS-9391.03.patch, Maintenance webUI.png
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9119) Discrepancy between edit log tailing interval and RPC timeout for transitionToActive

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806416#comment-15806416
 ] 

Hadoop QA commented on HDFS-9119:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 71m  
9s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-9119 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12766125/HDFS-9119.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3f5564989cb0 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71a4acf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18096/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18096/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Discrepancy between edit log tailing interval and RPC timeout for 
> transitionToActive
> 
>
> Key: HDFS-9119
> URL: https://issues.apache.org/jira/browse/HDFS-9119
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9119.00.patch
>
>
> 

[jira] [Updated] (HDFS-11259) Update fsck to display maintenance state info

2017-01-06 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11259:
--
Attachment: HDFS-11259.02.patch

Thanks [~eddyxu] and [~dilaver] for the review. Attached v02 patch 
incorporating test review comments.

> Update fsck to display maintenance state info
> -
>
> Key: HDFS-11259
> URL: https://issues.apache.org/jira/browse/HDFS-11259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11259.01.patch, HDFS-11259.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8538) Change the default volume choosing policy to AvailableSpaceVolumeChoosingPolicy

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806399#comment-15806399
 ] 

Hadoop QA commented on HDFS-8538:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 79 unchanged - 0 fixed = 80 total (was 79) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML files. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}125m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestHDFSFileSystemContract |
|   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
|   | hadoop.hdfs.TestEncryptedTransfer |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport |
|   | hadoop.hdfs.server.namenode.TestQuotaWithStripedBlocks |
|   | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.web.TestWebHDFSForHA |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.fs.viewfs.TestViewFileSystemWithAcls |
|   | hadoop.hdfs.server.datanode.TestDataNodeECN |
|   | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStream |
|   | hadoop.hdfs.server.blockmanagement.TestPendingReconstruction |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.TestGenericRefresh |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.fs.contract.hdfs.TestHDFSContractSeek |
|   | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot |
|   | hadoop.fs.viewfs.TestViewFsFileStatusHdfs |
|   | hadoop.hdfs.TestWriteConfigurationToDFS |
|   | hadoop.cli.TestAclCLIWithPosixAclInheritance |
|   | 

[jira] [Commented] (HDFS-8516) The 'hdfs crypto -listZones' should not print an extra newline at end of output

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806348#comment-15806348
 ] 

Hadoop QA commented on HDFS-8516:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 32s{color} | {color:orange} root: The patch generated 12 new + 2 unchanged - 
0 fixed = 14 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
44s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-8516 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12736823/HDFS-8516.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 36ad480abfce 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2977bc6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18066/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18066/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-10620) StringBuilder created and appended even if logging is disabled

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806350#comment-15806350
 ] 

Hadoop QA commented on HDFS-10620:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
56s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
22s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} branch-2 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 65m 
23s{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_121. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}189m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HDFS-10620 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820079/HDFS-10620-branch-2.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 953d2df4ed32 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 81da7d1 |
| Default Java | 1.7.0_121 |
| Multi-JDK versions |  

[jira] [Commented] (HDFS-8959) Provide an iterator-based API for listing all the snapshottable directories

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806342#comment-15806342
 ] 

Hadoop QA commented on HDFS-8959:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
827 unchanged - 0 fixed = 829 total (was 827) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-8959 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787023/HDFS-8959-02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 9ea79b885248 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71a4acf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18088/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 

[jira] [Commented] (HDFS-6515) testPageRounder (org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache)

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806329#comment-15806329
 ] 

Hadoop QA commented on HDFS-6515:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
17s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-6515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12677586/HDFS-6515-2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 286e68157eeb 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71a4acf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18086/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18086/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18086/console |
| 

[jira] [Commented] (HDFS-6525) FsShell supports HDFS TTL

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806320#comment-15806320
 ] 

Hadoop QA commented on HDFS-6525:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
56s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
33s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 33s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 38s{color} | {color:orange} root: The patch generated 8 new + 1 unchanged - 
0 fixed = 9 total (was 1) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
55s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-6525 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12652123/HDFS-6525.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux f86c80b6a5d0 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71a4acf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18095/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18095/artifact/patchprocess/patch-compile-root.txt
 |
| javac | 

[jira] [Updated] (HDFS-11299) Support multiple Datanode File IO hooks

2017-01-06 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11299:
--
Attachment: HDFS-11299.001.patch

Thank you [~arpitagarwal] for the review. I have addressed your comments in 
patch v01.

> Support multiple Datanode File IO hooks
> ---
>
> Key: HDFS-11299
> URL: https://issues.apache.org/jira/browse/HDFS-11299
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11299.000.patch, HDFS-11299.001.patch
>
>
> HDFS-10958 introduces instrumentation hooks around DataNode disk IO and 
> HDFS-10959 adds support for profiling hooks to expose latency statistics. 
> Instead of choosing only one hook through config parameters, we want to add 
> two separate hooks - one for profiling and one for fault injection. The fault 
> injection hook will be useful for testing purposes. 
> This jira only introduces support for the fault injection hook; its 
> implementation will come later.
> Also, the Default and Counting FileIOEvents would no longer be needed, since 
> enabling the profiling and fault injection hooks can be controlled through 
> config parameters.
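
For illustration only, here is a minimal sketch of the two-hook idea described 
above, with each hook toggled by its own config flag. The class, interface, and 
config key names below are assumptions made up for the sketch, not the names 
used in the attached patches.

{code:java}
import org.apache.hadoop.conf.Configuration;

/**
 * Minimal sketch (illustrative names, not the patch): a profiling hook and a
 * fault-injection hook live side by side, and each is enabled independently
 * through its own configuration flag.
 */
public class DualHookSketch {
  /** Illustrative stand-in for a DataNode file IO event hook. */
  public interface FileIoHook {
    void beforeFileIo(long length);
  }

  private final FileIoHook profilingHook;
  private final FileIoHook faultInjectionHook;

  public DualHookSketch(Configuration conf, FileIoHook profiling, FileIoHook faults) {
    // Each hook has its own flag instead of a single config parameter that
    // selects one implementation. The key names here are made up.
    this.profilingHook =
        conf.getBoolean("dfs.datanode.enable.fileio.profiling", false) ? profiling : null;
    this.faultInjectionHook =
        conf.getBoolean("dfs.datanode.enable.fileio.fault.injection", false) ? faults : null;
  }

  public void beforeFileIo(long length) {
    if (faultInjectionHook != null) {
      faultInjectionHook.beforeFileIo(length); // may simulate a disk fault in tests
    }
    if (profilingHook != null) {
      profilingHook.beforeFileIo(length);      // records latency statistics
    }
  }
}
{code}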



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11285) Dead DataNodes keep a long time in (Dead, DECOMMISSION_INPROGRESS), and never transition to (Dead, DECOMMISSIONED)

2017-01-06 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806306#comment-15806306
 ] 

Andrew Wang commented on HDFS-11285:


Thanks for the diagram, that's something we should put as a code comment in 
DecommissionManager :)

Like you say, it looks like we don't have a 5->6 transition. 
BlockManager#isNodeHealthyForDecommissionOrMaintenance requires the node to be 
alive, and actually will log a WARN with the procedure you've been running with 
5->4->6. So at least it seems like this behavior is intentional.

I'm betting, though, that the straggler blocks on the DN are caused by 
open-for-write files, and I'd prefer to solve that problem rather than 
adding a 5->6 transition. Could you run {{hdfs fsck}} with {{-openforwrite}} 
and {{-files -blocks -locations}} to confirm? Also check in the NN logs since 
we should be printing information about what blocks are preventing 
decommissioning.

> Dead DataNodes keep a long time in (Dead, DECOMMISSION_INPROGRESS), and never 
> transition to (Dead, DECOMMISSIONED)
> --
>
> Key: HDFS-11285
> URL: https://issues.apache.org/jira/browse/HDFS-11285
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Lantao Jin
> Attachments: DecomStatus.png
>
>
> We have seen the use case of decommissioning DataNodes that are already dead 
> or unresponsive, and not expected to rejoin the cluster. In a large cluster, 
> we saw more than 100 nodes that were dead and decommissioning, and their 
> {panel} Under replicated blocks {panel} {panel} Blocks with no live replicas 
> {panel} were all ZERO. This was actually fixed in 
> [HDFS-7374|https://issues.apache.org/jira/browse/HDFS-7374]. After that, we 
> could run refreshNodes twice to eliminate this case. But it seems this fix was 
> lost in the refactor 
> [HDFS-7411|https://issues.apache.org/jira/browse/HDFS-7411]. We are using a 
> Hadoop version based on 2.7.1, and only the operations below can transition 
> the status from {panel} Dead, DECOMMISSION_INPROGRESS {panel} to 
> {panel} Dead, DECOMMISSIONED {panel}:
> # Retire it from hdfs-exclude
> # refreshNodes
> # Re-add it to hdfs-exclude
> # refreshNodes
> So, why was this code removed in the refactored DecommissionManager?
> {code:java}
> if (!node.isAlive) {
>   LOG.info("Dead node " + node + " is decommissioned immediately.");
>   node.setDecommissioned();
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9935) Remove LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD and unused import from HdfsServerConstants

2017-01-06 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-9935:

Target Version/s: 3.0.0-alpha2

I set the target version to 3.0. Correct me if it's wrong. Thanks.

> Remove LEASE_{SOFTLIMIT,HARDLIMIT}_PERIOD and unused import from 
> HdfsServerConstants
> 
>
> Key: HDFS-9935
> URL: https://issues.apache.org/jira/browse/HDFS-9935
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-9935.001.patch, HDFS-9935.002.patch
>
>
> HDFS-9134 moved the {{LEASE_SOFTLIMIT_PERIOD}} and {{LEASE_HARDLIMIT_PERIOD}} 
> constants from {{HdfsServerConstants}} to {{HdfsConstants}} because these two 
> constants are used by {{DFSClient}}, which was moved to {{hadoop-hdfs-client}}, 
> and constants in {{HdfsConstants}} can be used by both the client and server 
> side. In addition, I have checked that the two constants left in 
> {{HdfsServerConstants}} are no longer used anywhere in the project; all usages 
> were replaced by {{HdfsConstants.LEASE_SOFTLIMIT_PERIOD}} and 
> {{HdfsConstants.LEASE_HARDLIMIT_PERIOD}}.
> So I think we can remove these unused constants from {{HdfsServerConstants}} 
> completely, and use the ones in {{HdfsConstants}} if we need them in the 
> future.
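
As a small illustration of why the leftover copies are dead weight, here is a 
minimal sketch (assuming the constants moved by HDFS-9134 are the ones in 
{{org.apache.hadoop.hdfs.protocol.HdfsConstants}}) of code that only needs the 
shared class:

{code:java}
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

// Minimal sketch: both client- and server-side code can read the lease limits
// from HdfsConstants, so the copies left in HdfsServerConstants have no callers.
public class LeaseLimitSketch {
  public static boolean isPastSoftLimit(long lastRenewalMillis, long nowMillis) {
    return nowMillis - lastRenewalMillis > HdfsConstants.LEASE_SOFTLIMIT_PERIOD;
  }

  public static boolean isPastHardLimit(long lastRenewalMillis, long nowMillis) {
    return nowMillis - lastRenewalMillis > HdfsConstants.LEASE_HARDLIMIT_PERIOD;
  }
}
{code}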



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8508) TestJournalNode fails occasionally with bind exception

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806252#comment-15806252
 ] 

Hadoop QA commented on HDFS-8508:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-8508 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12800418/HDFS-8508.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 05e3a298938a 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2977bc6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18078/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18078/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18078/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestJournalNode fails occasionally with bind exception
> --
>
> Key: HDFS-8508
> URL: https://issues.apache.org/jira/browse/HDFS-8508
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test

[jira] [Updated] (HDFS-11273) Move TransferFsImage#doGetUrl function to a Util class

2017-01-06 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11273:
--
Attachment: HDFS-11273.003.patch

Fixing Jenkins errors in v03.

> Move TransferFsImage#doGetUrl function to a Util class
> --
>
> Key: HDFS-11273
> URL: https://issues.apache.org/jira/browse/HDFS-11273
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11273.000.patch, HDFS-11273.001.patch, 
> HDFS-11273.002.patch, HDFS-11273.003.patch
>
>
> TransferFsImage#doGetUrl downloads files from the specified URL and stores 
> them in the specified storage location. HDFS-4025 plans to synchronize the 
> log segments in JournalNodes. If a log segment is missing from a JN, the JN 
> downloads it from another JN which has the required log segment. We need 
> TransferFsImage#doGetUrl and TransferFsImage#receiveFile to accomplish this. 
> So we propose to move the said functions to a utility class so that they can 
> be used for JournalNode syncing as well, without duplicating code.
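
As a rough sketch of the proposed refactor, the download helper would become a 
static method on a shared utility class that both TransferFsImage and the 
JournalNode sync code can call. The class name and signature below are 
illustrative assumptions, not the patch; the real code also has to handle 
security, timeouts, and checksum verification.

{code:java}
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

/** Illustrative sketch of a shared utility class for image/edits transfer. */
public final class ImageTransferUtil {
  private ImageTransferUtil() {}

  /** Fetch the resource at the given URL and store a copy at the local path. */
  public static void doGetUrl(URL url, File localPath) throws IOException {
    try (InputStream in = url.openStream()) {
      Files.copy(in, localPath.toPath(), StandardCopyOption.REPLACE_EXISTING);
    }
  }
}
{code}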



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11300) HDFS Nameservices introduces DNS latency inter-dependencies between namenodes

2017-01-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-11300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806232#comment-15806232
 ] 

Rémy SAISSY commented on HDFS-11300:


I agree, but it does not fix the dependency issue.

It gives more time to fix a DNS outage as long as the local caches have not 
expired.
But the issue might still arise if, say, the outage starts a few moments 
before the local cache expires on some job submission gateway machines.
Unless you set your caches to never expire and flush them manually when you 
change your namenodes, which indeed doesn't happen every day.





> HDFS Nameservices introduces DNS latency inter-dependencies between namenodes
> -
>
> Key: HDFS-11300
> URL: https://issues.apache.org/jira/browse/HDFS-11300
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, hdfs-client
>Affects Versions: 2.6.0
>Reporter: Rémy SAISSY
>
> When using HDFS Nameservices, a DNS outage or high latency on one of the 
> nameservices will impact any DFSClient or other component that has this 
> nameservice in its hdfs-site.xml, even though it doesn't use it.
> The issue is due to all nameservices being listed in hdfs-site.xml, and the 
> Hadoop code trying to resolve all of them upfront.
> To illustrate the issue, here is the use case we have:
> * the 'prod' cluster with its dedicated DNS
> * the 'preprod' cluster with its dedicated DNS
> * both 'prod' and 'preprod' hdfs-site.xml files contain each other's 
> nameservices so that they can use HDFS as follows:
> (from prod) hdfs dfs -ls hdfs://preprod/user/j.doe
> A DNS outage on the preprod cluster slows down the scheduling of production 
> jobs because of the upfront resolution of the hdfs-site.xml nameservice 
> entries, even though a specific job doesn't use them.
> A solution is to make this resolution lazy instead of upfront.
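
To make the lazy idea concrete, here is a minimal sketch (an illustration of 
the approach, not the proposed patch): keep the host and port from 
hdfs-site.xml unresolved, and only pay the DNS lookup when a client actually 
contacts that nameservice.

{code:java}
import java.net.InetSocketAddress;

public class LazyNameserviceAddress {
  private final String host;
  private final int port;
  private volatile InetSocketAddress resolved;

  public LazyNameserviceAddress(String host, int port) {
    this.host = host; // no DNS lookup happens at configuration-parsing time
    this.port = port;
  }

  /** Resolve on first use; a DNS outage on an unused nameservice costs nothing. */
  public InetSocketAddress get() {
    InetSocketAddress addr = resolved;
    if (addr == null || addr.isUnresolved()) {
      addr = new InetSocketAddress(host, port); // this is where the lookup happens
      resolved = addr;
    }
    return addr;
  }
}
{code}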



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8944) Make dfsadmin command options case insensitive

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806231#comment-15806231
 ] 

Hadoop QA commented on HDFS-8944:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-8944 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12814966/HDFS-8944.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9453e5da7fbc 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2977bc6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18076/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18076/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18076/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make dfsadmin command options case insensitive
> --
>
> Key: HDFS-8944
> URL: https://issues.apache.org/jira/browse/HDFS-8944
> Project: Hadoop HDFS
>  Issue Type: Improvement

[jira] [Issue Comment Deleted] (HDFS-11273) Move TransferFsImage#doGetUrl function to a Util class

2017-01-06 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11273:
--
Comment: was deleted

(was: Resolving compilation and checkstyle errors in v03.)

> Move TransferFsImage#doGetUrl function to a Util class
> --
>
> Key: HDFS-11273
> URL: https://issues.apache.org/jira/browse/HDFS-11273
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11273.000.patch, HDFS-11273.001.patch, 
> HDFS-11273.002.patch
>
>
> TransferFsImage#doGetUrl downloads files from the specified URL and stores 
> them in the specified storage location. HDFS-4025 plans to synchronize the 
> log segments in JournalNodes. If a log segment is missing from a JN, the JN 
> downloads it from another JN which has the required log segment. We need 
> TransferFsImage#doGetUrl and TransferFsImage#receiveFile to accomplish this. 
> So we propose to move the said functions to a utility class so that they can 
> be used for JournalNode syncing as well, without duplicating code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11273) Move TransferFsImage#doGetUrl function to a Util class

2017-01-06 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11273:
--
Attachment: (was: HDFS-11273.003.patch)

> Move TransferFsImage#doGetUrl function to a Util class
> --
>
> Key: HDFS-11273
> URL: https://issues.apache.org/jira/browse/HDFS-11273
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11273.000.patch, HDFS-11273.001.patch, 
> HDFS-11273.002.patch
>
>
> TransferFsImage#doGetUrl downloads files from the specified URL and stores 
> them in the specified storage location. HDFS-4025 plans to synchronize the 
> log segments in JournalNodes. If a log segment is missing from a JN, the JN 
> downloads it from another JN which has the required log segment. We need 
> TransferFsImage#doGetUrl and TransferFsImage#receiveFile to accomplish this. 
> So we propose to move the said functions to a utility class so that they can 
> be used for JournalNode syncing as well, without duplicating code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11273) Move TransferFsImage#doGetUrl function to a Util class

2017-01-06 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11273:
--
Attachment: HDFS-11273.003.patch

Resolving compilation and checkstyle errors in v03.

> Move TransferFsImage#doGetUrl function to a Util class
> --
>
> Key: HDFS-11273
> URL: https://issues.apache.org/jira/browse/HDFS-11273
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11273.000.patch, HDFS-11273.001.patch, 
> HDFS-11273.002.patch, HDFS-11273.003.patch
>
>
> TransferFsImage#doGetUrl downloads files from the specified URL and stores 
> them in the specified storage location. HDFS-4025 plans to synchronize the 
> log segments in JournalNodes. If a log segment is missing from a JN, the JN 
> downloads it from another JN which has the required log segment. We need 
> TransferFsImage#doGetUrl and TransferFsImage#receiveFile to accomplish this. 
> So we propose to move the said functions to a utility class so that they can 
> be used for JournalNode syncing as well, without duplicating code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10480) Add an admin command to list currently open files

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806202#comment-15806202
 ] 

Hadoop QA commented on HDFS-10480:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
621 unchanged - 2 fixed = 625 total (was 623) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10480 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824136/HDFS-10480-trunk-1.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux fd21136c5344 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2977bc6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18054/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 

[jira] [Commented] (HDFS-3570) Balancer shouldn't rely on "DFS Space Used %" as that ignores non-DFS used space

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806198#comment-15806198
 ] 

Hadoop QA commented on HDFS-3570:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 45 unchanged - 0 fixed = 46 total (was 45) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestEncryptionZones |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-3570 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12746476/HDFS-3570.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b9d4a9e6071d 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2977bc6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18060/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18060/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18060/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18060/console |
| Powered by | Apache Yetus 

[jira] [Commented] (HDFS-11299) Support multiple Datanode File IO hooks

2017-01-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806199#comment-15806199
 ] 

Arpit Agarwal commented on HDFS-11299:
--

Thanks for this improvement [~hanishakoneru]. Comments below:

# FaultInjectorFileIoEvents#beforeMetadataOp and 
FaultInjectorFileIoEvents#beforeFileIo can return void.
# We can just remove FileIoProvider#getStatistics, also 
FaultInjectorFileIoEvents#getStatistics and 
Datanode#getFileIoProviderStatistics.
# We should keep the DiskChecker invocation out of ProfilingFileIoEvents. One 
way is to add a failure handler in FileIoProvider that invokes DiskChecker and 
then invokes ProfilingFileIoEvents#onFailure (see the sketch below).
# Minor: The config key names can be simplified. e.g. 
{{dfs.datanode.fileio.events.enabled.profiling}} ==> 
{{dfs.datanode.enable.fileio.profiling}}.


Looks good otherwise.
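
For comment 3, a minimal sketch of what such a failure handler could look like 
(illustrative interfaces and names only, not a prescription for the patch):

{code:java}
// FileIoProvider owns the failure path: it runs the disk check itself and
// only then notifies the profiling hook, so ProfilingFileIoEvents stays free
// of DiskChecker calls.
class FailureHandlerSketch {
  interface ProfilingHook {
    void onFailure(long beginNanos); // records the failed operation
  }

  interface VolumeChecker {
    void checkVolume(); // triggers a disk check on the affected volume
  }

  private final ProfilingHook profilingHook;
  private final VolumeChecker volumeChecker;

  FailureHandlerSketch(ProfilingHook hook, VolumeChecker checker) {
    this.profilingHook = hook;
    this.volumeChecker = checker;
  }

  /** Called by the IO wrapper methods when an operation throws. */
  void onFailure(long beginNanos) {
    volumeChecker.checkVolume();         // disk check stays in the provider
    profilingHook.onFailure(beginNanos); // profiling hook only records stats
  }
}
{code}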

> Support multiple Datanode File IO hooks
> ---
>
> Key: HDFS-11299
> URL: https://issues.apache.org/jira/browse/HDFS-11299
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11299.000.patch
>
>
> HDFS-10958 introduces instrumentation hooks around DataNode disk IO and 
> HDFS-10959 adds support for profiling hooks to expose latency statistics. 
> Instead of choosing only one hook through config parameters, we want to add 
> two separate hooks - one for profiling and one for fault injection. The fault 
> injection hook will be useful for testing purposes. 
> This jira only introduces support for the fault injection hook; its 
> implementation will come later.
> Also, the Default and Counting FileIOEvents would no longer be needed, since 
> enabling the profiling and fault injection hooks can be controlled through 
> config parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6853) MiniDFSCluster.isClusterUp() should not check if null NameNodes are up

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806192#comment-15806192
 ] 

Hadoop QA commented on HDFS-6853:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HDFS-6853 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-6853 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12662523/HDFS-6853.2.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18094/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MiniDFSCluster.isClusterUp() should not check if null NameNodes are up
> --
>
> Key: HDFS-6853
> URL: https://issues.apache.org/jira/browse/HDFS-6853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: James Thomas
>Assignee: James Thomas
> Attachments: HDFS-6853.2.patch, HDFS-6853.patch
>
>
> Suppose we have a two-NN cluster and then shut down one of the NNs (NN0). 
> When we try to restart the other NN (NN1), we wait for isClusterUp() to 
> return true, but this will never happen because NN0 is null and isNameNodeUp 
> returns false for a null NN.
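
A minimal sketch of the null-safe check described above (illustrative only, 
not the attached patch):

{code:java}
import java.util.List;
import java.util.Objects;

class ClusterUpCheckSketch {
  interface NameNodeHandle {
    boolean isUp();
  }

  /** Skip null entries so a deliberately shut-down NameNode is ignored. */
  static boolean isClusterUp(List<NameNodeHandle> nameNodes) {
    return nameNodes.stream()
        .filter(Objects::nonNull)
        .allMatch(NameNodeHandle::isUp);
  }
}
{code}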



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6450) Support non-positional hedged reads in HDFS

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806191#comment-15806191
 ] 

Hadoop QA commented on HDFS-6450:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HDFS-6450 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-6450 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12702940/HDFS-7782-001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18093/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support non-positional hedged reads in HDFS
> ---
>
> Key: HDFS-6450
> URL: https://issues.apache.org/jira/browse/HDFS-6450
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.4.0
>Reporter: Colin P. McCabe
>Assignee: Liang Xie
> Attachments: HDFS-6450-like-pread.txt
>
>
> HDFS-5776 added support for hedged positional reads.  We should also support 
> hedged non-positional reads (aka regular reads).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6526) Implement HDFS TtlManager

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806189#comment-15806189
 ] 

Hadoop QA commented on HDFS-6526:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-6526 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-6526 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12651960/HDFS-6526.1.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18092/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement HDFS TtlManager
> -
>
> Key: HDFS-6526
> URL: https://issues.apache.org/jira/browse/HDFS-6526
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode
>Affects Versions: 2.4.0
>Reporter: Zesheng Wu
>Assignee: Zesheng Wu
> Attachments: HDFS-6526.1.patch
>
>
> This issue is used to track development of HDFS TtlManager, for details see 
> HDFS-6382.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9011) Support splitting BlockReport of a storage into multiple RPC

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806187#comment-15806187
 ] 

Hadoop QA commented on HDFS-9011:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HDFS-9011 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-9011 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12754275/HDFS-9011.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18091/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support splitting BlockReport of a storage into multiple RPC
> 
>
> Key: HDFS-9011
> URL: https://issues.apache.org/jira/browse/HDFS-9011
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-9011.000.patch, HDFS-9011.001.patch, 
> HDFS-9011.002.patch
>
>
> Currently, if a DataNode has too many blocks (more than 1m by default), it 
> sends multiple RPCs to the NameNode for the block report, where each RPC 
> contains the report for a single storage. However, in practice we've seen 
> that sometimes even a single storage can contain a large number of blocks, 
> and its report can exceed the max RPC data length. It may be helpful to 
> support sending multiple RPCs for the block report of a single storage. 
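
To illustrate the splitting idea (a sketch only, not the attached patches): cut 
one storage's block list into bounded chunks so that each chunk fits in its own 
block report RPC.

{code:java}
import java.util.ArrayList;
import java.util.List;

class BlockReportSplitterSketch {
  /** Split one storage's report into chunks of at most maxBlocksPerRpc blocks. */
  static <T> List<List<T>> split(List<T> blocks, int maxBlocksPerRpc) {
    List<List<T>> chunks = new ArrayList<>();
    for (int from = 0; from < blocks.size(); from += maxBlocksPerRpc) {
      int to = Math.min(from + maxBlocksPerRpc, blocks.size());
      chunks.add(blocks.subList(from, to)); // one chunk per RPC payload
    }
    return chunks;
  }
}
{code}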



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-6555) Specify file encryption attributes at create time

2017-01-06 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-6555.
---
Resolution: Not A Problem

I think we can resolve this since crypto info is passed around as an xattr. 
With /.reserved/raw, we support distcp without decryption.

> Specify file encryption attributes at create time
> -
>
> Key: HDFS-6555
> URL: https://issues.apache.org/jira/browse/HDFS-6555
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>
> We need to create a Crypto Blob for passing around crypto info. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11302) Improve Logging for SSLHostnameVerifier

2017-01-06 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-11302:
-
Target Version/s: 2.8.0

> Improve Logging for SSLHostnameVerifier
> ---
>
> Key: HDFS-11302
> URL: https://issues.apache.org/jira/browse/HDFS-11302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
>
> The SSLHostnameVerifier interface/class was copied from other projects without 
> any logging to help troubleshoot SSL certificate related issues. For a 
> misconfigured SSL truststore, we may get a very confusing error message 
> like
> {code}
> >hdfs dfs -cat swebhdfs://NNl/tmp/test1.txt
> ...
> cause:java.io.IOException: DN2:50475: HTTPS hostname wrong:  should be 
> cat: DN2:50475: HTTPS hostname wrong:  should be 
> {code}
> This ticket is opened to add tracing to give more useful context information 
> around SSL certificate verification failures inside the following code.
> {code}AbstractVerifier#check(String[] host, X509Certificate cert) {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10358) Refactor EncryptionZone and EncryptionZoneInt

2017-01-06 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10358:
---
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HDFS-6891)

> Refactor EncryptionZone and EncryptionZoneInt
> -
>
> Key: HDFS-10358
> URL: https://issues.apache.org/jira/browse/HDFS-10358
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>
> The {{EncryptionZone}} and {{EncryptionZoneInt}} classes duplicate most of the 
> fields in {{EncryptionZoneInt}}. We can refactor them so those fields are reused.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6668) Add and expose KeyProvider-related metrics on NameNode

2017-01-06 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6668:
--
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HDFS-6891)

> Add and expose KeyProvider-related metrics on NameNode
> --
>
> Key: HDFS-6668
> URL: https://issues.apache.org/jira/browse/HDFS-6668
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>
> Would be good to expose the health of the KeyProvider on the NameNode, e.g. 
> request latency, # of failures, # ops, etc.
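
For illustration only, the kind of counters this could expose is sketched below 
with plain {{AtomicLong}}s; the field names are illustrative, and an actual patch 
would presumably plug into the NameNode's metrics system instead.

{code}
import java.util.concurrent.atomic.AtomicLong;

// Hedged sketch of KeyProvider-related counters; names are illustrative only.
class KeyProviderMetricsSketch {
  final AtomicLong generateEdekOps = new AtomicLong();
  final AtomicLong generateEdekFailures = new AtomicLong();
  final AtomicLong totalLatencyMs = new AtomicLong();

  void recordGenerateEdek(long latencyMs, boolean success) {
    generateEdekOps.incrementAndGet();
    totalLatencyMs.addAndGet(latencyMs);
    if (!success) {
      generateEdekFailures.incrementAndGet();
    }
  }
}
{code}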



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6555) Specify file encryption attributes at create time

2017-01-06 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6555:
--
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HDFS-6891)

> Specify file encryption attributes at create time
> -
>
> Key: HDFS-6555
> URL: https://issues.apache.org/jira/browse/HDFS-6555
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>
> We need to create a Crypto Blob for passing around crypto info. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11302) Improve Logging for SSLHostnameVerifier

2017-01-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806156#comment-15806156
 ] 

Mingliang Liu commented on HDFS-11302:
--

Given the pain we went through in one of our support cases, I'm strongly +1 on 
this proposal. Thanks Xiaoyu.

> Improve Logging for SSLHostnameVerifier
> ---
>
> Key: HDFS-11302
> URL: https://issues.apache.org/jira/browse/HDFS-11302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
>Priority: Minor
>
> The SSLHostnameVerifier interface/class was copied from other projects without 
> any logging to help troubleshoot SSL certificate related issues. For a 
> misconfigured SSL truststore, we may get a very confusing error message 
> like
> {code}
> >hdfs dfs -cat swebhdfs://NNl/tmp/test1.txt
> ...
> cause:java.io.IOException: DN2:50475: HTTPS hostname wrong:  should be 
> cat: DN2:50475: HTTPS hostname wrong:  should be 
> {code}
> This ticket is opened to add tracing to give more useful context information 
> around SSL certificate verification failures inside the following code.
> {code}AbstractVerifier#check(String[] host, X509Certificate cert) {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6971) Bounded staleness of EDEK caches on the NN

2017-01-06 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6971:
--
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HDFS-6891)

> Bounded staleness of EDEK caches on the NN
> --
>
> Key: HDFS-6971
> URL: https://issues.apache.org/jira/browse/HDFS-6971
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Reporter: Andrew Wang
>Assignee: Zhe Zhang
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6971-20140930-v1.patch
>
>
> The EDEK cache on the NN can hold onto keys after the admin has rolled the 
> key. It'd be good to time-bound the caches, perhaps also providing an 
> explicit "flush" command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11182) Update DataNode to use DatasetVolumeChecker

2017-01-06 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806132#comment-15806132
 ] 

Xiaoyu Yao commented on HDFS-11182:
---

Thanks [~arpitagarwal] for working on the branch-2 patch. It looks good to me. 
+1

> Update DataNode to use DatasetVolumeChecker
> ---
>
> Key: HDFS-11182
> URL: https://issues.apache.org/jira/browse/HDFS-11182
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11182-branch-2.01.patch
>
>
> Update DataNode to use the DatasetVolumeChecker class introduced in 
> HDFS-11149 to parallelize disk checks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9391) Update webUI/JMX to display maintenance state info

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806116#comment-15806116
 ] 

Hadoop QA commented on HDFS-9391:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 276 unchanged - 1 fixed = 276 total (was 277) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-9391 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846084/HDFS-9391.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a930eade1737 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2977bc6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18052/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18052/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18052/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update webUI/JMX to display maintenance state info
> --
>
> Key: HDFS-9391
> URL: https://issues.apache.org/jira/browse/HDFS-9391
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 

[jira] [Created] (HDFS-11302) Improve Logging for SSLHostnameVerifier

2017-01-06 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-11302:
-

 Summary: Improve Logging for SSLHostnameVerifier
 Key: HDFS-11302
 URL: https://issues.apache.org/jira/browse/HDFS-11302
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: security
Reporter: Xiaoyu Yao
Assignee: Chen Liang
Priority: Minor


The SSLHostnameVerifier interface/class was copied from other projects without any 
logging to help troubleshoot SSL certificate related issues. For a 
misconfigured SSL truststore, we may get a very confusing error message like

{code}
>hdfs dfs -cat swebhdfs://NNl/tmp/test1.txt
...
cause:java.io.IOException: DN2:50475: HTTPS hostname wrong:  should be 
cat: DN2:50475: HTTPS hostname wrong:  should be 
{code}

This ticket is opened to add tracing to give more useful context information 
around SSL certificate verification failures inside the following code.

{code}AbstractVerifier#check(String[] host, X509Certificate cert) {code}
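
As a rough sketch of the kind of context such logging could add (the log format 
and the use of SLF4J here are illustrative only, not the actual patch):

{code}
import java.security.cert.X509Certificate;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hedged sketch: log the hosts being verified and the certificate subject so a
// mismatch is easier to diagnose. Not the actual patch.
class VerifierLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(VerifierLoggingSketch.class);

  void check(String[] hosts, X509Certificate cert) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Verifying hosts {} against certificate subject {}",
          java.util.Arrays.toString(hosts),
          cert.getSubjectX500Principal().getName());
    }
    // ... existing hostname verification logic would follow here ...
  }
}
{code}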



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11259) Update fsck to display maintenance state info

2017-01-06 Thread Dilaver (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806030#comment-15806030
 ] 

Dilaver commented on HDFS-11259:


Thanks for the change Manoj. LGTM.
Some nits around test polling and timeouts:
{quote}
{code}
+do {
+  Thread.sleep(2000);
+  for (DatanodeInfo info : dfs.getDataNodeStats()) {
+if (dnName.equals(info.getXferAddr())) {
+  datanodeInfo = info;
+}
+  }
+} while (datanodeInfo != null && !datanodeInfo.isInMaintenance());
{code}
{quote}
- please check whether something like {{GenericTestUtils#waitFor}} can be used 
where the state is being polled, or whether refactoring it into something like 
"waitForInMaintenance" makes sense (a rough sketch follows below)
- consider sleeping less than 2 sec while polling
- also consider adding timeouts to the test cases
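
A rough sketch of the first suggestion, reusing the {{dfs}} and {{dnName}} from 
the quoted loop and assuming {{GenericTestUtils#waitFor}} with a Guava 
{{Supplier}}; the 100 ms poll interval and 30 s timeout are just example values:

{code}
// Hedged sketch only: replace the hand-rolled sleep loop with waitFor().
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    try {
      for (DatanodeInfo info : dfs.getDataNodeStats()) {
        if (dnName.equals(info.getXferAddr())) {
          return info.isInMaintenance();
        }
      }
      return false;
    } catch (IOException e) {
      return false;
    }
  }
}, 100, 30000);
{code}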

> Update fsck to display maintenance state info
> -
>
> Key: HDFS-11259
> URL: https://issues.apache.org/jira/browse/HDFS-11259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11259.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8694) Expose the stats of IOErrors on each FsVolume through JMX

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806019#comment-15806019
 ] 

Hadoop QA commented on HDFS-8694:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HDFS-8694 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-8694 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12743409/HDFS-8694.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18085/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Expose the stats of IOErrors on each FsVolume through JMX
> -
>
> Key: HDFS-8694
> URL: https://issues.apache.org/jira/browse/HDFS-8694
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-8694.000.patch, HDFS-8694.001.patch
>
>
> Currently, once the DataNode hits an {{IOError}} when writing / reading block 
> files, it starts a background {{DiskChecker.checkDirs()}} thread. But if this 
> thread finishes successfully, the DN does not record the {{IOError}}. 
> We need a measurement that counts all {{IOErrors}} for each volume.
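
A hedged sketch of the measurement being asked for is below: a plain per-volume 
counter with illustrative names; the real patch would presumably expose these 
counts through the DataNode's JMX beans rather than a standalone class.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hedged sketch: count IOErrors per volume so they are remembered even when a
// subsequent DiskChecker.checkDirs() run succeeds. Names are illustrative.
class VolumeIoErrorCounter {
  private final Map<String, AtomicLong> errorsPerVolume =
      new ConcurrentHashMap<>();

  void recordIoError(String volumePath) {
    errorsPerVolume.computeIfAbsent(volumePath, v -> new AtomicLong())
        .incrementAndGet();
  }

  long getIoErrors(String volumePath) {
    AtomicLong c = errorsPerVolume.get(volumePath);
    return c == null ? 0 : c.get();
  }
}
{code}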



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7527) TestDecommission.testIncludeByRegistrationName fails occasionally in trunk

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806018#comment-15806018
 ] 

Hadoop QA commented on HDFS-7527:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HDFS-7527 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-7527 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12687971/HDFS-7527.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18084/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestDecommission.testIncludeByRegistrationName fails occasionally in trunk
> ---
>
> Key: HDFS-7527
> URL: https://issues.apache.org/jira/browse/HDFS-7527
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, test
>Reporter: Yongjun Zhang
>Assignee: Binglin Chang
> Attachments: HDFS-7527.001.patch, HDFS-7527.002.patch
>
>
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport/
> {quote}
> Error Message
> test timed out after 36 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 36 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName(TestDecommission.java:957)
> 2014-12-15 12:00:19,958 ERROR datanode.DataNode 
> (BPServiceActor.java:run(836)) - Initialization failed for Block pool 
> BP-887397778-67.195.81.153-1418644469024 (Datanode Uuid null) service to 
> localhost/127.0.0.1:40565 Datanode denied communication with namenode because 
> the host is not in the include-list: DatanodeRegistration(127.0.0.1, 
> datanodeUuid=55d8cbff-d8a3-4d6d-ab64-317fff0ee279, infoPort=54318, 
> infoSecurePort=0, ipcPort=43726, 
> storageInfo=lv=-56;cid=testClusterID;nsid=903754315;c=0)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:915)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:4402)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1196)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:92)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:26296)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:637)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:966)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2127)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2123)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2121)
> 2014-12-15 12:00:29,087 FATAL datanode.DataNode 
> (BPServiceActor.java:run(841)) - Initialization failed for Block pool 
> BP-887397778-67.195.81.153-1418644469024 (Datanode Uuid null) service to 
> localhost/127.0.0.1:40565. Exiting. 
> java.io.IOException: DN shut down before block pool connected
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.retrieveNamespaceInfo(BPServiceActor.java:186)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:216)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:829)
>   at java.lang.Thread.run(Thread.java:745)
> {quote}
> Found by tool proposed in HADOOP-11045:
> {quote}
> [yzhang@localhost jenkinsftf]$ ./determine-flaky-tests-hadoop.py -j 
> Hadoop-Hdfs-trunk -n 5 | tee bt.log
> Recently FAILED builds in url: 
> https://builds.apache.org//job/Hadoop-Hdfs-trunk
> THERE ARE 4 builds (out of 6) that have failed tests in the past 5 days, 
> as listed below:
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport 
> (2014-12-15 03:30:01)
> Failed test: 
> org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
> Failed test: 
> 

[jira] [Commented] (HDFS-9586) listCorruptFileBlocks should not output files that all replications are decommissioning

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806017#comment-15806017
 ] 

Hadoop QA commented on HDFS-9586:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HDFS-9586 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-9586 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12778829/9586-v1.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18083/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> listCorruptFileBlocks should not output files that all replications are 
> decommissioning
> ---
>
> Key: HDFS-9586
> URL: https://issues.apache.org/jira/browse/HDFS-9586
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: 9586-v1.patch
>
>
> As HDFS-7933 noted, we should count decommissioning and decommissioned nodes 
> separately and regard decommissioning nodes as special live nodes, so that a 
> file whose replicas are on them is not considered corrupt or missing.
> So in listCorruptFileBlocks, which is used by fsck and the HDFS NameNode web UI, 
> we should report a file as corrupt only if its liveReplicas and decommissioning 
> counts are both 0.
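
Illustratively, the check shifts from "no live replicas" to "no live and no 
decommissioning replicas", roughly as below (variable names assumed):

{code}
// Hedged sketch: treat a block as corrupt for listCorruptFileBlocks only when
// it has neither live nor decommissioning replicas. Names are illustrative.
boolean isCorruptForListing(int liveReplicas, int decommissioningReplicas) {
  return liveReplicas == 0 && decommissioningReplicas == 0;
}
{code}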



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8976) Create HTML5 cluster webconsole for federated cluster

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806013#comment-15806013
 ] 

Hadoop QA commented on HDFS-8976:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-8976 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12755403/HDFS-8976-03.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux a61f5781189a 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2977bc6 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18081/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Create HTML5 cluster webconsole for federated cluster
> -
>
> Key: HDFS-8976
> URL: https://issues.apache.org/jira/browse/HDFS-8976
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.7.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-8976-01.patch, HDFS-8976-02.patch, 
> HDFS-8976-03.patch, cluster-health.JPG
>
>
> Since the old JSP variant of the cluster web console is no longer present from 
> 2.7 onwards, there is a need for an HTML5 web console that gives an overview of 
> the overall cluster.
> The 2.7.1 docs say to check the web console as below: {noformat}Similar to the 
> Namenode status web page, when using federation a Cluster Web Console is 
> available to monitor the federated cluster at 
> http:///dfsclusterhealth.jsp. Any Namenode in the cluster 
> can be used to access this web page.{noformat}
> But this page is no longer present.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6914) Resolve huge memory consumption Issue with OIV processing PB-based fsimages

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15806015#comment-15806015
 ] 

Hadoop QA commented on HDFS-6914:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-6914 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-6914 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12671201/HDFS-6914.v2.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18082/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Resolve huge memory consumption Issue with OIV processing PB-based fsimages
> ---
>
> Key: HDFS-6914
> URL: https://issues.apache.org/jira/browse/HDFS-6914
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Hao Chen
>  Labels: hdfs
> Attachments: HDFS-6914.patch, HDFS-6914.v2.patch
>
>
> To better manage and support many large Hadoop clusters in production, we 
> internally need to automatically export the fsimage to delimited text files in 
> LSR style and then analyse them with Hive or Pig, or build system metrics for 
> real-time analysis. 
> However, due to the internal layout changes introduced by the protobuf-based 
> fsimage, the OIV processing program consumes an excessive amount of memory. For 
> example, exporting an 8 GB fsimage took about 85 GB of memory, which is not 
> reasonable and badly impacted the performance of other services on the same 
> server.
> To resolve the above problem, I am submitting this patch, which reduces the 
> memory consumption of OIV LSR processing by 50%.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11301) Double wrapping over RandomAccessFile in LocalReplicaInPipeline#createStreams

2017-01-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15805995#comment-15805995
 ] 

Arpit Agarwal commented on HDFS-11301:
--

+1 pending Jenkins.

> Double wrapping over RandomAccessFile in LocalReplicaInPipeline#createStreams
> -
>
> Key: HDFS-11301
> URL: https://issues.apache.org/jira/browse/HDFS-11301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-11301.000.patch
>
>
> In LocalReplicaInPipeline#createStreams, there is a WrappedFileOutputStream 
> created over a WrappedRandomAccessFile. This double layer of instrumentation 
> is unnecessary. 
> {quote}
>   blockOut = fileIoProvider.getFileOutputStream(getVolume(),
>   fileIoProvider.getRandomAccessFile(getVolume(), blockFile, "rw")
>   .getFD());
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11301) Double wrapping over RandomAccessFile in LocalReplicaInPipeline#createStreams

2017-01-06 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11301:
--
Attachment: HDFS-11301.000.patch

Removed the double layer of instrumentation by passing a RandomAccessFile 
instead of a WrappedRandomAccessFile to getFileOutputStream.
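
Roughly, the change amounts to the following sketch, based on the snippet quoted 
in the description below; the exact form may differ from the attached patch.

{code}
// Before (double wrapping): the RandomAccessFile returned by fileIoProvider is
// already instrumented, and its FD is then wrapped again by getFileOutputStream.
blockOut = fileIoProvider.getFileOutputStream(getVolume(),
    fileIoProvider.getRandomAccessFile(getVolume(), blockFile, "rw").getFD());

// After (sketch): open a plain RandomAccessFile so that only the output stream
// layer is instrumented.
blockOut = fileIoProvider.getFileOutputStream(getVolume(),
    new RandomAccessFile(blockFile, "rw").getFD());
{code}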

> Double wrapping over RandomAccessFile in LocalReplicaInPipeline#createStreams
> -
>
> Key: HDFS-11301
> URL: https://issues.apache.org/jira/browse/HDFS-11301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-11301.000.patch
>
>
> In LocalReplicaInPipeline#createStreams, there is a WrappedFileOutputStream 
> created over a WrappedRandomAccessFile. This double layer of instrumentation 
> is unnecessary. 
> {quote}
>   blockOut = fileIoProvider.getFileOutputStream(getVolume(),
>   fileIoProvider.getRandomAccessFile(getVolume(), blockFile, "rw")
>   .getFD());
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11301) Double wrapping over RandomAccessFile in LocalReplicaInPipeline#createStreams

2017-01-06 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11301:
--
Status: Patch Available  (was: Open)

> Double wrapping over RandomAccessFile in LocalReplicaInPipeline#createStreams
> -
>
> Key: HDFS-11301
> URL: https://issues.apache.org/jira/browse/HDFS-11301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-11301.000.patch
>
>
> In LocalReplicaInPipeline#createStreams, there is a WrappedFileOutputStream 
> created over a WrappedRandomAccessFile. This double layer of instrumentation 
> is unnecessary. 
> {quote}
>   blockOut = fileIoProvider.getFileOutputStream(getVolume(),
>   fileIoProvider.getRandomAccessFile(getVolume(), blockFile, "rw")
>   .getFD());
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6660) Use int instead of object reference to DatanodeStorageInfo in BlockInfo triplets,

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15805983#comment-15805983
 ] 

Hadoop QA commented on HDFS-6660:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-6660 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-6660 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12666461/HDFS-6660.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18080/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use int instead of object reference to DatanodeStorageInfo in BlockInfo 
> triplets,
> -
>
> Key: HDFS-6660
> URL: https://issues.apache.org/jira/browse/HDFS-6660
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.4.1
>Reporter: Amir Langer
>Assignee: Amir Langer
>  Labels: performance
> Attachments: HDFS-6660.patch
>
>
> Map an int index to every DatanodeStorageInfo and use it instead of object 
> reference in the BlockInfo triplets data structure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2017-01-06 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15805981#comment-15805981
 ] 

Xiaoyu Yao commented on HDFS-10930:
---

Looks good to me. Thanks [~arpitagarwal] for taking care of the branch-2 
patch/commit.

> Refactor: Wrap Datanode IO related operations
> -
>
> Key: HDFS-10930
> URL: https://issues.apache.org/jira/browse/HDFS-10930
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10930-branch-2.00.patch, 
> HDFS-10930-branch-2.001.patch, HDFS-10930-branch-2.002.patch, 
> HDFS-10930.01.patch, HDFS-10930.02.patch, HDFS-10930.03.patch, 
> HDFS-10930.04.patch, HDFS-10930.05.patch, HDFS-10930.06.patch, 
> HDFS-10930.07.patch, HDFS-10930.08.patch, HDFS-10930.09.patch, 
> HDFS-10930.10.patch, HDFS-10930.11.patch, HDFS-10930.barnch-2.00.patch
>
>
> Datanode IO (disk/network) related operations and instrumentation are 
> currently spread across many classes such as DataNode.java, BlockReceiver.java, 
> BlockSender.java, FsDatasetImpl.java, FsVolumeImpl.java, 
> DirectoryScanner.java, BlockScanner.java, FsDatasetAsyncDiskService.java, 
> LocalReplica.java, LocalReplicaPipeline.java, Storage.java, etc. 
> This ticket is opened to consolidate IO related operations for easier 
> instrumentation, metrics collection, logging and troubleshooting. 
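
As a rough sketch of the consolidation idea only (names below are illustrative; 
the class referenced elsewhere in this thread is {{FileIoProvider}}):

{code}
import java.io.FileDescriptor;
import java.io.FileOutputStream;

// Hedged sketch: funnel disk IO through one wrapper so timing, logging and
// fault handling live in a single place instead of being scattered across
// BlockReceiver, BlockSender, FsDatasetImpl, etc. Names are illustrative.
class FileIoProviderSketch {
  FileOutputStream getFileOutputStream(FileDescriptor fd) {
    long begin = System.nanoTime();
    FileOutputStream out = new FileOutputStream(fd);
    onIoDone("open", System.nanoTime() - begin);
    return out;
  }

  void onIoDone(String op, long elapsedNanos) {
    // single hook point for metrics, logging, profiling, fault injection
  }
}
{code}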



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11299) Support multiple Datanode File IO hooks

2017-01-06 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11299:
--
Status: Patch Available  (was: Open)

> Support multiple Datanode File IO hooks
> ---
>
> Key: HDFS-11299
> URL: https://issues.apache.org/jira/browse/HDFS-11299
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11299.000.patch
>
>
> HDFS-10958 introduces instrumentation hooks around DataNode disk IO and 
> HDFS-10959 adds support for profiling hooks to expose latency statistics. 
> Instead of choosing only one hook via config parameters, we want to add two 
> separate hooks - one for profiling and one for fault injection. The fault 
> injection hook will be useful for testing purposes. 
> This jira only introduces support for the fault injection hook; the 
> implementation for it will come later.
> Also, the Default and Counting FileIOEvents would no longer be needed, since we 
> can control enabling the profiling and fault injection hooks using config 
> parameters.
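
Purely illustratively (the hook interface and the idea of per-hook enablement 
below are hypothetical stand-ins, not taken from the patch), the two-hook 
arrangement could be wired up roughly like this:

{code}
// Hedged sketch: dispatch each file IO event to an optional profiling hook and
// an optional fault-injection hook, each enabled independently by config,
// rather than choosing a single hook implementation.
class FileIoEventsDispatcherSketch {
  interface FileIoHook {          // stand-in for the real hook interface
    void beforeFileIo(String op);
    void afterFileIo(String op, long elapsedNanos);
  }

  private final FileIoHook profilingHook;       // null if disabled by config
  private final FileIoHook faultInjectionHook;  // null if disabled by config

  FileIoEventsDispatcherSketch(FileIoHook profiling, FileIoHook faultInjection) {
    this.profilingHook = profiling;
    this.faultInjectionHook = faultInjection;
  }

  void beforeFileIo(String op) {
    if (faultInjectionHook != null) {
      faultInjectionHook.beforeFileIo(op);  // may throw to simulate a failure
    }
    if (profilingHook != null) {
      profilingHook.beforeFileIo(op);
    }
  }
}
{code}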



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6310) PBImageXmlWriter should output information about Delegation Tokens

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15805969#comment-15805969
 ] 

Hadoop QA commented on HDFS-6310:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-6310 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-6310 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12642632/HDFS-6310.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18075/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> PBImageXmlWriter should output information about Delegation Tokens
> --
>
> Key: HDFS-6310
> URL: https://issues.apache.org/jira/browse/HDFS-6310
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.4.0
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: HDFS-6310.patch
>
>
> Separated from HDFS-6293.
> The 2.4.0 pb-fsimage does contain tokens, but the OfflineImageViewer with the 
> -XML option does not show any tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8068) Do not retry rpc calls If the proxy contains unresolved address

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15805970#comment-15805970
 ] 

Hadoop QA commented on HDFS-8068:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HDFS-8068 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-8068 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12723657/HDFS-8068.v2.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18077/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Do not retry rpc calls If the proxy contains unresolved address
> ---
>
> Key: HDFS-8068
> URL: https://issues.apache.org/jira/browse/HDFS-8068
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-8068.v1.patch, HDFS-8068.v2.patch
>
>
> When the InetSocketAddress object happens to be unresolvable (e.g. due to a 
> transient DNS issue), the rpc proxy object will not be usable since the 
> client will throw UnknownHostException when a Connection object is created. 
> If FailoverOnNetworkExceptionRetry is used, as in the standard HA failover 
> proxy, the call will be retried, but it will never recover.  Instead, the 
> validity of the address must be checked on proxy creation, throwing if it is 
> invalid.
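
A minimal sketch of the proposed check (the surrounding proxy-creation code is 
assumed; only the fail-fast test is shown):

{code}
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

// Hedged sketch: fail fast at proxy creation time instead of retrying forever
// on an address that was never resolved.
class ProxyAddressCheckSketch {
  static void checkResolved(InetSocketAddress addr) throws UnknownHostException {
    if (addr == null || addr.isUnresolved()) {
      throw new UnknownHostException(
          "Cannot create proxy, unresolved address: " + addr);
    }
  }
}
{code}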



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8697) Refactor DecommissionManager: more generic method names and misc cleanup

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15805968#comment-15805968
 ] 

Hadoop QA commented on HDFS-8697:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HDFS-8697 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-8697 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12745524/HDFS-8697.01.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18074/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Refactor DecommissionManager: more generic method names and misc cleanup
> 
>
> Key: HDFS-8697
> URL: https://issues.apache.org/jira/browse/HDFS-8697
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-8697.00.patch, HDFS-8697.01.patch
>
>
> This JIRA merges the changes in {{DecommissionManager}} from the HDFS-7285 
> branch, including changing a few method names to be more generic 
> ({{replicated}} -> {{stored}}), and some cleanups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11299) Support multiple Datanode File IO hooks

2017-01-06 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11299:
--
Attachment: HDFS-11299.000.patch

> Support multiple Datanode File IO hooks
> ---
>
> Key: HDFS-11299
> URL: https://issues.apache.org/jira/browse/HDFS-11299
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11299.000.patch
>
>
> HDFS-10958 introduces instrumentation hooks around DataNode disk IO and 
> HDFS-10959 adds support for profiling hooks to expose latency statistics. 
> Instead of choosing only one hook using Config parameters, we want to add two 
> separate hooks - one for profiling and one for fault injection. The fault 
> injection hook will be useful for testing purposes. 
> This jira only introduces support for fault injection hook. The 
> implementation for that will come later on.
> Also, now Default and Counting FileIOEvents would not be needed as we can 
> control enabling the profiling and fault injection hooks using config 
> parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8088) Reduce the number of HTrace spans generated by HDFS reads

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15805967#comment-15805967
 ] 

Hadoop QA commented on HDFS-8088:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-8088 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-8088 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12724103/HDFS-8088.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18073/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reduce the number of HTrace spans generated by HDFS reads
> -
>
> Key: HDFS-8088
> URL: https://issues.apache.org/jira/browse/HDFS-8088
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Attachments: HDFS-8088.001.patch
>
>
> HDFS generates too many trace spans on read right now.  Every call to read() 
> we make generates its own span, which is not very practical for things like 
> HBase or Accumulo that do many such reads as part of a single operation.  
> Instead of tracing every call to read(), we should only trace the cases where 
> we refill the buffer inside a BlockReader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6813) WebHdfsFileSystem#OffsetUrlInputStream should implement PositionedReadable with thread-safety.

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15805965#comment-15805965
 ] 

Hadoop QA commented on HDFS-6813:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-6813 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-6813 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12659533/HDFS-6813.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18072/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> WebHdfsFileSystem#OffsetUrlInputStream should implement PositionedReadable 
> with thread-safety.
> ---
>
> Key: HDFS-6813
> URL: https://issues.apache.org/jira/browse/HDFS-6813
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HDFS-6813.001.patch
>
>
> The {{PositionedReadable}} definition requires that implementations of its 
> interface be thread-safe.
> OffsetUrlInputStream (the WebHdfsFileSystem input stream) doesn't implement 
> these methods in a thread-safe way; this JIRA is to fix that.
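
For illustration, one common way to make the positioned-read path thread-safe is 
to serialize the seek/read/seek-back sequence, assuming the enclosing stream 
already provides {{getPos()}}, {{seek()}} and {{read()}}; this is a generic 
sketch, not the attached patch.

{code}
// Hedged sketch: synchronize so concurrent positioned reads cannot interleave
// and corrupt the underlying stream position.
public synchronized int read(long position, byte[] buffer, int offset, int length)
    throws IOException {
  long oldPos = getPos();
  try {
    seek(position);
    return read(buffer, offset, length);
  } finally {
    seek(oldPos);
  }
}
{code}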



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-5875) Add iterator support to INodesInPath

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15805960#comment-15805960
 ] 

Hadoop QA commented on HDFS-5875:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-5875 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-5875 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12626789/HDFS-5875.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18069/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add iterator support to INodesInPath
> 
>
> Key: HDFS-5875
> URL: https://issues.apache.org/jira/browse/HDFS-5875
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-5875.patch
>
>
> "Resolve as you go" inode iteration will help with the implementation of 
> alternative locking schemes.  It will also be the precursor for resolving 
> paths once and only once per operation, as opposed to ~3 times/call.
> This is an incremental and compatible change for IIP that does not break 
> existing callers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9884) Use doxia macro to generate in-page TOC of HDFS site documentation

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15805963#comment-15805963
 ] 

Hadoop QA commented on HDFS-9884:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-9884 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-9884 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12812134/HDFS-9884.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18071/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use doxia macro to generate in-page TOC of HDFS site documentation
> --
>
> Key: HDFS-9884
> URL: https://issues.apache.org/jira/browse/HDFS-9884
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Attachments: HDFS-9884.001.patch, HDFS-9884.002.patch
>
>
> Since maven-site-plugin 3.5 was released, we can use toc macro in Markdown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6939) Support path-based filtering of inotify events

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15805961#comment-15805961
 ] 

Hadoop QA commented on HDFS-6939:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-6939 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-6939 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12751697/HDFS-6939-001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18070/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support path-based filtering of inotify events
> --
>
> Key: HDFS-6939
> URL: https://issues.apache.org/jira/browse/HDFS-6939
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client, namenode, qjm
>Reporter: James Thomas
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-6939-001.patch
>
>
> Users should be able to specify that they only want events involving 
> particular paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6308) TestDistributedFileSystem#testGetFileBlockStorageLocationsError is flaky

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15805956#comment-15805956
 ] 

Hadoop QA commented on HDFS-6308:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-6308 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-6308 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12642622/HDFS-6308.v1.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18067/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestDistributedFileSystem#testGetFileBlockStorageLocationsError is flaky
> 
>
> Key: HDFS-6308
> URL: https://issues.apache.org/jira/browse/HDFS-6308
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HDFS-6308.v1.patch
>
>
> Found this on pre-commit build of HDFS-6261
> {code}
> java.lang.AssertionError: Expected one valid and one invalid volume
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testGetFileBlockStorageLocationsError(TestDistributedFileSystem.java:837)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6782) Improve FS editlog logSync

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15805959#comment-15805959
 ] 

Hadoop QA commented on HDFS-6782:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-6782 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-6782 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12659525/HDFS-6782.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18068/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve FS editlog logSync
> --
>
> Key: HDFS-6782
> URL: https://issues.apache.org/jira/browse/HDFS-6782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.4.1
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HDFS-6782.001.patch, HDFS-6782.002.patch
>
>
> The NN uses a double buffer (bufCurrent, bufReady) for log sync: bufCurrent 
> buffers newly arriving edit ops and bufReady is the buffer being flushed. 
> This is efficient. When a flush is ongoing and bufCurrent is full, the NN 
> goes into a force log sync and all new ops are blocked (since the force log 
> sync is protected by the FSNamesystem write lock). After the flush finishes, 
> the new ops are still blocked, even though at this point bufCurrent is 
> actually free and ops could go ahead and write to it. The following diagrams 
> show the detail. This JIRA is for this improvement. Thanks [~umamaheswararao] 
> for confirming this issue.
> {code}
> edit1(txid1) -- write to bufCurrent -- logSync -- (swap buffer) flushing ---
> edit2(txid2) -- write to bufCurrent -- logSync -- waiting ---
> edit3(txid3) -- write to bufCurrent -- logSync -- waiting ---
> edit4(txid4) -- write to bufCurrent -- logSync -- waiting ---
> edit5(txid5) -- write to bufCurrent --full-- force sync -- waiting ---
> edit6(txid6) -- blocked
> ...
> editn(txidn) -- blocked
> {code}
> After the flush, it becomes
> {code}
> edit1(txid1) -- write to bufCurrent -- logSync -- finished ---
> edit2(txid2) -- write to bufCurrent -- logSync -- flushing ---
> edit3(txid3) -- write to bufCurrent -- logSync -- waiting ---
> edit4(txid4) -- write to bufCurrent -- logSync -- waiting ---
> edit5(txid5) -- write to bufCurrent --full-- force sync -- waiting ---
> edit6(txid6) -- blocked
> ...
> editn(txidn) -- blocked
> {code}
> After edit1 finishes, bufCurrent is free, and the thread that flushes txid2 
> will also flush txid3-5, so we can return from the force sync of edit5 and 
> the FSNamesystem write lock will be released (don't worry about edit5's op 
> returning too early, since a normal logSync follows the force logSync and it 
> will wait for the sync to finish). This is the idea of this JIRA.
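A minimal, self-contained sketch of the double-buffer idea described above (hypothetical class, field and method names, not the actual FSEditLog code): a writer appends to {{bufCurrent}}, the syncer swaps the buffers under the lock and flushes outside of it, and the point of this JIRA is that a writer blocked on a full {{bufCurrent}} only needs to wait for the swap, not for the whole flush.

{code:java}
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical illustration of the double-buffer edit log described above.
// Not the real FSEditLog: names and capacities are made up for clarity.
public class DoubleBufferSketch {
  private static final int CAPACITY = 4;                   // tiny, to make "full" easy to reach
  private final ReentrantLock lock = new ReentrantLock();
  private StringBuilder bufCurrent = new StringBuilder();  // new edits go here
  private StringBuilder bufReady = new StringBuilder();    // buffer being flushed
  private int pending = 0;

  // Writer side: append an edit; waits only while bufCurrent is full.
  public void logEdit(String edit) throws InterruptedException {
    lock.lock();
    try {
      while (pending >= CAPACITY) {
        // In the current NN code the writer would trigger a force sync here and keep
        // holding the FSNamesystem write lock until the whole flush finishes. The
        // improvement in this JIRA: return as soon as the buffers are swapped, because
        // bufCurrent is free again even though flushing is still ongoing.
        lock.unlock();
        Thread.sleep(1);
        lock.lock();
      }
      bufCurrent.append(edit).append('\n');
      pending++;
    } finally {
      lock.unlock();
    }
  }

  // Syncer side: swap the buffers under the lock, flush outside of it.
  public void logSync() {
    StringBuilder toFlush;
    lock.lock();
    try {
      toFlush = bufCurrent;
      bufCurrent = bufReady;   // swap: writers can fill the fresh buffer immediately
      bufReady = toFlush;
      pending = 0;
    } finally {
      lock.unlock();
    }
    // The flush happens outside the lock, so writers are not blocked by the I/O.
    System.out.print(toFlush);
    toFlush.setLength(0);
  }

  public static void main(String[] args) throws InterruptedException {
    DoubleBufferSketch log = new DoubleBufferSketch();
    for (int txid = 1; txid <= 8; txid++) {
      log.logEdit("edit(txid" + txid + ")");
      if (txid % 4 == 0) {
        log.logSync();
      }
    }
  }
}
{code}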



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9558) Replication requests from datanode always blames the source datanode in case of Checksum Exception.

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9558:
-
Target Version/s:   (was: 2.8.0)

> Replication requests from datanode always blames the source datanode in case 
> of Checksum Exception.
> ---
>
> Key: HDFS-9558
> URL: https://issues.apache.org/jira/browse/HDFS-9558
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Rushabh S Shah
>
> Replication requests from a datanode (in case of a rack failure event) always 
> blame the source datanode if any of the downstream nodes encounters a 
> ChecksumException.
> We saw this case recently in our cluster.
> We lost 7 nodes in a rack.
> There was only one replica of the block (say on dnA).
> The namenode asks dnA to replicate to dnB and dnC.
> {noformat}
> 2015-12-13 21:09:41,798 [DataNode:   heartbeating to NN:8020] INFO 
> datanode.DataNode: DatanodeRegistration(dnA, 
> datanodeUuid=bc1f183d-b74a-49c9-ab1a-d1d496ab77e9, infoPort=1006, 
> infoSecurePort=0, ipcPort=8020, 
> storageInfo=lv=-56;cid=CID-e7f736ac-158e-446e-9091-7e66f3cddf3c;nsid=358250775;c=1428471998571)
>  Starting thread to transfer 
> BP-1620678153--1351096255769:blk_3065507810_1107476861617 to dnB:1004 
> dnC:1004 
> {noformat}
> All the packets going out from dnB's interface were getting corrupted.
> So dnC received a corrupt block and reported it as a bad block (from dnA) to 
> the namenode.
> Following are the logs from dnC:
> {noformat}
> 2015-12-13 21:09:43,444 [DataXceiver for client  at /dnB:34879 [Receiving 
> block BP-1620678153--1351096255769:blk_3065507810_1107476861617]] WARN 
> datanode.DataNode: Checksum error in block 
> BP-1620678153--1351096255769:blk_3065507810_1107476861617 from /dnB:34879
> org.apache.hadoop.fs.ChecksumException: Checksum error:  at 58368 exp: 
> -1657951272 got: 856104973
> at 
> org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native 
> Method)
> at 
> org.apache.hadoop.util.NativeCrc32.verifyChunkedSumsByteArray(NativeCrc32.java:69)
> at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:347)
> at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:294)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.verifyChunks(BlockReceiver.java:416)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:550)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:853)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:761)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:237)
> at java.lang.Thread.run(Thread.java:745)
> 2015-12-13 21:09:43,445 [DataXceiver for client  at dnB:34879 [Receiving 
> block BP-1620678153--1351096255769:blk_3065507810_1107476861617]] INFO 
> datanode.DataNode: report corrupt 
> BP-1620678153--1351096255769:blk_3065507810_1107476861617 from datanode 
> dnA:1004 to namenode
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9020) Support hflush/hsync in WebHDFS

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9020:
-
Target Version/s:   (was: 2.8.0)

> Support hflush/hsync in WebHDFS
> ---
>
> Key: HDFS-9020
> URL: https://issues.apache.org/jira/browse/HDFS-9020
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Chris Douglas
> Attachments: HDFS-9020-alt.txt
>
>
> In the current implementation, hflush/hsync have no effect on WebHDFS 
> streams, particularly w.r.t. visibility to other clients. This proposes to 
> extend the protocol and implementation to enable this functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11246) FSNameSystem#logAuditEvent should be called outside the read or write locks during operations like getContentSummary

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-11246:
--
Target Version/s:   (was: 2.8.0)

> FSNameSystem#logAuditEvent should be called outside the read or write locks 
> during operations like getContentSummary
> 
>
> Key: HDFS-11246
> URL: https://issues.apache.org/jira/browse/HDFS-11246
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>
> {code}
> readLock();
> boolean success = true;
> ContentSummary cs;
> try {
>   checkOperation(OperationCategory.READ);
>   cs = FSDirStatAndListingOp.getContentSummary(dir, src);
> } catch (AccessControlException ace) {
>   success = false;
>   logAuditEvent(success, operationName, src);
>   throw ace;
> } finally {
>   readUnlock(operationName);
> }
> {code}
> It would be nice to have audit logging outside the lock, especially in 
> scenarios where applications hammer a given operation repeatedly.
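One possible restructuring, shown as a sketch with simplified types and stubbed helpers (not the committed fix): record the outcome inside the lock and emit the audit event only after the lock is released.

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of moving the audit call outside the read lock. Hypothetical stubs,
// loosely modeled on FSNamesystem#getContentSummary; not the actual patch.
public class AuditOutsideLockSketch {
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();

  String getContentSummary(String src) {
    final String operationName = "contentSummary";
    boolean success = true;
    String cs = null;
    fsLock.readLock().lock();
    try {
      cs = computeSummary(src);          // work that genuinely needs the lock
    } catch (SecurityException ace) {
      success = false;
      throw ace;
    } finally {
      fsLock.readLock().unlock();
      // The audit event is emitted after the lock is released, so a slow or
      // synchronous audit appender can no longer hold up other readers/writers.
      logAuditEvent(success, operationName, src);
    }
    return cs;
  }

  private String computeSummary(String src) { return "summary(" + src + ")"; }

  private void logAuditEvent(boolean ok, String op, String src) {
    System.out.println("audit: op=" + op + " src=" + src + " success=" + ok);
  }

  public static void main(String[] args) {
    System.out.println(new AuditOutsideLockSketch().getContentSummary("/tmp"));
  }
}
{code}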



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6661) Use BlockList instead of double linked list i.e BlockInfo triplets

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6661:
-
Target Version/s:   (was: 2.8.0)

> Use BlockList instead of double linked list  i.e BlockInfo triplets 
> 
>
> Key: HDFS-6661
> URL: https://issues.apache.org/jira/browse/HDFS-6661
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 2.4.1
>Reporter: Amir Langer
>Assignee: Amir Langer
> Attachments: 
> 0003-use-BlockList-instead-of-a-double-linked-list-of-blo.patch
>
>
> Replace the triplets data structure with a BlockList instance per 
> DatanodeStorageInfo. This requires keeping only two integers per replica in 
> any BlockInfo object. (One is the index of the DatanodeStorageInfo and the 
> other is the position of the block in that DatanodeStorageInfo's BlockList.)
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10506) OIV's ReverseXML processor cannot reconstruct some snapshot details

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-10506:
--
Target Version/s:   (was: 2.8.0)

> OIV's ReverseXML processor cannot reconstruct some snapshot details
> ---
>
> Key: HDFS-10506
> URL: https://issues.apache.org/jira/browse/HDFS-10506
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Colin P. McCabe
>
> OIV's ReverseXML processor cannot reconstruct some snapshot details.  
> Specifically,  should contain a  and  field, 
> but does not.   should contain a  field.  OIV also 
> needs to be changed to emit these fields into the XML (they are currently 
> missing).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11301) Double wrapping over RandomAccessFile in LocalReplicaInPipeline#createStreams

2017-01-06 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-11301:
-

 Summary: Double wrapping over RandomAccessFile in 
LocalReplicaInPipeline#createStreams
 Key: HDFS-11301
 URL: https://issues.apache.org/jira/browse/HDFS-11301
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru
Priority: Minor


In LocalReplicaInPipeline#createStreams, there is a WrappedFileOutputStream 
created over a WrappedRandomAccessFile. This double layer of instrumentation is 
unnecessary. 
{quote}
  blockOut = fileIoProvider.getFileOutputStream(getVolume(),
  fileIoProvider.getRandomAccessFile(getVolume(), blockFile, "rw")
  .getFD());
{quote}
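For context, the plain-JDK shape of that pattern is shown below (illustrative only; WrappedRandomAccessFile and WrappedFileOutputStream are the instrumented equivalents in the DataNode code). Both objects end up referring to the same open file descriptor, which is why instrumenting both layers is redundant.

{code:java}
import java.io.File;
import java.io.FileDescriptor;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

// Plain-JDK illustration of the pattern quoted above (not the DataNode code):
// a RandomAccessFile is opened only to obtain its FileDescriptor, which is then
// wrapped again in a FileOutputStream. Both objects refer to the same open file.
public class DoubleWrapSketch {
  public static void main(String[] args) throws IOException {
    File blockFile = File.createTempFile("blk_", ".data");
    try (RandomAccessFile raf = new RandomAccessFile(blockFile, "rw")) {
      FileDescriptor fd = raf.getFD();
      // Second wrapper over the very same descriptor.
      try (FileOutputStream blockOut = new FileOutputStream(fd)) {
        blockOut.write("hello".getBytes(StandardCharsets.UTF_8));
      }
    } finally {
      blockFile.delete();
    }
  }
}
{code}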




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11298) Add storage policy info in FileStatus

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15805940#comment-15805940
 ] 

Hadoop QA commented on HDFS-11298:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 36s{color} | {color:orange} root: The patch generated 10 new + 260 unchanged 
- 0 fixed = 270 total (was 260) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
30s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits 
|
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11298 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846064/HDFS-11298.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8c4f48743176 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2977bc6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18051/artifact/patchprocess/diff-checkstyle-root.txt
 |
| whitespace | 

[jira] [Updated] (HDFS-7304) TestFileCreation#testOverwriteOpenForWrite hangs

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-7304:
-
Target Version/s: 2.9.0  (was: 2.8.0, 2.9.0)

> TestFileCreation#testOverwriteOpenForWrite hangs
> 
>
> Key: HDFS-7304
> URL: https://issues.apache.org/jira/browse/HDFS-7304
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Akira Ajisaka
> Attachments: HDFS-7304.patch, HDFS-7304.patch
>
>
> The test case times out. It has been observed in multiple pre-commit builds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10770) Namenode shouldn't change storage info if client reports different storage. (via reportBadBlock)

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-10770:
--
Target Version/s:   (was: 2.8.0)

> Namenode shouldn't change storage info if client reports different storage. 
> (via reportBadBlock)
> 
>
> Key: HDFS-10770
> URL: https://issues.apache.org/jira/browse/HDFS-10770
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Rushabh S Shah
>
> From HDFS-10348 [comment | 
> https://issues.apache.org/jira/browse/HDFS-10348?focusedCommentId=15271522=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15271522].
> This jira is created to discuss whether we should change the storageInfo in 
> the triplets if the client reports a bad block with a storage that is 
> different from what the namenode thinks.
> Here is the relevant code:
> {code:title=DatanodeStorageInfo.java|borderStyle=solid}
> public AddBlockResult addBlock(BlockInfo b, Block reportedBlock) {
> // First check whether the block belongs to a different storage
> // on the same DN.
> AddBlockResult result = AddBlockResult.ADDED;
> DatanodeStorageInfo otherStorage =
> b.findStorageInfo(getDatanodeDescriptor());
> if (otherStorage != null) {
>   if (otherStorage != this) {
> // The block belongs to a different storage. Remove it first.
> otherStorage.removeBlock(b);
> result = AddBlockResult.REPLACED;
>   } else {
> // The block is already associated with this storage.
> return AddBlockResult.ALREADY_EXIST;
>   }
> }
> b.addStorage(this, reportedBlock);
> blocks.add(b);
> return result;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11090) Leave safemode immediately if all blocks have reported in

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-11090:
--
Target Version/s:   (was: 2.8.0)

> Leave safemode immediately if all blocks have reported in
> -
>
> Key: HDFS-11090
> URL: https://issues.apache.org/jira/browse/HDFS-11090
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.3
>Reporter: Andrew Wang
>Assignee: Yiqun Lin
> Attachments: HDFS-11090.001.patch
>
>
> Startup safemode is triggered by two thresholds: % blocks reported in, and 
> min # datanodes. It's extended by an interval (default 30s) until these two 
> thresholds are met.
> Safemode extension is helpful when the cluster has data, and the default % 
> blocks threshold (0.99) is used. It gives DNs a little extra time to report 
> in and thus avoid unnecessary replication work.
> However, we can leave startup safemode early if 100% of blocks have reported 
> in.
> Note that operators sometimes change the % blocks threshold to > 1 to never 
> automatically leave safemode. We should maintain this behavior.
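A sketch of the proposed early-exit condition, with simplified and hypothetical field names (the real logic lives in the NameNode safemode code): leave as soon as every block has reported, but only when the configured threshold is at most 1, so that the "never leave automatically" convention is preserved.

{code:java}
// Simplified sketch of the proposed early exit from startup safemode.
// Field and method names are illustrative, not the actual NameNode code.
public class SafeModeSketch {
  private final double threshold;       // cf. dfs.namenode.safemode.threshold-pct
  private final int datanodeThreshold;  // cf. dfs.namenode.safemode.min.datanodes
  private long blockTotal;
  private long blockSafe;
  private int liveDatanodes;

  SafeModeSketch(double threshold, int datanodeThreshold) {
    this.threshold = threshold;
    this.datanodeThreshold = datanodeThreshold;
  }

  void setCounts(long total, long safe, int liveNodes) {
    this.blockTotal = total;
    this.blockSafe = safe;
    this.liveDatanodes = liveNodes;
  }

  boolean canLeaveSafeModeEarly() {
    if (threshold > 1.0) {
      // Operators set the threshold above 1 to pin the NN in safemode; keep that behavior.
      return false;
    }
    boolean thresholdsMet = blockSafe >= (long) (threshold * blockTotal)
        && liveDatanodes >= datanodeThreshold;
    boolean allBlocksReported = blockSafe >= blockTotal;
    // Normally the extension interval (default 30s) still applies once the thresholds
    // are met; if literally every block has reported there is no replication work to
    // avoid, so the extension can be skipped.
    return thresholdsMet && allBlocksReported;
  }

  public static void main(String[] args) {
    SafeModeSketch sm = new SafeModeSketch(0.99, 0);
    sm.setCounts(1000, 1000, 3);
    System.out.println("leave early: " + sm.canLeaveSafeModeEarly()); // true
  }
}
{code}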



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11260) Slow writer threads are not stopped

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-11260:
--
Target Version/s: 3.0.0-beta1  (was: 2.8.0, 3.0.0-beta1)

> Slow writer threads are not stopped
> ---
>
> Key: HDFS-11260
> URL: https://issues.apache.org/jira/browse/HDFS-11260
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.0
> Environment: CDH5.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> If a DataNode receives a transferred block, it tries to stop the writer to 
> the same block. However, this may not work, and we saw the following error 
> message and stacktrace.
> Fundamentally, the assumption of {{ReplicaInPipeline#stopWriter}} is wrong. 
> It assumes the writer thread must be a DataXceiver thread, which can be 
> interrupted and terminates afterwards. However, an IPC thread may also be 
> the writer thread (via initReplicaRecovery), and it ignores the interrupt 
> and does not terminate.
> {noformat}
> 2016-12-16 19:58:56,167 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Join on writer thread Thread[IPC Server handler 6 on 50020,5,main] timed out
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
> java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
> org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:135)
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:2052)
> 2016-12-16 19:58:56,167 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> IOException in BlockReceiver constructor. Cause is
> 2016-12-16 19:58:56,168 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: 
> sj1dra082.corp.adobe.com:50010:DataXceiver error processing WRITE_BLOCK 
> operation  src: /10.10.0.80:44105 dst: /10.10.0.82:50010
> java.io.IOException: Join on writer thread Thread[IPC Server handler 6 on 
> 50020,5,main] timed out
> at 
> org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline.stopWriter(ReplicaInPipeline.java:212)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:1579)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:195)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:669)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:246)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> There is also a logic error in FsDatasetImpl#createTemporary: if the code in 
> the synchronized block executes for more than 60 seconds (in theory), it can 
> throw an exception without ever trying to stop the existing slow writer.
> We saw an FsDatasetImpl#createTemporary call fail after nearly 10 minutes, 
> and it's unclear why yet. My understanding is that the code intends to stop 
> slow writers after 1 minute by default. Some code rewrite is probably needed 
> to get the logic right.
> {noformat}
> 2016-12-16 23:12:24,636 WARN 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Unable 
> to stop existing writer for block 
> BP-1527842723-10.0.0.180-1367984731269:blk_4313782210_1103780331023 after 
> 568320 miniseconds.
> {noformat}
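For reference, the interrupt-and-join pattern the report describes looks roughly like the following sketch (hypothetical code, not ReplicaInPipeline itself). It only works when the writer thread actually exits on interrupt, which a DataXceiver thread does but an IPC handler thread does not.

{code:java}
import java.io.IOException;

// Sketch of the interrupt-and-join pattern used to stop a writer thread.
// The join can only succeed if the target thread terminates on interrupt;
// an IPC handler thread that swallows the interrupt will time out here.
public class StopWriterSketch {
  static void stopWriter(Thread writer, long stopTimeoutMillis) throws IOException {
    if (writer == null || writer == Thread.currentThread() || !writer.isAlive()) {
      return;
    }
    writer.interrupt();
    try {
      writer.join(stopTimeoutMillis);
      if (writer.isAlive()) {
        throw new IOException("Join on writer thread " + writer + " timed out");
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new IOException("Waiting for writer thread was interrupted", e);
    }
  }

  public static void main(String[] args) throws Exception {
    // A well-behaved writer: it exits promptly when interrupted, so stopWriter succeeds.
    Thread goodWriter = new Thread(() -> {
      try {
        Thread.sleep(60_000);
      } catch (InterruptedException ignored) {
        // terminate on interrupt, as a DataXceiver thread would
      }
    });
    goodWriter.start();
    stopWriter(goodWriter, 1000);
    System.out.println("writer stopped: " + !goodWriter.isAlive());
  }
}
{code}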



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6122) Rebalance cached replicas between datanodes

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6122:
-
Target Version/s:   (was: 2.8.0)

> Rebalance cached replicas between datanodes
> ---
>
> Key: HDFS-6122
> URL: https://issues.apache.org/jira/browse/HDFS-6122
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching
>Affects Versions: 2.3.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>
> It'd be nice if the NameNode was able to rebalance cache usage among 
> datanodes. This would help avoid situations where the only three DNs with a 
> replica are full and there is still cache space on the rest of the cluster. 
> It'll also probably help for heterogeneous node sizes and when adding new 
> nodes to the cluster or doing a rolling restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10406) Test failure on trunk: TestReconstructStripedBlocks

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-10406:
--
Target Version/s:   (was: 2.8.0)

> Test failure on trunk: TestReconstructStripedBlocks
> ---
>
> Key: HDFS-10406
> URL: https://issues.apache.org/jira/browse/HDFS-10406
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Xiaobing Zhou
>
> It's been noticed there are some test failures: TestEditLog and 
> TestReconstructStripedBlocks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6708) StorageType should be encoded in the block token

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6708:
-
Target Version/s:   (was: 2.8.0)

> StorageType should be encoded in the block token
> 
>
> Key: HDFS-6708
> URL: https://issues.apache.org/jira/browse/HDFS-6708
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 2.4.1
>Reporter: Arpit Agarwal
>Assignee: Pieter Reuse
>
> HDFS-6702 is adding support for file creation based on StorageType.
> The block token is used as a tamper-proof channel for communicating block 
> parameters from the NN to the DN during block creation. The StorageType 
> should be included in this block token.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9499) Fix typos in DFSAdmin.java

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-9499:
-
Target Version/s:   (was: 2.8.0)

> Fix typos in DFSAdmin.java
> --
>
> Key: HDFS-9499
> URL: https://issues.apache.org/jira/browse/HDFS-9499
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Daniel Green
>
> There are multiple instances of 'snapshot' spelled as 'snaphot' in 
> DFSAdmin.java and TestSnapshotCommands.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6212) Deprecate the BackupNode and CheckpointNode from branch-2

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6212:
-
Target Version/s:   (was: 2.8.0)

> Deprecate the BackupNode and CheckpointNode from branch-2
> -
>
> Key: HDFS-6212
> URL: https://issues.apache.org/jira/browse/HDFS-6212
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.3.0
>Reporter: Jing Zhao
>
> As per discussion in HDFS-4114, this jira tries to deprecate BackupNode from 
> branch-2 and change the hadoop start/stop scripts to print deprecation 
> warning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10429) DataStreamer interrupted warning always appears when using CLI upload file

2017-01-06 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15805935#comment-15805935
 ] 

Wei-Chiu Chuang commented on HDFS-10429:


I have not been able to make progress on this.

But for the record, I suspect this bug could result in resource leakage 
(threads and sockets, for example). That said, the leakage shouldn't be a big 
issue: the threads are terminated anyway once the CLI command process exits.

> DataStreamer interrupted warning  always appears when using CLI upload file
> ---
>
> Key: HDFS-10429
> URL: https://issues.apache.org/jira/browse/HDFS-10429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Zhiyuan Yang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HDFS-10429.1.patch, HDFS-10429.2.patch, 
> HDFS-10429.3.patch
>
>
> Every time I use 'hdfs dfs -put' upload file, this warning is printed:
> {code:java}
> 16/05/18 20:57:56 WARN hdfs.DataStreamer: Caught exception
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Thread.join(Thread.java:1245)
>   at java.lang.Thread.join(Thread.java:1319)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:871)
>   at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:519)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:696)
> {code}
> The reason is this: originally, DataStreamer::closeResponder always prints a 
> warning about InterruptedException; since HDFS-9812, 
> DFSOutputStream::closeImpl always forces threads to close, which causes an 
> InterruptedException.
> A simple fix is to log at debug level instead of warning level.
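The proposed change amounts to something like the following sketch (simplified; java.util.logging stands in for the Hadoop logger, and the method shape is illustrative rather than the actual DataStreamer code):

{code:java}
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of demoting the close-time InterruptedException from WARN to DEBUG.
// Uses java.util.logging so the snippet is self-contained; the real code uses
// the logger in DataStreamer.
public class CloseResponderSketch {
  private static final Logger LOG = Logger.getLogger(CloseResponderSketch.class.getName());

  static void closeResponder(Thread responder) {
    if (responder == null) {
      return;
    }
    responder.interrupt();
    try {
      responder.join(3000);
    } catch (InterruptedException e) {
      // Expected during DFSOutputStream#closeImpl since HDFS-9812 forces the
      // threads down, so log at a quieter level instead of warning every time.
      LOG.log(Level.FINE, "Caught exception while closing responder", e);
      Thread.currentThread().interrupt();
    }
  }

  public static void main(String[] args) {
    Thread responder = new Thread(() -> {
      try { Thread.sleep(10_000); } catch (InterruptedException ignored) { }
    });
    responder.start();
    closeResponder(responder);
    System.out.println("responder interrupted, no WARN emitted");
  }
}
{code}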



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6220) Webhdfs should differentiate remote exceptions

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6220:
-
Target Version/s:   (was: 2.8.0)

> Webhdfs should differentiate remote exceptions
> --
>
> Key: HDFS-6220
> URL: https://issues.apache.org/jira/browse/HDFS-6220
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>
> Webhdfs's validateResponse should use a distinct exception for JSON-decoded 
> exceptions rather than the one it uses for servlet exceptions. E.g. a 
> servlet-generated 404 with a JSON payload should be distinguishable from an 
> HTTP proxy or Jetty generated 404.
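To illustrate the distinction, here is a sketch with hypothetical exception types and a deliberately naive JSON check (the real change would live in WebHdfsFileSystem#validateResponse and would decode the JSON payload properly):

{code:java}
import java.io.IOException;

// Sketch of distinguishing a servlet-generated JSON error from a proxy/Jetty error.
// Hypothetical exception types; the content check is a plain string match so the
// example has no dependencies.
public class WebHdfsErrorSketch {
  static class JsonRemoteException extends IOException {
    JsonRemoteException(String msg) { super(msg); }
  }
  static class NonJsonHttpException extends IOException {
    NonJsonHttpException(String msg) { super(msg); }
  }

  static void validateResponse(int status, String contentType, String body)
      throws IOException {
    if (status < 400) {
      return; // success
    }
    boolean jsonError = contentType != null
        && contentType.startsWith("application/json")
        && body.contains("\"RemoteException\"");
    if (jsonError) {
      // Error produced by the server-side servlet: carries a decoded remote exception.
      throw new JsonRemoteException("server-side error: " + body);
    }
    // Anything else (HTML 404 from a proxy or Jetty, for example) surfaces differently.
    throw new NonJsonHttpException("HTTP " + status + " from proxy/servlet container");
  }

  public static void main(String[] args) {
    try {
      validateResponse(404, "text/html", "<html>Not Found</html>");
    } catch (IOException e) {
      System.out.println(e.getClass().getSimpleName() + ": " + e.getMessage());
    }
  }
}
{code}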



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8982) Consolidate getFileReplication and getPreferredBlockReplication in INodeFile

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8982:
-
Target Version/s:   (was: 2.8.0)

> Consolidate getFileReplication and getPreferredBlockReplication in INodeFile
> 
>
> Key: HDFS-8982
> URL: https://issues.apache.org/jira/browse/HDFS-8982
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>
> Currently {{INodeFile}} provides both {{getFileReplication}} and 
> {{getPreferredBlockReplication}} interfaces. At the very least they should be 
> renamed (e.g. {{getCurrentFileReplication}} and 
> {{getMaxConfiguredFileReplication}}), with clearer Javadoc.
> I also suspect we are not using them correctly in all places right now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6884) Include the hostname in HTTPFS log filenames

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6884:
-
Target Version/s:   (was: 2.8.0)

> Include the hostname in HTTPFS log filenames
> 
>
> Key: HDFS-6884
> URL: https://issues.apache.org/jira/browse/HDFS-6884
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.5.0
>Reporter: Andrew Wang
>Assignee: Alejandro Abdelnur
>
> It'd be good to include the hostname in the httpfs log filenames. Right now 
> we have httpfs.log and httpfs-audit.log; it'd be nice to have e.g. 
> "httpfs-${hostname}.log".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6891) Follow-on work for transparent data at rest encryption

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6891:
-
Target Version/s:   (was: 2.8.0)

> Follow-on work for transparent data at rest encryption
> --
>
> Key: HDFS-6891
> URL: https://issues.apache.org/jira/browse/HDFS-6891
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.5.0
>Reporter: Andrew Wang
>Assignee: Charles Lamb
>
> This is an umbrella JIRA to track remaining subtasks from HDFS-6134.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6935) RawLocalFileSystem::supportsSymLinks is confusing

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6935:
-
Target Version/s:   (was: 2.8.0)

> RawLocalFileSystem::supportsSymLinks is confusing 
> --
>
> Key: HDFS-6935
> URL: https://issues.apache.org/jira/browse/HDFS-6935
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: symlinks
>Affects Versions: 2.5.0, 2.6.0
>Reporter: Gopal V
>
> {code}
> ...
>   @Override
>   public boolean supportsSymlinks() {
> return true;
>   }
>   @SuppressWarnings("deprecation")
>   @Override
>   public void createSymlink(Path target, Path link, boolean createParent)
>   throws IOException {
> if (!FileSystem.areSymlinksEnabled()) {
>   throw new UnsupportedOperationException("Symlinks not supported");
> }
> ...
> {code}
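One way to make the two methods consistent, sketched below with a simple static flag standing in for FileSystem.areSymlinksEnabled() (not a committed patch), is to have supportsSymlinks() report the same flag that createSymlink() enforces:

{code:java}
import java.io.IOException;

// Sketch: make supportsSymlinks() agree with the flag that createSymlink() checks.
// The static flag models FileSystem.areSymlinksEnabled(); names are illustrative.
public class SymlinkConsistencySketch {
  private static volatile boolean symlinksEnabled = false;

  public boolean supportsSymlinks() {
    // Report the actual capability instead of a hard-coded "true".
    return symlinksEnabled;
  }

  public void createSymlink(String target, String link, boolean createParent)
      throws IOException {
    if (!supportsSymlinks()) {
      throw new UnsupportedOperationException("Symlinks not supported");
    }
    System.out.println("ln -s " + target + " " + link);
  }

  public static void main(String[] args) throws IOException {
    SymlinkConsistencySketch fs = new SymlinkConsistencySketch();
    System.out.println("supported: " + fs.supportsSymlinks()); // false until enabled
    symlinksEnabled = true;
    fs.createSymlink("/data/real", "/data/link", false);
  }
}
{code}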



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6691) The message on NN UI can be confusing during a rolling upgrade

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6691:
-
Target Version/s:   (was: 2.8.0)

> The message on NN UI can be confusing during a rolling upgrade 
> ---
>
> Key: HDFS-6691
> URL: https://issues.apache.org/jira/browse/HDFS-6691
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Mit Desai
>Assignee: Mit Desai
> Attachments: ha1.png, ha2.png
>
>
> On ANN, it says rollback image was created. On SBN, it says otherwise.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6846) NetworkTopology#sortByDistance should give nodes higher priority, which cache the block.

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6846:
-
Target Version/s:   (was: 2.8.0)

> NetworkTopology#sortByDistance should give nodes higher priority, which cache 
> the block.
> 
>
> Key: HDFS-6846
> URL: https://issues.apache.org/jira/browse/HDFS-6846
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Yi Liu
>Assignee: Yi Liu
>
> Currently there are 3 weights:
> * local
> * same rack
> * off rack
> But if some nodes cache the block, then it is faster if the client reads the 
> block from those nodes. So we should have some more weights, as follows (a 
> small weight-function sketch is given after this list):
> * local
> * cached & same rack
> * same rack
> * cached & off rack
> * off rack
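A small sketch of the resulting weight function (hypothetical helper; the real change would go into NetworkTopology#sortByDistance), where a lower weight means a higher read priority:

{code:java}
import java.util.Arrays;
import java.util.Comparator;

// Sketch of the extended weight ordering described above; lower weight = read first.
public class CachedAwareWeightSketch {
  static final int LOCAL = 0;
  static final int CACHED_SAME_RACK = 1;
  static final int SAME_RACK = 2;
  static final int CACHED_OFF_RACK = 3;
  static final int OFF_RACK = 4;

  static int weight(boolean local, boolean sameRack, boolean cachesBlock) {
    if (local) {
      return LOCAL;
    }
    if (sameRack) {
      return cachesBlock ? CACHED_SAME_RACK : SAME_RACK;
    }
    return cachesBlock ? CACHED_OFF_RACK : OFF_RACK;
  }

  public static void main(String[] args) {
    // Nodes described as {local, sameRack, cachesBlock}.
    boolean[][] nodes = { {false, false, false}, {false, true, false},
                          {false, false, true},  {true, false, false} };
    Integer[] order = {0, 1, 2, 3};
    Arrays.sort(order, Comparator.comparingInt(
        i -> weight(nodes[i][0], nodes[i][1], nodes[i][2])));
    // Prints [3, 1, 2, 0]: local, same rack, cached & off rack, off rack.
    System.out.println(Arrays.toString(order));
  }
}
{code}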



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6919) Enforce a single limit for RAM disk usage and replicas cached via locking

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6919:
-
Target Version/s:   (was: 2.8.0)

> Enforce a single limit for RAM disk usage and replicas cached via locking
> -
>
> Key: HDFS-6919
> URL: https://issues.apache.org/jira/browse/HDFS-6919
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> The DataNode can have a single limit for memory usage which applies to both 
> replicas cached via CCM and replicas on RAM disk.
> See comments 
> [1|https://issues.apache.org/jira/browse/HDFS-6581?focusedCommentId=14106025=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14106025],
>  
> [2|https://issues.apache.org/jira/browse/HDFS-6581?focusedCommentId=14106245=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14106245]
>  and 
> [3|https://issues.apache.org/jira/browse/HDFS-6581?focusedCommentId=14106575=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14106575]
>  for discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6764) DN heartbeats may become clumped together

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6764:
-
Target Version/s:   (was: 2.8.0)

> DN heartbeats may become clumped together
> -
>
> Key: HDFS-6764
> URL: https://issues.apache.org/jira/browse/HDFS-6764
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Daryn Sharp
> Attachments: Screen Shot 2014-07-28 at 11.12.06 AM.png
>
>
> DNs send heartbeats on a fixed schedule based on the last time a heartbeat 
> was sent.  If the NN takes longer to respond than the heartbeat interval then 
> DNs do not sleep until the next interval.  Instead, another heartbeat is 
> immediately sent and all DNs begin heartbeating on the same schedule.
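To make the clumping mechanism concrete, here is a toy version of the scheduling computation (hypothetical names, not BPServiceActor itself): when the NN response takes longer than the interval, the computed sleep drops to zero, so every DN waiting on the same slow NN fires its next heartbeat immediately and they all end up on the same schedule.

{code:java}
// Toy illustration of how heartbeats clump: the next heartbeat is scheduled
// relative to when the previous one was *sent*, so a slow NameNode response
// leaves no time to sleep and the DN heartbeats again immediately.
public class HeartbeatScheduleSketch {
  static long sleepBeforeNextHeartbeat(long heartbeatIntervalMs,
                                       long lastHeartbeatSentAtMs,
                                       long nowMs) {
    long wait = heartbeatIntervalMs - (nowMs - lastHeartbeatSentAtMs);
    return Math.max(0, wait);
  }

  public static void main(String[] args) {
    long interval = 3000;
    long sentAt = 0;
    // Fast NN: the response arrives after 200 ms, the DN sleeps the remaining 2800 ms.
    System.out.println(sleepBeforeNextHeartbeat(interval, sentAt, 200));
    // Slow NN: the response arrives after 5000 ms, the sleep is clamped to 0 and the
    // DN heartbeats again right away; every DN stuck behind the same slow NN does the
    // same, so their schedules collapse onto one another.
    System.out.println(sleepBeforeNextHeartbeat(interval, sentAt, 5000));
  }
}
{code}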



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6333) REST API for fetching directory listing file from NN

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6333:
-
Target Version/s:   (was: 2.8.0)

> REST API for fetching directory listing file from NN
> 
>
> Key: HDFS-6333
> URL: https://issues.apache.org/jira/browse/HDFS-6333
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.4.0
>Reporter: Andrew Wang
>
> It'd be convenient if the NameNode supported fetching the directory listing 
> via HTTP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6128) Implement libhdfs bindings for HDFS ACL APIs.

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6128:
-
Target Version/s:   (was: 2.8.0)

> Implement libhdfs bindings for HDFS ACL APIs.
> -
>
> Key: HDFS-6128
> URL: https://issues.apache.org/jira/browse/HDFS-6128
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Affects Versions: 2.4.0
>Reporter: Chris Nauroth
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10930) Refactor: Wrap Datanode IO related operations

2017-01-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15805554#comment-15805554
 ] 

Arpit Agarwal edited comment on HDFS-10930 at 1/6/17 10:06 PM:
---

The local test-patch run for branch-2 (using HDFS-10930-branch-2.002.patch) 
came back clean.

| Vote |  Subsystem |  Runtime   | Comment
|  +1  |   @author  |  0m 0s | The patch does not contain any @author
|  ||| tags.
|  +1  |test4tests  |  0m 0s | The patch appears to include 6 new or
|  ||| modified test files.
|  +1  |mvninstall  |  3m 25s| branch-2 passed
|  +1  |   compile  |  0m 27s| branch-2 passed
|  +1  |checkstyle  |  0m 24s| branch-2 passed
|  +1  |   mvnsite  |  0m 31s| branch-2 passed
|  +1  |mvneclipse  |  0m 11s| branch-2 passed
|  +1  |  findbugs  |  0m 58s| branch-2 passed
|  +1  |   javadoc  |  0m 38s| branch-2 passed
|  +1  |mvninstall  |  0m 26s| the patch passed
|  +1  |   compile  |  0m 23s| the patch passed
|  +1  | javac  |  0m 23s| the patch passed
|  -1  |checkstyle  |  0m 20s| hadoop-hdfs-project/hadoop-hdfs: The
|  ||| patch generated 3 new + 530 unchanged -
|  ||| 13 fixed = 533 total (was 543)
|  +1  |   mvnsite  |  0m 28s| the patch passed
|  +1  |mvneclipse  |  0m 8s | the patch passed
|  +1  |whitespace  |  0m 0s | The patch has no whitespace issues.
|  +1  |  findbugs  |  1m 2s | the patch passed
|  +1  |   javadoc  |  0m 35s| the patch passed
|  +1  |  unit  |  201m 40s  | hadoop-hdfs in the patch passed.
|  +1  |asflicense  |  0m 12s| The patch does not generate ASF License
|  ||| warnings.
|  ||  212m 42s  |

The checkstyle issues are existing method length warnings.

Will push this to branch-2 along with HDFS-11253 to address the earlier comment.


was (Author: arpitagarwal):
The local test-patch run for branch-2 came back clean.

| Vote |  Subsystem |  Runtime   | Comment
|  +1  |   @author  |  0m 0s | The patch does not contain any @author
|  ||| tags.
|  +1  |test4tests  |  0m 0s | The patch appears to include 6 new or
|  ||| modified test files.
|  +1  |mvninstall  |  3m 25s| branch-2 passed
|  +1  |   compile  |  0m 27s| branch-2 passed
|  +1  |checkstyle  |  0m 24s| branch-2 passed
|  +1  |   mvnsite  |  0m 31s| branch-2 passed
|  +1  |mvneclipse  |  0m 11s| branch-2 passed
|  +1  |  findbugs  |  0m 58s| branch-2 passed
|  +1  |   javadoc  |  0m 38s| branch-2 passed
|  +1  |mvninstall  |  0m 26s| the patch passed
|  +1  |   compile  |  0m 23s| the patch passed
|  +1  | javac  |  0m 23s| the patch passed
|  -1  |checkstyle  |  0m 20s| hadoop-hdfs-project/hadoop-hdfs: The
|  ||| patch generated 3 new + 530 unchanged -
|  ||| 13 fixed = 533 total (was 543)
|  +1  |   mvnsite  |  0m 28s| the patch passed
|  +1  |mvneclipse  |  0m 8s | the patch passed
|  +1  |whitespace  |  0m 0s | The patch has no whitespace issues.
|  +1  |  findbugs  |  1m 2s | the patch passed
|  +1  |   javadoc  |  0m 35s| the patch passed
|  +1  |  unit  |  201m 40s  | hadoop-hdfs in the patch passed.
|  +1  |asflicense  |  0m 12s| The patch does not generate ASF License
|  ||| warnings.
|  ||  212m 42s  |

The checkstyle issues are existing method length warnings.

Will push this to branch-2 along with HDFS-11253 to address the earlier comment.

> Refactor: Wrap Datanode IO related operations
> -
>
> Key: HDFS-10930
> URL: https://issues.apache.org/jira/browse/HDFS-10930
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-10930-branch-2.00.patch, 
> HDFS-10930-branch-2.001.patch, HDFS-10930-branch-2.002.patch, 
> HDFS-10930.01.patch, HDFS-10930.02.patch, HDFS-10930.03.patch, 
> HDFS-10930.04.patch, HDFS-10930.05.patch, HDFS-10930.06.patch, 
> HDFS-10930.07.patch, HDFS-10930.08.patch, HDFS-10930.09.patch, 
> HDFS-10930.10.patch, HDFS-10930.11.patch, HDFS-10930.barnch-2.00.patch
>
>
> Datanode IO (Disk/Network) related operations and instrumentations are 
> currently spilled in many classes such as DataNode.java, BlockReceiver.java, 
> BlockSender.java, FsDatasetImpl.java, 

[jira] [Updated] (HDFS-8510) Provide different timeout settings for hdfs dfsadmin -getDatanodeInfo.

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8510:
-
Target Version/s:   (was: 2.8.0)

> Provide different timeout settings for hdfs dfsadmin -getDatanodeInfo.
> --
>
> Key: HDFS-8510
> URL: https://issues.apache.org/jira/browse/HDFS-8510
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>
> During a rolling upgrade, an administrator runs {{hdfs dfsadmin 
> -getDatanodeInfo}} to check if a DataNode has stopped.  Currently, this 
> operation is subject to the RPC connection retries defined in 
> {{ipc.client.connect.max.retries}} and {{ipc.client.connect.retry.interval}}. 
>  This issue proposes adding separate configuration properties to control the 
> retries for this operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8555) Random read support on HDFS files using Indexed Namenode feature

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-8555:
-
Target Version/s:   (was: 2.8.0)

> Random read support on HDFS files using Indexed Namenode feature
> 
>
> Key: HDFS-8555
> URL: https://issues.apache.org/jira/browse/HDFS-8555
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, namenode
>Affects Versions: 2.5.2
> Environment: Linux
>Reporter: amit sehgal
>Assignee: Afzal Saan
>   Original Estimate: 720h
>  Remaining Estimate: 720h
>
> Currently the Namenode does not provide support for random reads. With so 
> many tools built on top of HDFS solving the use case of exploratory BI and 
> providing SQL over HDFS, the need of the hour is to reduce the number of 
> blocks read for a random read.
> E.g. extracting, say, 10 lines worth of information out of a 100GB file 
> should read only those blocks which can potentially contain those 10 lines.
> This can be achieved by adding a per-block tagging feature in the Namenode: 
> each block written to HDFS will have tags associated with it, stored in an 
> index.
> The Namenode, when accessed via the indexing feature, will use this index 
> natively to reduce the number of blocks returned to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6151) HDFS should refuse to cache blocks >=2GB

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6151:
-
Target Version/s:   (was: 2.8.0)

> HDFS should refuse to cache blocks >=2GB
> 
>
> Key: HDFS-6151
> URL: https://issues.apache.org/jira/browse/HDFS-6151
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching, datanode
>Affects Versions: 2.4.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>
> If you try to cache a block that's >=2GB, the DN will silently fail to cache 
> it since {{MappedByteBuffer}} uses a signed int to represent size. Blocks 
> this large are rare, but we should log or alert the user somehow.
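A sketch of the guard this JIRA asks for (illustrative only; the real check would go into the DataNode caching path): {{MappedByteBuffer}} sizes are signed 32-bit ints, so anything at or above 2 GB has to be rejected up front with a log message instead of failing silently.

{code:java}
import java.util.logging.Logger;

// Illustrative guard: refuse to mmap-cache a replica whose length does not fit
// in a signed 32-bit int (the limit imposed by MappedByteBuffer).
public class CacheSizeGuardSketch {
  private static final Logger LOG = Logger.getLogger(CacheSizeGuardSketch.class.getName());
  private static final long MAX_CACHEABLE_LENGTH = Integer.MAX_VALUE; // just under 2 GB

  static boolean canCache(long blockId, long blockLengthBytes) {
    if (blockLengthBytes > MAX_CACHEABLE_LENGTH) {
      LOG.warning("Refusing to cache block " + blockId + ": length "
          + blockLengthBytes + " exceeds the 2 GB mmap limit");
      return false;
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(canCache(1L, 128L * 1024 * 1024));      // true
    System.out.println(canCache(2L, 3L * 1024 * 1024 * 1024)); // false, with a warning
  }
}
{code}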



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6332) Support protobuf-based directory listing output suitable for OIV

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6332:
-
Target Version/s:   (was: 2.8.0)

> Support protobuf-based directory listing output suitable for OIV
> 
>
> Key: HDFS-6332
> URL: https://issues.apache.org/jira/browse/HDFS-6332
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.4.0
>Reporter: Andrew Wang
>
> The initial listing output may be in JSON, but protobuf would be a better 
> serialization format down the road.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-6292) Display HDFS per user and per group usage on the webUI

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-6292:
-
Target Version/s:   (was: 2.8.0)

> Display HDFS per user and per group usage on the webUI
> --
>
> Key: HDFS-6292
> URL: https://issues.apache.org/jira/browse/HDFS-6292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.4.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-6292.01.patch, HDFS-6292.patch, HDFS-6292.png
>
>
> It would be nice to show HDFS usage per user and per group on a web ui.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


