[jira] [Updated] (HDFS-11984) Ozone: Ensures listKey lists all required key fields

2017-07-31 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11984:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

> Ozone: Ensures listKey lists all required key fields
> 
>
> Key: HDFS-11984
> URL: https://issues.apache.org/jira/browse/HDFS-11984
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Yiqun Lin
> Fix For: HDFS-7240
>
> Attachments: HDFS-11984-HDFS-7240.001.patch, 
> HDFS-11984-HDFS-7240.002.patch, HDFS-11984-HDFS-7240.003.patch, 
> HDFS-11984-HDFS-7240.004.patch
>
>
> HDFS-11782 implements the listKey operation, but it lists only the basic key 
> fields; we need to make sure it returns all required fields:
> # version
> # md5hash
> # createdOn
> # size
> # keyName
> This task depends on the work in HDFS-11886. See more discussion [here | 
> https://issues.apache.org/jira/browse/HDFS-11782?focusedCommentId=16045562&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16045562].
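For illustration, the five required fields could be modeled roughly as below. This is a hypothetical sketch following the field names in the issue description, not the actual Ozone KeyInfo class:

```java
// Hypothetical sketch of the five fields listKey should return per key.
// Field names follow the issue description; this is NOT the real Ozone class.
public class KeyFieldsSketch {
    static final class KeyInfo {
        final long version;      // key version
        final String md5hash;    // content checksum
        final String createdOn;  // creation timestamp
        final long size;         // data size in bytes
        final String keyName;    // key name within the bucket

        KeyInfo(long version, String md5hash, String createdOn,
                long size, String keyName) {
            this.version = version;
            this.md5hash = md5hash;
            this.createdOn = createdOn;
            this.size = size;
            this.keyName = keyName;
        }
    }

    public static void main(String[] args) {
        // A complete listKey entry must populate all five fields.
        KeyInfo k = new KeyInfo(0, "d41d8cd98f00b204e9800998ecf8427e",
                "2017-07-31T00:00:00Z", 0, "sample-key");
        System.out.println(k.keyName + " " + k.size + " " + k.version);
    }
}
```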



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11984) Ozone: Ensures listKey lists all required key fields

2017-07-31 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108405#comment-16108405
 ] 

Weiwei Yang commented on HDFS-11984:


Looks good; the failed tests seem unrelated. I am going to commit this 
shortly.

> Ozone: Ensures listKey lists all required key fields
> 
>
> Key: HDFS-11984
> URL: https://issues.apache.org/jira/browse/HDFS-11984
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Yiqun Lin
> Attachments: HDFS-11984-HDFS-7240.001.patch, 
> HDFS-11984-HDFS-7240.002.patch, HDFS-11984-HDFS-7240.003.patch, 
> HDFS-11984-HDFS-7240.004.patch
>
>
> HDFS-11782 implements the listKey operation, but it lists only the basic key 
> fields; we need to make sure it returns all required fields:
> # version
> # md5hash
> # createdOn
> # size
> # keyName
> This task depends on the work in HDFS-11886. See more discussion [here | 
> https://issues.apache.org/jira/browse/HDFS-11782?focusedCommentId=16045562&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16045562].






[jira] [Commented] (HDFS-9213) Minicluster with Kerberos generates some stacks when checking the ports

2017-07-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108402#comment-16108402
 ] 

Hadoop QA commented on HDFS-9213:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
55s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 7 unchanged - 0 fixed = 10 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-9213 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12766796/hdfs-9213.v1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 24833789ce4b 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a4aa1cb |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20512/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20512/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Comment Edited] (HDFS-12163) Ozone: MiniOzoneCluster uses 400+ threads

2017-07-31 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108392#comment-16108392
 ] 

Weiwei Yang edited comment on HDFS-12163 at 8/1/17 5:27 AM:


Hi [~anu]

Here is the result after setting the KSM handler count to 20:

| (distributed,1) | init | 6 |
| (distributed,1) | MiniOzoneCluster | 222 |
| (distributed,1) | shutdown | 80 |
| (distributed,1) | sleep | 13 |

I will override this value to 20 in {{MiniOzoneCluster}} since 200 is not 
necessary for testing. 

bq. Xiaoyu Yao has fixed a bunch of leak issues, they were due to us not 
closing the miniOzoneCluster.

Yes, it looks like some of the leaks are already fixed; the number of open 
threads has decreased to 13 now. I will check whether there are any other 
potential leaks and fix them if I find any.


was (Author: cheersyang):
Hi [~anu]

Here is the result after set 20 handler for KSM,

| (distributed,1) | init | 6 |
| (distributed,1) | MiniOzoneCluster | 222 |
| (distributed,1) | shutdown | 80 |
| (distributed,1) | sleep | 13 |

I will overwrite this value to 20 in {{MiniOzoneCluster}} since 200 is not 
necessary for testing. 

> Ozone: MiniOzoneCluster uses 400+ threads
> -
>
> Key: HDFS-12163
> URL: https://issues.apache.org/jira/browse/HDFS-12163
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Weiwei Yang
> Attachments: most_used_threads.png, 
> TestOzoneThreadCount20170719.patch, thread_dump.png
>
>
> Checked the number of active threads used in MiniOzoneCluster with various 
> settings:
> - Local handlers
> - Distributed handlers
> - Ratis-Netty
> - Ratis-gRPC
> The results are similar for all the settings.  It uses 400+ threads for a 
> 1-datanode MiniOzoneCluster.
> Moreover, there is a thread leak -- a number of the threads do not shut down 
> after the test is finished.  Therefore, when tests run consecutively, the 
> later tests use more threads.
> Will post the details in comments.






[jira] [Commented] (HDFS-12163) Ozone: MiniOzoneCluster uses 400+ threads

2017-07-31 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108392#comment-16108392
 ] 

Weiwei Yang commented on HDFS-12163:


Hi [~anu]

Here is the result after setting the KSM handler count to 20:

| (distributed,1) | init | 6 |
| (distributed,1) | MiniOzoneCluster | 222 |
| (distributed,1) | shutdown | 80 |
| (distributed,1) | sleep | 13 |

I will override this value to 20 in {{MiniOzoneCluster}} since 200 is not 
necessary for testing. 
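The override described here could look roughly like the sketch below. The key name {{dfs.storage.service.handler.count}} is taken from the thread dump later in this thread; the surrounding config API is a plain-Java stand-in, not Hadoop's Configuration class:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of capping the KSM RPC handler count for tests.
// "dfs.storage.service.handler.count" is the key seen in this thread's
// discussion; the map-based config API here is a stand-in for illustration.
public class HandlerCountSketch {
    static final String HANDLER_COUNT_KEY = "dfs.storage.service.handler.count";
    static final int TEST_HANDLER_COUNT = 20;   // 200 is unnecessary for tests

    static Map<String, Integer> buildTestConf(Map<String, Integer> conf) {
        Map<String, Integer> copy = new HashMap<>(conf);
        // Overwrite whatever the default is with the smaller test value.
        copy.put(HANDLER_COUNT_KEY, TEST_HANDLER_COUNT);
        return copy;
    }

    public static void main(String[] args) {
        Map<String, Integer> defaults = new HashMap<>();
        defaults.put(HANDLER_COUNT_KEY, 200);
        Map<String, Integer> testConf = buildTestConf(defaults);
        System.out.println(testConf.get(HANDLER_COUNT_KEY)); // 20
    }
}
```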

> Ozone: MiniOzoneCluster uses 400+ threads
> -
>
> Key: HDFS-12163
> URL: https://issues.apache.org/jira/browse/HDFS-12163
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Weiwei Yang
> Attachments: most_used_threads.png, 
> TestOzoneThreadCount20170719.patch, thread_dump.png
>
>
> Checked the number of active threads used in MiniOzoneCluster with various 
> settings:
> - Local handlers
> - Distributed handlers
> - Ratis-Netty
> - Ratis-gRPC
> The results are similar for all the settings.  It uses 400+ threads for a 
> 1-datanode MiniOzoneCluster.
> Moreover, there is a thread leak -- a number of the threads do not shut down 
> after the test is finished.  Therefore, when tests run consecutively, the 
> later tests use more threads.
> Will post the details in comments.






[jira] [Commented] (HDFS-12151) Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes

2017-07-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108388#comment-16108388
 ] 

Hadoop QA commented on HDFS-12151:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12151 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879765/HDFS-12151.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c587a50d23d6 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1a78c0f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20511/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20511/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20511/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20511/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
> 

[jira] [Commented] (HDFS-12163) Ozone: MiniOzoneCluster uses 400+ threads

2017-07-31 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108378#comment-16108378
 ] 

Anu Engineer commented on HDFS-12163:
-

That is a nice find. Can you please post the thread count with something like a 
KSM handler count of 20? Also, we need to file a JIRA to track that issue. We 
should not launch that many threads unless we need them, but I am guessing this 
is part of the RPC layer, so we might not want to change it now.


> Ozone: MiniOzoneCluster uses 400+ threads
> -
>
> Key: HDFS-12163
> URL: https://issues.apache.org/jira/browse/HDFS-12163
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Weiwei Yang
> Attachments: most_used_threads.png, 
> TestOzoneThreadCount20170719.patch, thread_dump.png
>
>
> Checked the number of active threads used in MiniOzoneCluster with various 
> settings:
> - Local handlers
> - Distributed handlers
> - Ratis-Netty
> - Ratis-gRPC
> The results are similar for all the settings.  It uses 400+ threads for a 
> 1-datanode MiniOzoneCluster.
> Moreover, there is a thread leak -- a number of the threads do not shut down 
> after the test is finished.  Therefore, when tests run consecutively, the 
> later tests use more threads.
> Will post the details in comments.






[jira] [Commented] (HDFS-12163) Ozone: MiniOzoneCluster uses 400+ threads

2017-07-31 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108379#comment-16108379
 ] 

Anu Engineer commented on HDFS-12163:
-

bq.  I will check the leak issue a bit later.
[~xyao] has fixed a bunch of leak issues; they were due to us not closing the 
MiniOzoneCluster.
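The leak pattern described here, a test never closing its cluster, can be sketched with a stand-in class (hypothetical names; the real MiniOzoneCluster API may differ):

```java
// Sketch of the leak and the fix: resources that spawn threads must be
// closed, ideally via try-with-resources. MiniCluster is a stand-in for
// MiniOzoneCluster, not the real class.
public class ClusterCloseSketch {
    static int liveWorkers = 0;

    static final class MiniCluster implements AutoCloseable {
        MiniCluster() { liveWorkers++; }                  // pretend a worker thread starts
        @Override public void close() { liveWorkers--; }  // and stops on close()
    }

    public static void main(String[] args) {
        // Leaky style: the cluster is never closed, so its workers linger.
        new MiniCluster();

        // Fixed style: try-with-resources guarantees close() even on failure.
        try (MiniCluster cluster = new MiniCluster()) {
            // run the test against the cluster here
        }
        System.out.println(liveWorkers); // 1: only the unclosed cluster remains
    }
}
```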

> Ozone: MiniOzoneCluster uses 400+ threads
> -
>
> Key: HDFS-12163
> URL: https://issues.apache.org/jira/browse/HDFS-12163
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Weiwei Yang
> Attachments: most_used_threads.png, 
> TestOzoneThreadCount20170719.patch, thread_dump.png
>
>
> Checked the number of active threads used in MiniOzoneCluster with various 
> settings:
> - Local handlers
> - Distributed handlers
> - Ratis-Netty
> - Ratis-gRPC
> The results are similar for all the settings.  It uses 400+ threads for a 
> 1-datanode MiniOzoneCluster.
> Moreover, there is a thread leak -- a number of the threads do not shut down 
> after the test is finished.  Therefore, when tests run consecutively, the 
> later tests use more threads.
> Will post the details in comments.






[jira] [Commented] (HDFS-12157) Do fsyncDirectory(..) outside of FSDataset lock

2017-07-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108367#comment-16108367
 ] 

Hadoop QA commented on HDFS-12157:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.7 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 8s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 125 unchanged - 1 fixed = 125 total (was 126) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 60 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | hadoop.hdfs.web.TestHttpsFileSystem |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
| JDK v1.7.0_131 Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.web.TestHttpsFileSystem |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Commented] (HDFS-11984) Ozone: Ensures listKey lists all required key fields

2017-07-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108366#comment-16108366
 ] 

Hadoop QA commented on HDFS-11984:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
52s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.TestDatanodeStateMachine |
|   | hadoop.ozone.ksm.TestKSMMetrcis |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.ozone.web.client.TestKeys |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
|   | org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11984 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879764/HDFS-11984-HDFS-7240.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c9e20e3a4fef 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / c49297b |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20510/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20510/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20510/console |
| Powered by 

[jira] [Updated] (HDFS-12163) Ozone: MiniOzoneCluster uses 400+ threads

2017-07-31 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12163:
---
Attachment: most_used_threads.png

> Ozone: MiniOzoneCluster uses 400+ threads
> -
>
> Key: HDFS-12163
> URL: https://issues.apache.org/jira/browse/HDFS-12163
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Weiwei Yang
> Attachments: most_used_threads.png, 
> TestOzoneThreadCount20170719.patch, thread_dump.png
>
>
> Checked the number of active threads used in MiniOzoneCluster with various 
> settings:
> - Local handlers
> - Distributed handlers
> - Ratis-Netty
> - Ratis-gRPC
> The results are similar for all the settings.  It uses 400+ threads for a 
> 1-datanode MiniOzoneCluster.
> Moreover, there is a thread leak -- a number of the threads do not shut down 
> after the test is finished.  Therefore, when tests run consecutively, the 
> later tests use more threads.
> Will post the details in comments.






[jira] [Updated] (HDFS-12163) Ozone: MiniOzoneCluster uses 400+ threads

2017-07-31 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12163:
---
Attachment: thread_dump.png

> Ozone: MiniOzoneCluster uses 400+ threads
> -
>
> Key: HDFS-12163
> URL: https://issues.apache.org/jira/browse/HDFS-12163
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Weiwei Yang
> Attachments: TestOzoneThreadCount20170719.patch, thread_dump.png
>
>
> Checked the number of active threads used in MiniOzoneCluster with various 
> settings:
> - Local handlers
> - Distributed handlers
> - Ratis-Netty
> - Ratis-gRPC
> The results are similar for all the settings.  It uses 400+ threads for a 
> 1-datanode MiniOzoneCluster.
> Moreover, there is a thread leak -- a number of the threads do not shut down 
> after the test is finished.  Therefore, when tests run consecutively, the 
> later tests use more threads.
> Will post the details in comments.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12163) Ozone: MiniOzoneCluster uses 400+ threads

2017-07-31 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108359#comment-16108359
 ] 

Weiwei Yang commented on HDFS-12163:


Running 1 DN in a mini ozone cluster, there were *402* threads once the cluster 
was up. Checking the thread dump, *280* of them are RPC handler threads, with 
the following trace:

{code}
IPC Server handler 0 on 51641@2330 - priority:5 - threadId:0x45 - nativeId:NA - 
state:WAITING
stackTrace:
java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Unsafe.java:-1)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:287)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2661)
{code}

This is because:
 {{dfs.storage.service.handler.count}} = {{10}}
 {{ozone.ksm.handler.count.key}} = {{200}}

We are setting a pretty large value for KSM handlers, which is unnecessary for 
a mini cluster; we can override this config in the mini cluster to reduce the 
number of threads. I will check the leak issue a bit later.
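As a sketch of that override, here is what the lower handler count could look like in a test configuration. The property name is taken verbatim from the comment above and may differ from the actual constant in the code; the value {{20}} is illustrative, not a tested recommendation. In a unit test one would typically set this programmatically on the cluster's Configuration before startup.

```xml
<!-- Illustrative only: shrink the KSM handler pool for MiniOzoneCluster tests.
     The key name mirrors the comment above and may not match the real code. -->
<property>
  <name>ozone.ksm.handler.count.key</name>
  <value>20</value>
  <description>Handler thread count for KSM. The default of 200 is
    unnecessarily large for a single-datanode test cluster.</description>
</property>
```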

> Ozone: MiniOzoneCluster uses 400+ threads
> -
>
> Key: HDFS-12163
> URL: https://issues.apache.org/jira/browse/HDFS-12163
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Weiwei Yang
> Attachments: TestOzoneThreadCount20170719.patch
>
>
> Checked the number of active threads used in MiniOzoneCluster with various 
> settings:
> - Local handlers
> - Distributed handlers
> - Ratis-Netty
> - Ratis-gRPC
> The results are similar for all the settings.  It uses 400+ threads for a 
> 1-datanode MiniOzoneCluster.
> Moreover, there is a thread leak -- a number of the threads do not shut down 
> after the test is finished.  Therefore, when tests run consecutively, the 
> later tests use more threads.
> Will post the details in comments.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12173) MiniDFSCluster cannot reliably use NameNode#stop

2017-07-31 Thread Ajay Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108346#comment-16108346
 ] 

Ajay Yadav commented on HDFS-12173:
---

[~daryn], could you please share more information on this? Since NameNode has 
only one stop method, does it make sense to update the stop method itself to 
call HAState#setState(context, HAServiceState.STOPPING)?
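The locking difference at issue can be sketched with plain JDK primitives. {{StateDemo}}, the state strings, and the method names below are hypothetical stand-ins for the HAState/namesystem interplay, not the real Hadoop code:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

class StateDemo {
    private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();
    private String haState = "ACTIVE";

    // Analogous to HAState#setState(context, state): the transition happens
    // while holding the write lock, so queued RPCs observe a consistent state.
    void setStateLocked(String next) {
        fsLock.writeLock().lock();
        try {
            haState = next;
        } finally {
            fsLock.writeLock().unlock();
        }
    }

    // Analogous to calling HAState#exitState(context) directly: no lock is
    // taken, so a concurrent RPC can still see the old state mid-shutdown.
    void exitStateUnlocked() {
        haState = "STOPPED";
    }

    String getState() {
        fsLock.readLock().lock();
        try {
            return haState;
        } finally {
            fsLock.readLock().unlock();
        }
    }
}
```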

> MiniDFSCluster cannot reliably use NameNode#stop
> 
>
> Key: HDFS-12173
> URL: https://issues.apache.org/jira/browse/HDFS-12173
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>
> Sporadic test failures occur because {{NameNode#stop}} used by the mini 
> cluster does not properly manage the HA context's state.  It directly calls 
> {{HAState#exitState(context)}} instead of 
> {{HAState#setState(context,state)}}.  The latter will properly lock the 
> namesystem and update the ha state while locked, while the former does not.  
> The result is that while the cluster is stopping, the lock is released and 
> any queued rpc calls think the NN is still active and are processed while the 
> NN is in an unstable half-stopped state.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9213) Minicluster with Kerberos generates some stacks when checking the ports

2017-07-31 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108321#comment-16108321
 ] 

mingleizhang commented on HDFS-9213:


We should keep working on this jira. In the Flink project, we cannot run some 
tests until this issue is fixed.

> Minicluster with Kerberos generates some stacks when checking the ports
> ---
>
> Key: HDFS-9213
> URL: https://issues.apache.org/jira/browse/HDFS-9213
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0-alpha1
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Attachments: hdfs-9213.v1.patch, hdfs-9213.v1.patch
>
>
> When using the minicluster with kerberos the various checks in 
> SecureDataNodeStarter fail because the ports are not fixed.
> Stacks like this one:
> {quote}
> java.lang.RuntimeException: Unable to bind on specified streaming port in 
> secure context. Needed 0, got 49670
>   at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:108)
> {quote}
> There is already a setting to deactivate this type of check for testing; it 
> could be used here as well
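The check trips because binding port 0 always yields an OS-assigned ephemeral port, so the bound port can never equal the configured value of 0. A minimal JDK sketch of the mismatch (the class and method names are illustrative, not from SecureDataNodeStarter):

```java
import java.io.IOException;
import java.net.ServerSocket;

class PortCheckSketch {
    // Binding port 0 asks the OS for an ephemeral port, so the actual bound
    // port never equals the configured 0 -- which is exactly what produces
    // the "Needed 0, got ..." failure in a secure minicluster.
    static int bindEphemeral() throws IOException {
        try (ServerSocket ss = new ServerSocket(0)) {
            return ss.getLocalPort();
        }
    }
}
```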



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9213) Minicluster with Kerberos generates some stacks when checking the ports

2017-07-31 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108322#comment-16108322
 ] 

mingleizhang commented on HDFS-9213:


[~linyiqun] Could you please take a look at this PR? Thanks. :)

> Minicluster with Kerberos generates some stacks when checking the ports
> ---
>
> Key: HDFS-9213
> URL: https://issues.apache.org/jira/browse/HDFS-9213
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0-alpha1
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>Priority: Minor
> Attachments: hdfs-9213.v1.patch, hdfs-9213.v1.patch
>
>
> When using the minicluster with kerberos the various checks in 
> SecureDataNodeStarter fail because the ports are not fixed.
> Stacks like this one:
> {quote}
> java.lang.RuntimeException: Unable to bind on specified streaming port in 
> secure context. Needed 0, got 49670
>   at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:108)
> {quote}
> There is already a setting to deactivate this type of check for testing; it 
> could be used here as well



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11826) Federation Namenode Heartbeat

2017-07-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108319#comment-16108319
 ] 

Hadoop QA commented on HDFS-11826:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
24s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-10467 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
47s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-10467 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 654 unchanged - 0 fixed = 657 total (was 654) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11826 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879755/HDFS-11826-HDFS-10467-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux fd1c99af9284 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / fae1d1e |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20508/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20508/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDFS-12151) Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes

2017-07-31 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108308#comment-16108308
 ] 

Sean Mackrory commented on HDFS-12151:
--

Ah sorry about that - I seem to be blind to the yellow checkstyle warnings...

I did confirm that the test failures are flaky. They succeed locally and 
timeouts also occurred in the same class (often the same function) in several 
recent runs of the Pre-Commit jobs. Of course I just closed the editor where I 
noted all the URLs to said jobs, but they're there and they're recent, I 
promise :)

 Thanks for the review! Attaching a final patch with the checkstyle issues 
addressed. Reran the new test and the 2 that failed last time locally, and had 
a clean Yetus run.

> Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
> 
>
> Key: HDFS-12151
> URL: https://issues.apache.org/jira/browse/HDFS-12151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha4
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HDFS-12151.001.patch, HDFS-12151.002.patch, 
> HDFS-12151.003.patch, HDFS-12151.004.patch, HDFS-12151.005.patch, 
> HDFS-12151.006.patch, HDFS-12151.007.patch
>
>
> Trying to write to a Hadoop 3 DataNode with a Hadoop 2 client currently 
> fails. On the client side it looks like this:
> {code}
> 17/07/14 13:31:58 INFO hdfs.DFSClient: Exception in 
> createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449){code}
> But on the DataNode side there's an ArrayIndexOutOfBoundsException because 
> there aren't any targetStorageIds:
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 0
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:815)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
> at java.lang.Thread.run(Thread.java:745){code}
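A hedged sketch of the kind of guard the DataNode side needs before indexing an optional repeated field. {{firstOrNull}} is an illustrative helper, not the actual HDFS-12151 fix:

```java
class WriteBlockGuard {
    // An older (Hadoop 2) client sends no targetStorageIds, so the repeated
    // protobuf field arrives empty; index 0 must be guarded, not assumed.
    static String firstOrNull(String[] targetStorageIds) {
        return (targetStorageIds != null && targetStorageIds.length > 0)
                ? targetStorageIds[0] : null;
    }
}
```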



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12151) Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes

2017-07-31 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HDFS-12151:
-
Attachment: HDFS-12151.007.patch

> Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
> 
>
> Key: HDFS-12151
> URL: https://issues.apache.org/jira/browse/HDFS-12151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha4
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HDFS-12151.001.patch, HDFS-12151.002.patch, 
> HDFS-12151.003.patch, HDFS-12151.004.patch, HDFS-12151.005.patch, 
> HDFS-12151.006.patch, HDFS-12151.007.patch
>
>
> Trying to write to a Hadoop 3 DataNode with a Hadoop 2 client currently 
> fails. On the client side it looks like this:
> {code}
> 17/07/14 13:31:58 INFO hdfs.DFSClient: Exception in 
> createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449){code}
> But on the DataNode side there's an ArrayIndexOutOfBoundsException because 
> there aren't any targetStorageIds:
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 0
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:815)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
> at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11984) Ozone: Ensures listKey lists all required key fields

2017-07-31 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11984:
-
Attachment: HDFS-11984-HDFS-7240.004.patch

Rebased the code and attached the new patch. After this, I'd like to add 
creation time to the bucket/volume info as well this week; it will be a useful 
field to show in the response.

> Ozone: Ensures listKey lists all required key fields
> 
>
> Key: HDFS-11984
> URL: https://issues.apache.org/jira/browse/HDFS-11984
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Yiqun Lin
> Attachments: HDFS-11984-HDFS-7240.001.patch, 
> HDFS-11984-HDFS-7240.002.patch, HDFS-11984-HDFS-7240.003.patch, 
> HDFS-11984-HDFS-7240.004.patch
>
>
> HDFS-11782 implements the listKey operation, which only lists the basic key 
> fields; we need to make sure it returns all required fields:
> # version
> # md5hash
> # createdOn
> # size
> # keyName
> This task depends on the work of HDFS-11886. See more discussion [here | 
> https://issues.apache.org/jira/browse/HDFS-11782?focusedCommentId=16045562=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16045562].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12157) Do fsyncDirectory(..) outside of FSDataset lock

2017-07-31 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-12157:
-
Attachment: HDFS-12157-branch-2.7-01.patch

Attaching the branch-2.7 patch again for Jenkins.

> Do fsyncDirectory(..) outside of FSDataset lock
> ---
>
> Key: HDFS-12157
> URL: https://issues.apache.org/jira/browse/HDFS-12157
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Critical
> Attachments: HDFS-12157-01.patch, HDFS-12157-branch-2-01.patch, 
> HDFS-12157-branch-2.7-01.patch, HDFS-12157-branch-2.7-01.patch
>
>
> HDFS-5042 introduced fsyncDirectory(..) to save blocks from power failure. 
> Do it outside of the FSDataset lock to avoid overall performance degradation 
> if the disk takes more time to sync.
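The pattern can be sketched with plain JDK locks: capture what is needed under the lock, then do the slow I/O after releasing it. The class and both helper methods below are hypothetical stand-ins for the dataset code paths, not the actual patch:

```java
import java.util.concurrent.locks.ReentrantLock;

class FsyncOutsideLockSketch {
    private final ReentrantLock datasetLock = new ReentrantLock();

    // Hypothetical stand-ins for finalizing a block and syncing its directory.
    private String finalizeBlockUnderLock() { return "/data/current/finalized"; }
    private void fsyncDirectory(String dir) { /* slow disk I/O */ }

    String finalizeAndSync() {
        String dir;
        datasetLock.lock();
        try {
            // Only the in-memory state change happens under the lock.
            dir = finalizeBlockUnderLock();
        } finally {
            datasetLock.unlock();
        }
        // The potentially slow directory fsync runs after the lock is
        // released, so other dataset operations are not stalled behind
        // disk latency.
        fsyncDirectory(dir);
        return dir;
    }
}
```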



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12217) HDFS snapshots doesn't capture all open files when one of the open files is deleted

2017-07-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108296#comment-16108296
 ] 

Hadoop QA commented on HDFS-12217:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12217 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879752/HDFS-12217.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 425425fe3a9a 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ea56812 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20507/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20507/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20507/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20507/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> HDFS snapshots doesn't capture all open files when one of the open files is 
> deleted
> 

[jira] [Commented] (HDFS-11082) Erasure Coding : Provide replicated EC policy to just replicating the files

2017-07-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108275#comment-16108275
 ] 

Andrew Wang commented on HDFS-11082:


Hi [~Sammi], any progress on this one?

> Erasure Coding : Provide replicated EC policy to just replicating the files
> ---
>
> Key: HDFS-11082
> URL: https://issues.apache.org/jira/browse/HDFS-11082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
>
> The idea of this jira is to provide a new {{replicated EC policy}} so that we 
> can override the EC policy on a parent directory and go back to just 
> replicating the files based on replication factors.
> Thanks [~andrew.wang] for the 
> [discussions|https://issues.apache.org/jira/browse/HDFS-11072?focusedCommentId=15620743=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15620743].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11826) Federation Namenode Heartbeat

2017-07-31 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-11826:
---
Attachment: HDFS-11826-HDFS-10467-001.patch

* Fixing unit tests.
* Fixing check styles.
* Adding a message when heartbeating is not properly configured.

> Federation Namenode Heartbeat
> -
>
> Key: HDFS-11826
> URL: https://issues.apache.org/jira/browse/HDFS-11826
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Fix For: HDFS-10467
>
> Attachments: HDFS-11826-HDFS-10467-000.patch, 
> HDFS-11826-HDFS-10467-001.patch
>
>
> Add a service to the Router to check the state of a Namenode and report it 
> into the State Store.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11580) Ozone: Support asynchronus client API for SCM and containers

2017-07-31 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108229#comment-16108229
 ] 

Anu Engineer commented on HDFS-11580:
-

[~vagarychen], [~linyiqun], [~nanda], [~msingh] Thanks for the patches and code 
reviews. I think one of my old changes is creating a roadblock in this work. 
Allow me to explain.

# When I first wrote this code, I wanted to offer a synchronous interface to 
make it easy to write clients, so I painfully wrapped the async interface 
offered by Netty into a sync one -- probably under the mistaken impression 
that it would make programmers' lives easier.
# When we came to do the async work, what we should have done in the first 
place is make Netty's underlying async interface first-class; it is currently 
hidden under the {{sendCommand}} function.
# Instead, since the code already offers a synchronous interface, this patch 
creates a thread pool and calls into the "sync" interface offered by the 
XceiverClient. I propose that we *don't* use a thread pool to turn "sync" into 
"async"; instead, we expose the async interface as-is.
# Since I wrote the original code, I propose to post a patch demonstrating the 
idea so that we can all see what I am proposing. Please let me know if you 
have any concerns or questions.
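The proposal, exposing the async call directly instead of re-wrapping a blocking facade in a thread pool, can be sketched with {{CompletableFuture}}. {{sendCommandAsync}} below is a hypothetical stand-in for the Netty-backed call, not the real XceiverClient API:

```java
import java.util.concurrent.CompletableFuture;

class AsyncClientSketch {
    // Hypothetical stand-in for the Netty-backed async send that is
    // currently hidden under the synchronous sendCommand wrapper.
    static CompletableFuture<String> sendCommandAsync(String request) {
        return CompletableFuture.supplyAsync(() -> "reply:" + request);
    }

    // Callers compose on the returned future instead of parking a pooled
    // thread per outstanding request.
    static String callAndJoin(String request) {
        return sendCommandAsync(request)
                .thenApply(String::toUpperCase)
                .join();
    }
}
```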


> Ozone: Support asynchronus client API for SCM and containers
> 
>
> Key: HDFS-11580
> URL: https://issues.apache.org/jira/browse/HDFS-11580
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Yiqun Lin
> Attachments: HDFS-11580-HDFS-7240.001.patch, 
> HDFS-11580-HDFS-7240.002.patch, HDFS-11580-HDFS-7240.003.patch, 
> HDFS-11580-HDFS-7240.004.patch, HDFS-11580-HDFS-7240.005.patch, 
> HDFS-11580-HDFS-7240.006.patch, HDFS-11580-HDFS-7240.007.patch, 
> HDFS-11580-HDFS-7240.008.patch, HDFS-11580-HDFS-7240.009.patch, 
> HDFS-11580-HDFS-7240.010.patch, HDFS-11580-HDFS-7240.011.patch
>
>
> This is an umbrella JIRA for supporting a set of APIs in asynchronous form.
> The container (datanode) API currently supports a {{sendCommand}} call; we 
> need to build a proper programming interface and support an async variant.
> There is also a set of SCM APIs that clients can call; it would be nice to 
> support an async interface for those too.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12234) [SPS] Allow setting Xattr without SPS running.

2017-07-31 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-12234:


 Summary: [SPS] Allow setting Xattr without SPS running.
 Key: HDFS-12234
 URL: https://issues.apache.org/jira/browse/HDFS-12234
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-10285
Reporter: Lei (Eddy) Xu


As discussed in HDFS-10285, if this API is widely used by downstream projects 
(e.g., HBase), it should allow the client to call it without querying the 
running status of the SPS service; otherwise using the API becomes a great 
burden.

Given the constraints the SPS service has (i.e., it cannot run with Mover, and 
might be disabled by default), the API call should succeed as long as the 
related xattr is persisted. SPS can run later to catch up.





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12233) [SPS] Add API to unset SPS on a path

2017-07-31 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12233:
-
Summary: [SPS] Add API to unset SPS on a path  (was: Add API to unset SPS 
on a path)

> [SPS] Add API to unset SPS on a path
> 
>
> Key: HDFS-12233
> URL: https://issues.apache.org/jira/browse/HDFS-12233
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Lei (Eddy) Xu
>
> As discussed in HDFS-10285, we should allow users to unset SPS on a path.
> For example, a user might mistakenly set SPS on "/" and trigger a 
> significant amount of data movement. Unsetting SPS allows the user to fix 
> their own mistake.






[jira] [Updated] (HDFS-12049) Recommissioning live nodes stalls the NN

2017-07-31 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-12049:
--
Target Version/s: 2.9.0  (was: 2.8.2)

> Recommissioning live nodes stalls the NN
> 
>
> Key: HDFS-12049
> URL: https://issues.apache.org/jira/browse/HDFS-12049
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Priority: Critical
>
> A node refresh will recommission included nodes that are alive and in 
> decommissioning or decommissioned state.  The recommission will scan all 
> blocks on the node, find over-replicated blocks, choose an excess, and 
> queue an invalidation.
> The process is expensive and worsened by the overhead of storage types (even 
> when not in use).  It can be especially devastating because the write lock 
> is held for the entire node refresh.  _Recommissioning 67 nodes with ~500k 
> blocks/node stalled rpc services for over 4 mins._
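A common remedy for this kind of stall is to bound the work done per lock acquisition. The sketch below is illustrative only (the lock, names, and batch size are invented; this is not the Namenode's code): releasing and re-acquiring the lock between batches caps how long any single hold can block other RPCs.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: process a node's blocks in bounded batches, releasing the lock
// between batches so other operations can interleave.
class BatchedRefresh {
    // Returns the largest number of blocks processed under one lock hold.
    static int maxHeldPerAcquire(int totalBlocks, int batchSize) {
        ReentrantLock lock = new ReentrantLock(); // stand-in for the NN write lock
        int maxHeld = 0;
        for (int done = 0; done < totalBlocks; done += batchSize) {
            lock.lock();
            try {
                int work = Math.min(batchSize, totalBlocks - done);
                maxHeld = Math.max(maxHeld, work);
            } finally {
                lock.unlock(); // other RPCs can run here
            }
        }
        return maxHeld;
    }
}
```

With ~500k blocks per node, a batch size of, say, 1000 bounds each hold to 1000 blocks instead of the whole scan.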






[jira] [Commented] (HDFS-12049) Recommissioning live nodes stalls the NN

2017-07-31 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108222#comment-16108222
 ] 

Junping Du commented on HDFS-12049:
---

Moved. Thanks Daryn.

> Recommissioning live nodes stalls the NN
> 
>
> Key: HDFS-12049
> URL: https://issues.apache.org/jira/browse/HDFS-12049
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Priority: Critical
>
> A node refresh will recommission included nodes that are alive and in 
> decommissioning or decommissioned state.  The recommission will scan all 
> blocks on the node, find over-replicated blocks, choose an excess, and 
> queue an invalidation.
> The process is expensive and worsened by the overhead of storage types (even 
> when not in use).  It can be especially devastating because the write lock 
> is held for the entire node refresh.  _Recommissioning 67 nodes with ~500k 
> blocks/node stalled rpc services for over 4 mins._






[jira] [Created] (HDFS-12233) Add API to unset SPS on a path

2017-07-31 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-12233:


 Summary: Add API to unset SPS on a path
 Key: HDFS-12233
 URL: https://issues.apache.org/jira/browse/HDFS-12233
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Affects Versions: HDFS-10285
Reporter: Lei (Eddy) Xu


As discussed in HDFS-10285, we should allow users to unset SPS on a path.

For example, a user might mistakenly set SPS on "/" and trigger a significant 
amount of data movement. Unsetting SPS allows the user to fix their own 
mistake.






[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-07-31 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108218#comment-16108218
 ] 

Lei (Eddy) Xu commented on HDFS-10285:
--

Thanks for the reply, [~umamaheswararao]!

bq. If you feel things can be done even after merge, 

Yea, none of the above is blocking the merge; I will file related JIRAs under 
HDFS-12226.

> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, 
> HDFS-SPS-TestReport-20170708.pdf, 
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf, 
> Storage-Policy-Satisfier-in-HDFS-May10.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policies. 
> These policies can be set on a directory/file to specify the user's 
> preference for where to store the physical blocks. When the user sets the 
> storage policy before writing data, the blocks can take advantage of the 
> policy preferences and the physical blocks are stored accordingly.
> If the user sets the storage policy after writing and completing the file, 
> the blocks will already have been written with the default storage policy 
> (namely DISK). The user then has to run the Mover tool explicitly, 
> specifying all such file names as a list. In some distributed scenarios 
> (e.g., HBase) it is difficult to collect all the files and run the tool, 
> since different nodes can write files separately and files can have 
> different paths.
> Another scenario: when the user renames a file from a directory with one 
> effective storage policy (inherited from the parent directory) to a 
> directory with another, the inherited storage policy is not copied from the 
> source; the file takes effect under the destination parent's storage 
> policy. This rename operation is just a metadata change in the Namenode, 
> and the physical blocks still remain under the source storage policy.
> So, tracking all such files across distributed nodes (e.g., region servers) 
> and running the Mover tool can be difficult for admins. The proposal here 
> is to provide an API in the Namenode itself to trigger storage policy 
> satisfaction. A daemon thread inside the Namenode would track such calls 
> and send movement commands to the DNs.
> Will post the detailed design document soon.






[jira] [Updated] (HDFS-12217) HDFS snapshots doesn't capture all open files when one of the open files is deleted

2017-07-31 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-12217:
--
Attachment: HDFS-12217.04.patch

Thanks for the quick review [~jojochuang]. Attached v04 patch to address the 
following. Could you please take a look at the latest patch?

bq. LeaseManager#getINodeWithLeases() seems to be a test-only method. If so, it 
doesn't need public modifier (package private is sufficient) and we can also 
add a {{@VisibleForTesting }} annotation.
Done.

bq. On a separate note, and this is totally unrelated to this patch. It looks 
like LeaseManager#addLease assumes the inodeId is an id for an INodeFile, which 
makes perfect sense. 
Will take this up in a separate patch, as many callers need to be refactored 
for this.

bq. Note that in your patch, DirectorySnapshottableFeature#addSnapshot would 
capture exception, but doesn't log the exception. I also think that in addition 
to the snapshot name, you should print the snapshot root path for the error 
message.
Done.
-- Exception is already logged at LeaseManager#getINodeWithLeases.
-- Now added exception details to the SnapshotException message also.


> HDFS snapshots doesn't capture all open files when one of the open files is 
> deleted
> ---
>
> Key: HDFS-12217
> URL: https://issues.apache.org/jira/browse/HDFS-12217
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12217.01.patch, HDFS-12217.02.patch, 
> HDFS-12217.03.patch, HDFS-12217.04.patch
>
>
> With the fix for HDFS-11402, HDFS Snapshots can additionally capture all the 
> open files. Just like all other files, these open files in the snapshots will 
> remain immutable. But, sometimes it is found that snapshots fail to capture 
> all the open files in the system.
> Under the following conditions, LeaseManager will fail to find INode 
> corresponding to an active lease 
> * a file is opened for writing (LeaseManager allots a lease), and
> * the same file is deleted while it is still open for writing and having 
> active lease, and
> * the same file is not referenced in any other Snapshots/Trash
> {{INode[] LeaseManager#getINodesWithLease()}} can thus return null for a few 
> leases, thereby causing the caller to trip and not return all the open 
> files needed by the snapshot manager.
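The defensive behavior the report implies, skipping leases whose files were deleted rather than tripping over a null inode, might look roughly like this (the names and data model are invented for illustration; this is not the actual LeaseManager code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch: resolve leases to inodes, dropping leases whose file was deleted
// (which resolve to null) so the snapshot capture still sees all other
// open files.
class OpenFileCapture {
    static List<String> capture(List<String> leases, Map<String, String> inodeByLease) {
        List<String> open = new ArrayList<>();
        for (String lease : leases) {
            String inode = inodeByLease.get(lease); // null if the file was deleted
            if (inode != null) {
                open.add(inode);
            }
        }
        return open;
    }
}
```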






[jira] [Updated] (HDFS-11948) Ozone: change TestRatisManager to check cluster with data

2017-07-31 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-11948:
---
Attachment: HDFS-11948-HDFS-7240.20170731.patch

> Ozone: change TestRatisManager to check cluster with data
> -
>
> Key: HDFS-11948
> URL: https://issues.apache.org/jira/browse/HDFS-11948
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: HDFS-11948-HDFS-7240.20170614.patch, 
> HDFS-11948-HDFS-7240.20170731.patch
>
>
> TestRatisManager first creates multiple Ratis clusters.  Then it changes the 
> membership and closes some clusters.  However, it does not test the clusters 
> with data.






[jira] [Commented] (HDFS-12228) [SPS]: Add storage policy satisfier related metrics

2017-07-31 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108212#comment-16108212
 ] 

Uma Maheswara Rao G commented on HDFS-12228:


Please consider [~eddyxu]'s suggestions below.
{quote}
And since this call essentially triggers a large async background task, should 
we put some logs here? Similarly, it'd be nice to have related JMX stats and 
some indications in web UI, to be easier to integrate with other systems.
{quote}

I think the logs can be improved even before the merge. [~eddyxu], could you 
point us to where you are looking to add more logging?

> [SPS]: Add storage policy satisfier related metrics
> ---
>
> Key: HDFS-12228
> URL: https://issues.apache.org/jira/browse/HDFS-12228
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Rakesh R
>
> This jira is to discuss and implement the metrics needed for the SPS feature.
> Below are a few metrics:
> # count of {{inprogress}} block movements
> # count of {{successful}} block movements
> # count of {{failed}} block movements
> Need to analyse and add more.
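A minimal sketch of the three counters listed above (the class and method names are hypothetical, not the eventual Hadoop metrics API):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: thread-safe counters for in-progress, successful, and failed
// block movements.
class SpsMetrics {
    private final AtomicLong inProgress = new AtomicLong();
    private final AtomicLong successful = new AtomicLong();
    private final AtomicLong failed = new AtomicLong();

    void movementStarted() { inProgress.incrementAndGet(); }

    void movementFinished(boolean ok) {
        inProgress.decrementAndGet();
        (ok ? successful : failed).incrementAndGet();
    }

    long inProgress() { return inProgress.get(); }
    long successful() { return successful.get(); }
    long failed() { return failed.get(); }
}
```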






[jira] [Updated] (HDFS-11948) Ozone: change TestRatisManager to check cluster with data

2017-07-31 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-11948:
---
Attachment: (was: HDFS-11948-HDFS-7240.20170731.patch)

> Ozone: change TestRatisManager to check cluster with data
> -
>
> Key: HDFS-11948
> URL: https://issues.apache.org/jira/browse/HDFS-11948
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: HDFS-11948-HDFS-7240.20170614.patch
>
>
> TestRatisManager first creates multiple Ratis clusters.  Then it changes the 
> membership and closes some clusters.  However, it does not test the clusters 
> with data.






[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-07-31 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108210#comment-16108210
 ] 

Uma Maheswara Rao G commented on HDFS-10285:


Hi [~eddyxu], Thank you for the review.
Here are my replies.

{quote}
Non-recursively set xattr. Please kindly re-consider to use recursive async 
call. If the use cases are mostly targeted to the downstream projects like 
HBase and etc., the chance of these projects mistakenly call 
satisfyStoragePolicy on wrong directory (i.e., "/") is rare, but it will make 
the projects to manage large / deep namespace difficult, i.e., hbase needs to 
iterate the namespace itself and calls the same amount of "setXattr" anyway 
(because the # of files to move is the same). Similar to "rm -rf /", while it 
is bad that "rm" allows to do it, but IMO it should not prevent users / 
applications to use "rm -rf" in a sensible way.
{quote}
Thank you for providing feedback and exposing pain points from a user's 
standpoint. At this point, recursive seems the more helpful option to 
consider, given the feedback from you and Andrew. We will work on this item. 

{quote}
The newly added public void removeXattr(long id, String xattrName). While its 
name seems very generic, it seems only allow taking sps xattr as legit 
parameter. Should we demote it from public API in Namesystem?
{quote}
This was intentional. Since Namesystem is a generic interface between the BM 
and FSNamesystem, the API name can be generic in case it is useful for other 
purposes; any xattr can be passed to this API to remove it. It may not be 
good to add more specific APIs to it. 

{quote}
Would it make sense to have an admin command to unset SPS on a path? For an 
user to undo his own mistake.
{quote}
Makes sense to consider it. Would you mind filing a JIRA under HDFS-12226?

{quote}
FSNamesystem#satisfyStoragePolicy. Is this only setting xattr? Can we do the 
setting xattr part without SPS running? I was thinking the scenarios that: some 
downstream projects (i.e., hbase) start to routinely use this API, while for 
some reason (i.e., mover is running or cluster misconfiguration), SPS is not 
running, should we still allow these projects to successfully call the 
satisfyStoragePolicy(), and allow SPS to catch up later on?
{quote}
Interesting point. Worth filing a JIRA for more discussion on this? There 
could be some risk: who will clean up that xattr if the admin never enables 
SPS? Maybe we should introduce self-expiry or something like that. We have 
created a follow-up JIRA, which is intended to improve the feature even after 
merging into trunk. If you feel these things can be done even after the 
merge, please file them under HDFS-12226.

{quote}
And since this call essentially triggers a large async background task, should 
we put some logs here? Similarly, it'd be nice to have related JMX stats and 
some indications in web UI, to be easier to integrate with other systems.
{quote}
Good suggestions. I will add this comment under the metrics JIRA, HDFS-12228, 
to track it.

Thank you for helping with the reviews.

> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, 
> HDFS-SPS-TestReport-20170708.pdf, 
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf, 
> Storage-Policy-Satisfier-in-HDFS-May10.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policies. 
> These policies can be set on a directory/file to specify the user's 
> preference for where to store the physical blocks. When the user sets the 
> storage policy before writing data, the blocks can take advantage of the 
> policy preferences and the physical blocks are stored accordingly.
> If the user sets the storage policy after writing and completing the file, 
> the blocks will already have been written with the default storage policy 
> (namely DISK). The user then has to run the Mover tool explicitly, 
> specifying all such file names as a list. In some distributed scenarios 
> (e.g., HBase) it is difficult to collect all the files and run the tool, 
> since different nodes can write files separately and files can have 
> different paths.
> Another scenario: when the user renames a file from a directory with one 
> effective storage policy (inherited from the parent directory) to a 
> directory with another, the inherited storage policy is not copied from the 
> source; the file takes effect under the destination parent's storage 
> policy. This rename operation is just a metadata change in 

[jira] [Updated] (HDFS-11948) Ozone: change TestRatisManager to check cluster with data

2017-07-31 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-11948:
---
Attachment: HDFS-11948-HDFS-7240.20170731.patch

HDFS-11948-HDFS-7240.20170731.patch: fixes test failures.

Note that this patch requires RATIS-96 and RATIS-97.

> Ozone: change TestRatisManager to check cluster with data
> -
>
> Key: HDFS-11948
> URL: https://issues.apache.org/jira/browse/HDFS-11948
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: HDFS-11948-HDFS-7240.20170614.patch, 
> HDFS-11948-HDFS-7240.20170731.patch
>
>
> TestRatisManager first creates multiple Ratis clusters.  Then it changes the 
> membership and closes some clusters.  However, it does not test the clusters 
> with data.






[jira] [Commented] (HDFS-11984) Ozone: Ensures listKey lists all required key fields

2017-07-31 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108206#comment-16108206
 ] 

Weiwei Yang commented on HDFS-11984:


Oops, the v3 patch doesn't apply any more; could you please submit a new 
patch for this? Thanks [~linyiqun].

> Ozone: Ensures listKey lists all required key fields
> 
>
> Key: HDFS-11984
> URL: https://issues.apache.org/jira/browse/HDFS-11984
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Yiqun Lin
> Attachments: HDFS-11984-HDFS-7240.001.patch, 
> HDFS-11984-HDFS-7240.002.patch, HDFS-11984-HDFS-7240.003.patch
>
>
> HDFS-11782 implements the listKey operation but only lists the basic key 
> fields; we need to make sure it returns all the required fields:
> # version
> # md5hash
> # createdOn
> # size
> # keyName
> This task depends on the work of HDFS-11886. See more discussion [here | 
> https://issues.apache.org/jira/browse/HDFS-11782?focusedCommentId=16045562&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16045562].






[jira] [Commented] (HDFS-12072) Provide fairness between EC and non-EC recovery tasks.

2017-07-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108203#comment-16108203
 ] 

Hadoop QA commented on HDFS-12072:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 74 unchanged - 0 fixed = 77 total (was 74) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestHeartbeatHandling 
|
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDecommissionWithStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12072 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879737/HDFS-12072.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 74b6334de6ab 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2be9412 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20506/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20506/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20506/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20506/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Issue Comment Deleted] (HDFS-12154) Incorrect javadoc description in StorageLocationChecker#check

2017-07-31 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12154:
---
Comment: was deleted

(was: +1, pending on jenkins. Thanks [~nandakumar131] to fix this.)

> Incorrect javadoc description in StorageLocationChecker#check
> -
>
> Key: HDFS-12154
> URL: https://issues.apache.org/jira/browse/HDFS-12154
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Nandakumar
>Assignee: Nandakumar
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-12154.000.patch
>
>
> {{StorageLocationChecker#check}} returns a list of healthy volumes, but the 
> javadoc states that it returns failed volumes.
> {code}
> /**
>* Initiate a check of the supplied storage volumes and return
>* a list of failed volumes.
>*
>* StorageLocations are returned in the same order as the input
>* for compatibility with existing unit tests.
>*
>* @param conf HDFS configuration.
>* @param dataDirs list of volumes to check.
>* @return returns a list of failed volumes. Returns the empty list if
>* there are no failed volumes.
>*
>* @throws InterruptedException if the check was interrupted.
>* @throws IOException if the number of failed volumes exceeds the
>* maximum allowed or if there are no good
>* volumes.
>*/
> {code}
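A possible corrected version of the comment, describing what the method actually returns (a sketch; the exact wording committed under this JIRA may differ):

```java
/**
 * Initiate a check of the supplied storage volumes and return
 * the healthy volumes.
 *
 * StorageLocations are returned in the same order as the input
 * for compatibility with existing unit tests.
 *
 * @param conf HDFS configuration.
 * @param dataDirs list of volumes to check.
 * @return the list of healthy volumes.
 *
 * @throws InterruptedException if the check was interrupted.
 * @throws IOException if the number of failed volumes exceeds the
 *                     maximum allowed or if there are no good
 *                     volumes.
 */
```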






[jira] [Issue Comment Deleted] (HDFS-12154) Incorrect javadoc description in StorageLocationChecker#check

2017-07-31 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12154:
---
Comment: was deleted

(was: Looks good to me, +1, committing now.)

> Incorrect javadoc description in StorageLocationChecker#check
> -
>
> Key: HDFS-12154
> URL: https://issues.apache.org/jira/browse/HDFS-12154
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Nandakumar
>Assignee: Nandakumar
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-12154.000.patch
>
>
> {{StorageLocationChecker#check}} returns a list of healthy volumes, but the 
> javadoc states that it returns failed volumes.
> {code}
> /**
>* Initiate a check of the supplied storage volumes and return
>* a list of failed volumes.
>*
>* StorageLocations are returned in the same order as the input
>* for compatibility with existing unit tests.
>*
>* @param conf HDFS configuration.
>* @param dataDirs list of volumes to check.
>* @return returns a list of failed volumes. Returns the empty list if
>* there are no failed volumes.
>*
>* @throws InterruptedException if the check was interrupted.
>* @throws IOException if the number of failed volumes exceeds the
>* maximum allowed or if there are no good
>* volumes.
>*/
> {code}






[jira] [Commented] (HDFS-11984) Ozone: Ensures listKey lists all required key fields

2017-07-31 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108202#comment-16108202
 ] 

Weiwei Yang commented on HDFS-11984:


Thanks [~linyiqun] for resolving the remaining issues, looks good to me now. I 
will commit this shortly.

> Ozone: Ensures listKey lists all required key fields
> 
>
> Key: HDFS-11984
> URL: https://issues.apache.org/jira/browse/HDFS-11984
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Yiqun Lin
> Attachments: HDFS-11984-HDFS-7240.001.patch, 
> HDFS-11984-HDFS-7240.002.patch, HDFS-11984-HDFS-7240.003.patch
>
>
> HDFS-11782 implements the listKey operation but only lists the basic key 
> fields; we need to make sure it returns all the required fields:
> # version
> # md5hash
> # createdOn
> # size
> # keyName
> This task depends on the work of HDFS-11886. See more discussion [here | 
> https://issues.apache.org/jira/browse/HDFS-11782?focusedCommentId=16045562&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16045562].






[jira] [Commented] (HDFS-12154) Incorrect javadoc description in StorageLocationChecker#check

2017-07-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108198#comment-16108198
 ] 

Hudson commented on HDFS-12154:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12084 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12084/])
HDFS-12154. Incorrect javadoc description in (arp: rev 
ea568123fa76e4683d355a67be01b730d0c11068)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/StorageLocationChecker.java


> Incorrect javadoc description in StorageLocationChecker#check
> -
>
> Key: HDFS-12154
> URL: https://issues.apache.org/jira/browse/HDFS-12154
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Nandakumar
>Assignee: Nandakumar
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-12154.000.patch
>
>
> {{StorageLocationChecker#check}} returns list of healthy volumes, but javadoc 
> states that it returns failed volumes.
> {code}
> /**
>* Initiate a check of the supplied storage volumes and return
>* a list of failed volumes.
>*
>* StorageLocations are returned in the same order as the input
>* for compatibility with existing unit tests.
>*
>* @param conf HDFS configuration.
>* @param dataDirs list of volumes to check.
>* @return returns a list of failed volumes. Returns the empty list if
>* there are no failed volumes.
>*
>* @throws InterruptedException if the check was interrupted.
>* @throws IOException if the number of failed volumes exceeds the
>* maximum allowed or if there are no good
>* volumes.
>*/
> {code}
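For reference, this is roughly what the corrected javadoc should state: the method returns the healthy volumes, not the failed ones. A self-contained sketch with illustrative names (the committed wording and signature may differ):

```java
// Sketch only: illustrates the corrected contract of check(), which
// returns the volumes that passed the check, in input order.
class CheckDocSketch {
  /**
   * Initiate a check of the supplied storage volumes and return
   * a list of healthy volumes, in the same order as the input.
   *
   * @return the healthy volumes; empty if no volume passed the check.
   */
  static java.util.List<String> check(java.util.List<String> dirs,
                                      java.util.List<String> failed) {
    java.util.List<String> healthy = new java.util.ArrayList<>(dirs);
    healthy.removeAll(failed);
    return healthy;
  }
}
```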






[jira] [Commented] (HDFS-12209) VolumeScanner scan cursor not save periodic

2017-07-31 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108184#comment-16108184
 ] 

Chen Liang commented on HDFS-12209:
---

Thanks [~fatkun] for reporting this! Although I think it would be slightly 
better to make the change the other way, i.e. change {{Time.now()}} to 
{{Time.monotonicNow()}} in {{FsVolumeImpl}}.

> VolumeScanner scan cursor not save periodic
> ---
>
> Key: HDFS-12209
> URL: https://issues.apache.org/jira/browse/HDFS-12209
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.0
> Environment: cdh5.4.0
>Reporter: fatkun
> Attachments: HDFS-12209.patch
>
>
> The bug was introduced by HDFS-7430: the two timestamps do not share a time 
> base; one is monotonic (monotonicMs) and the other is wall-clock time. Both should use Time.now().
> VolumeScanner.java
> {code:java}
> long saveDelta = monotonicMs - curBlockIter.getLastSavedMs();
> if (saveDelta >= conf.cursorSaveMs) {
>   LOG.debug("{}: saving block iterator {} after {} ms.",
>   this, curBlockIter, saveDelta);
>   saveBlockIterator(curBlockIter);
> }
> {code}
> curBlockIter.getLastSavedMs() init here
> FsVolumeImpl.java
> {code:java}
> BlockIteratorState() {
>   lastSavedMs = iterStartMs = Time.now();
>   curFinalizedDir = null;
>   curFinalizedSubDir = null;
>   curEntry = null;
>   atEnd = false;
> }
> {code}
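Because {{lastSavedMs}} is initialized from the wall clock while the scanner computes its delta against the monotonic clock, the save condition compares values from two different time bases. A deterministic sketch of the bug and the fix, using hypothetical names and hard-coded clock values (not the actual VolumeScanner code):

```java
// Sketch: mixing a wall-clock "last saved" timestamp with a monotonic
// "now" produces a bogus delta, so the cursor-save condition misfires.
class CursorSaveSketch {
  // Fixed pretend clock readings to keep the example deterministic.
  static final long WALL_NOW = 1_501_000_000_000L; // Time.now() style (epoch ms)
  static final long MONOTONIC_NOW = 120_000L;      // Time.monotonicNow() style

  // Cursor was saved 5 seconds ago, recorded on each clock's own base.
  static final long LAST_SAVED_WALL_MS = WALL_NOW - 5_000;
  static final long LAST_SAVED_MONOTONIC_MS = MONOTONIC_NOW - 5_000;

  // Buggy: monotonic "now" minus wall-clock "last saved".
  static long mixedDelta() {
    return MONOTONIC_NOW - LAST_SAVED_WALL_MS;
  }

  // Fixed: both readings come from the same time base.
  static long consistentDelta() {
    return MONOTONIC_NOW - LAST_SAVED_MONOTONIC_MS;
  }
}
```

With a consistent base the delta is the expected 5 seconds; the mixed-base delta is a huge negative number, so a check like {{saveDelta >= conf.cursorSaveMs}} essentially never fires and the cursor is never saved periodically.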






[jira] [Updated] (HDFS-12154) Incorrect javadoc description in StorageLocationChecker#check

2017-07-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12154:
-
Fix Version/s: (was: HDFS-7240)
   3.0.0-beta1
   2.9.0

Hi [~nanda], thanks for catching this.

Committed to trunk and branch-2 since the same issue exists in those branches.

> Incorrect javadoc description in StorageLocationChecker#check
> -
>
> Key: HDFS-12154
> URL: https://issues.apache.org/jira/browse/HDFS-12154
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Trivial
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-12154.000.patch
>
>
> {{StorageLocationChecker#check}} returns a list of healthy volumes, but the 
> javadoc states that it returns failed volumes.
> {code}
> /**
>* Initiate a check of the supplied storage volumes and return
>* a list of failed volumes.
>*
>* StorageLocations are returned in the same order as the input
>* for compatibility with existing unit tests.
>*
>* @param conf HDFS configuration.
>* @param dataDirs list of volumes to check.
>* @return returns a list of failed volumes. Returns the empty list if
>* there are no failed volumes.
>*
>* @throws InterruptedException if the check was interrupted.
>* @throws IOException if the number of failed volumes exceeds the
>* maximum allowed or if there are no good
>* volumes.
>*/
> {code}






[jira] [Updated] (HDFS-12154) Incorrect javadoc description in StorageLocationChecker#check

2017-07-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12154:
-
Priority: Major  (was: Trivial)

> Incorrect javadoc description in StorageLocationChecker#check
> -
>
> Key: HDFS-12154
> URL: https://issues.apache.org/jira/browse/HDFS-12154
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Nandakumar
>Assignee: Nandakumar
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-12154.000.patch
>
>
> {{StorageLocationChecker#check}} returns a list of healthy volumes, but the 
> javadoc states that it returns failed volumes.
> {code}
> /**
>* Initiate a check of the supplied storage volumes and return
>* a list of failed volumes.
>*
>* StorageLocations are returned in the same order as the input
>* for compatibility with existing unit tests.
>*
>* @param conf HDFS configuration.
>* @param dataDirs list of volumes to check.
>* @return returns a list of failed volumes. Returns the empty list if
>* there are no failed volumes.
>*
>* @throws InterruptedException if the check was interrupted.
>* @throws IOException if the number of failed volumes exceeds the
>* maximum allowed or if there are no good
>* volumes.
>*/
> {code}






[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-07-31 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108182#comment-16108182
 ] 

Uma Maheswara Rao G commented on HDFS-10285:


Hi [~andrew.wang], thank you so much for the thorough review.
Please find my replies below.

{quote}
For the automatic usecase, I agree that metrics are probably the best we can 
do. However, the API exposed here is for interactive usecases (e.g. a user 
calling the shell command and polling until it's done). I think we need to do 
more here to expose the status.
Even for the HBase usecase, it'd still want to know about satisfier status so 
it can bubble it up to an HBase admin.
{quote}
We have already filed a JIRA for this: HDFS-12228. 
Sure, we will think more about the status reporting part; I will file a 
ticket to track this as well. A quick question on your example above, “a 
user calling the shell command and polling until it's done”: do you mean the 
command should block by polling internally, or that the user will call a status 
check periodically? And how long should the server hold the status?

{quote}
Can this be addressed by throttling? I think the SPS operations aren't too 
different from decommissioning, since they're both doing block placement and 
tracking data movement, and the decom throttles work okay.
We've also encountered directories with millions of files before, so there's a 
need for throttles anyway. Maybe we can do something generic here that can be 
shared with HDFS-10899.

Re-encryption will be faster than SPS, but it's not fast since it needs to talk 
to the KMS. Xiao's benchmarks indicate that a re-encrypt operation will likely 
run for hours. On the upside, the benchmarks also show that scanning through an 
already-re-encrypted zone is quite fast (seconds). I expect it'll be similarly 
fast for SPS if a user submits subdir or duplicate requests. Would be good to 
benchmark this.
I also don't understand the aversion to FIFO execution. It reduces code 
complexity and is easy for admins to reason about. If we want to do something 
more fancy, there should be a broader question around the API for resource 
management. Is it fair share, priorities, limits, some combination? What are 
these applied to (users, files, directories, queues with ACLs)?
{quote}

Throttling is one of the tasks we have already filed, as HDFS-12227. That one 
focuses on DN-level throttling; I will update it to consider NN throttling as 
well.
For now, I think the FIFO model is one way to go: each directory root is the 
main element picked first, and a subdirectory eventually gets the next priority 
if the user calls on it while a higher directory is already in progress. 

{quote}
What's the total SPS work timeout in minutes? The node is declared dead after 
10.5 minutes, but if the network partition is shorter than that, it won't need 
to re-register. 5 mins also seems kind of long for an IN_PROGRESS update, since 
it should take a few seconds for each block movement.
Also, we can't depend on re-registration with NN for fencing the old C-DN, 
since there could be a network partition that is just between the NN and old 
C-DN, and the old C-DN can still talk to other DNs. I don't know how this 
affects correctness, but having multiple C-DNs makes debugging harder.
{quote}
 Even if the old C-DN keeps working with other DNs to transfer blocks (a 
scenario that should be rare), the target DNs will accept only one copy of each 
block. Whichever DN transfers a block first wins; the others will get a "block 
already exists" exception. Since the NN is tracking the blocks associated with 
that file, it has to remove its tracking element. Example: in the worst case, 
the old C-DN completes all movements successfully, so the new C-DN's attempts 
fail and the NN receives a failure result from the new C-DN. When the NN 
retries, the blocks will already have been satisfied by the old C-DN, so the NN 
simply ignores the failure and removes the xattr as finished. We send 
IN_PROGRESS to tell the NN that the DN is still working on it; this should be a 
very rare condition, as a DN will usually transfer blocks faster than that. It 
is there to confirm the DN is running.  
Right now a file element is retried after the self-retry timeout. That applies 
only to the failure case where the C-DN has reported nothing at all (dead, or 
out of the network), and only to the files assigned to that C-DN. 
The self-retry timeout is currently configured as 20 minutes and can be tuned 
down to >10 minutes. [ We have made this configurable like 
PendingReplicationMonitor, which reassigns to the LowReconstructionBlocks list 
approximately within 10 minutes. ] 
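The first-writer-wins fencing and idempotent-retry behaviour described above can be sketched with a small tracking map (hypothetical names; not the actual SPS code): a result reporting that blocks were already satisfied simply clears the tracking entry, and reports for untracked files are ignored.

```java
// Sketch of idempotent result handling: if an old C-DN already moved the
// blocks, the retry's "already satisfied" report just clears the
// tracking entry instead of being treated as a new failure.
class SpsTrackerSketch {
  private final java.util.Map<Long, Boolean> pending = new java.util.HashMap<>();

  void track(long inodeId) { pending.put(inodeId, Boolean.TRUE); }

  // Called when a coordinator DN reports a result for a file.
  void onResult(long inodeId, boolean blocksAlreadySatisfied) {
    if (!pending.containsKey(inodeId)) {
      return; // stale report from a fenced C-DN; ignore it
    }
    if (blocksAlreadySatisfied) {
      pending.remove(inodeId); // done: remove xattr / stop tracking
    }
    // otherwise leave it pending so the self-retry timeout re-queues it
  }

  boolean isTracked(long inodeId) { return pending.containsKey(inodeId); }
}
```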

{quote}
Even assuming we do the xattr optimization, I believe the NN still has a queue 
of pending work items so they can be retried if the C-DNs fail. How many items 
might be in this queue, for a large SPS request? Is it throttled?
{quote}
The pending queue size depends on the number of files that the C-DN failed to 
move. The queue contains the InodeIds of the files, 

[jira] [Commented] (HDFS-12072) Provide fairness between EC and non-EC recovery tasks.

2017-07-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108165#comment-16108165
 ] 

Andrew Wang commented on HDFS-12072:


Thanks for working on this Eddy! A few comments from looking at the patch:

* Prefer if we rename instances of "ReplicateTask" to "ReplicationTask"
* One idea to avoid polling twice: since we know the size of each queue 
(getNumberOfBlocksToBeReplicated and getNumberOfBlocksToBeErasureCoded), we can 
compute a ratio and then multiply.
* Adding a slightly longer explanatory comment would also be good
* Nit: There's also an unused import in the test which will probably be flagged 
in checkstyle.
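The ratio suggestion could look roughly like this (a hypothetical helper, not the actual DatanodeManager code): split the transfer budget between the two queues in proportion to their backlogs so neither one starves.

```java
// Sketch: proportionally split a transfer budget between the replication
// and EC-reconstruction backlogs so neither queue is starved.
class FairSplitSketch {
  // Returns {replicationShare, ecShare}; the rounding remainder is
  // offered to the EC side, which starves under drain-replication-first.
  static int[] split(int maxTransfers, int replicationBacklog, int ecBacklog) {
    int total = replicationBacklog + ecBacklog;
    if (total == 0) {
      return new int[] {0, 0};
    }
    int repl = (int) ((long) maxTransfers * replicationBacklog / total);
    repl = Math.min(repl, replicationBacklog);
    int ec = Math.min(maxTransfers - repl, ecBacklog);
    return new int[] {repl, ec};
  }
}
```

With backlogs of 30 replication and 10 EC tasks and a budget of 10, this hands out 7 and 3 instead of 10 and 0.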

> Provide fairness between EC and non-EC recovery tasks.
> --
>
> Key: HDFS-12072
> URL: https://issues.apache.org/jira/browse/HDFS-12072
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12072.00.patch
>
>
> In {{DatanodeManager#handleHeartbeat}}, it takes up to {{maxTransfers}} 
> reconstruction tasks for non-EC; then, if the request cannot be fulfilled, 
> it takes more tasks from the EC reconstruction queue.
> {code}
> List<BlockTargetPair> pendingList = nodeinfo.getReplicationCommand(
> maxTransfers);
> if (pendingList != null) {
>   cmds.add(new BlockCommand(DatanodeProtocol.DNA_TRANSFER, blockPoolId,
>   pendingList));
>   maxTransfers -= pendingList.size();
> }
> // check pending erasure coding tasks
> List<BlockECReconstructionInfo> pendingECList = nodeinfo
> .getErasureCodeCommand(maxTransfers);
> if (pendingECList != null) {
>   cmds.add(new BlockECReconstructionCommand(
>   DNA_ERASURE_CODING_RECONSTRUCTION, pendingECList));
> }
> {code}
> So on a large cluster, if there is constantly a large number of non-EC 
> reconstruction tasks, EC reconstruction tasks never get a chance to run.






[jira] [Commented] (HDFS-12217) HDFS snapshots doesn't capture all open files when one of the open files is deleted

2017-07-31 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108147#comment-16108147
 ] 

Wei-Chiu Chuang commented on HDFS-12217:


Thanks [~manojg] [~yzhangal].

One nit I found is that {{LeaseManager#getINodeWithLeases()}} seems to be a 
test-only method. If so, it doesn't need the public modifier (package-private is 
sufficient) and we can also add a {{@VisibleForTesting}} annotation.

On a separate note, and this is totally unrelated to this patch. It looks like 
{{LeaseManager#addLease}} assumes the inodeId is an id for an INodeFile, which 
makes perfect sense. The member variable {{leasesById}} also implicitly makes 
the assumption that the id is for an INodeFile. However, there's no assertion 
to ensure it is the case in the future. I wonder if it makes sense to add a new 
{{addLease(String, INodeFile)}} method to make sure only INodeFile is passed 
in. So it would go like:

{code}
Lease addLease(String holder, INodeFile inodeFile) {
>   return addLease(holder, inodeFile.getId());
}
private synchronized Lease addLease(String holder, long inodeId) {
 ...
}
{code}

Note that in your patch, {{DirectorySnapshottableFeature#addSnapshot}} 
catches the exception but doesn't log it. I also think that, in addition 
to the snapshot name, you should print the snapshot root path in the error 
message.
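A minimal sketch of the suggested error message (hypothetical helper name; the real code would log the exception via the class's logger rather than build a string):

```java
// Sketch: build the error message with both the snapshot name and the
// snapshot root path, and keep the cause so it can be logged in full.
class SnapshotErrorSketch {
  static String describeFailure(String snapshotName, String snapshotRoot,
                                Exception cause) {
    return "Failed to capture open files for snapshot '" + snapshotName
        + "' under root " + snapshotRoot + ": " + cause.getMessage();
  }
}
```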

> HDFS snapshots doesn't capture all open files when one of the open files is 
> deleted
> ---
>
> Key: HDFS-12217
> URL: https://issues.apache.org/jira/browse/HDFS-12217
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12217.01.patch, HDFS-12217.02.patch, 
> HDFS-12217.03.patch
>
>
> With the fix for HDFS-11402, HDFS Snapshots can additionally capture all the 
> open files. Just like all other files, these open files in the snapshots will 
> remain immutable. But, sometimes it is found that snapshots fail to capture 
> all the open files in the system.
> Under the following conditions, the LeaseManager will fail to find the INode 
> corresponding to an active lease: 
> * a file is opened for writing (LeaseManager allots a lease), and
> * the same file is deleted while it is still open for writing and having 
> active lease, and
> * the same file is not referenced in any other Snapshots/Trash
> {{INode[] LeaseManager#getINodesWithLease()}} can thus return null for a few 
> leases, thereby causing the caller to trip up and not return all the open 
> files needed by the snapshot manager.






[jira] [Deleted] (HDFS-11318) HDFS FSNamesystem LeaseManager.findPath BLOCK ALL FSNamesystem Ops

2017-07-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal deleted HDFS-11318:
-


> HDFS FSNamesystem LeaseManager.findPath BLOCK ALL FSNamesystem Ops
> --
>
> Key: HDFS-11318
> URL: https://issues.apache.org/jira/browse/HDFS-11318
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: CentOS6.5 Hadoop-2.2.0 
>Reporter: zhangyubiao
>Priority: Critical
>
> ```
> "IPC Server handler 69 on 8021" daemon prio=10 tid=0x7f0714c59000 
> nid=0x17a23 runnable [0x7eee3ec2f000]
>java.lang.Thread.State: RUNNABLE
> at org.apache.hadoop.hdfs.server.namenode.INode.compareTo(INode.java:641)
> at org.apache.hadoop.hdfs.server.namenode.INode.compareTo(INode.java:52)
> at 
> org.apache.hadoop.hdfs.util.ReadOnlyList$Util.binarySearch(ReadOnlyList.java:73)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.getChild(INodeDirectory.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodesInPath.resolve(INodesInPath.java:216)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.getLastINodeInPath(INodeDirectory.java:330)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.getLastINodeInPath(FSDirectory.java:1655)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.getINode(FSDirectory.java:1645)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Lease.findPath(LeaseManager.java:259)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Lease.access$300(LeaseManager.java:228)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager.findPath(LeaseManager.java:189)
> - locked <0x7ef67f8fe698> (a 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.closeFileCommitBlocks(FSNamesystem.java:4020)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.commitBlockSynchronization(FSNamesystem.java:3989)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.commitBlockSynchronization(NameNodeRpcServer.java:647)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.commitBlockSynchronization(DatanodeProtocolServerSideTranslatorPB.java:241)
> at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:24093)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
> ```






[jira] [Commented] (HDFS-11324) FSNamesystem LeaseManager findPath block FSNamesystem Ops

2017-07-31 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108145#comment-16108145
 ] 

Arpit Agarwal commented on HDFS-11324:
--

Deleted a bunch of unresolvable duplicates of this issue.

[~piaoyu zhang] please try to avoid spamming Jira with multiple duplicates of 
each bug.

> FSNamesystem LeaseManager findPath block FSNamesystem Ops
> -
>
> Key: HDFS-11324
> URL: https://issues.apache.org/jira/browse/HDFS-11324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
> Environment: CentOS6.5 Hadoop-2.2.0 
>Reporter: zhangyubiao
>Priority: Critical
>  Labels: HDFS
>
> LeaseManager findPath Cause all FSNamesystem Ops wait






[jira] [Deleted] (HDFS-11322) FSNamesystem LeaseManager findPath BLOCK ALL FSNamesystem Ops

2017-07-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal deleted HDFS-11322:
-


> FSNamesystem LeaseManager findPath BLOCK ALL FSNamesystem Ops
> -
>
> Key: HDFS-11322
> URL: https://issues.apache.org/jira/browse/HDFS-11322
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: CentOS6.5 Hadoop-2.2.0 
>Reporter: zhangyubiao
>Priority: Critical
>
> LeaseManager findPath Cause all FSNamesystem Ops wait






[jira] [Deleted] (HDFS-11321) FSNamesystem LeaseManager findPath BLOCK ALL FSNamesystem Ops

2017-07-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal deleted HDFS-11321:
-


> FSNamesystem LeaseManager findPath BLOCK ALL FSNamesystem Ops
> -
>
> Key: HDFS-11321
> URL: https://issues.apache.org/jira/browse/HDFS-11321
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: CentOS6.5 Hadoop-2.2.0 
>Reporter: zhangyubiao
>Priority: Critical
>
> "IPC Server handler 69 on 8021" daemon prio=10 tid=0x7f0714c59000 
> nid=0x17a23 runnable [0x7eee3ec2f000]
>java.lang.Thread.State: RUNNABLE
> at org.apache.hadoop.hdfs.server.namenode.INode.compareTo(INode.java:641)
> at org.apache.hadoop.hdfs.server.namenode.INode.compareTo(INode.java:52)
> at 
> org.apache.hadoop.hdfs.util.ReadOnlyList$Util.binarySearch(ReadOnlyList.java:73)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.getChild(INodeDirectory.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodesInPath.resolve(INodesInPath.java:216)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.getLastINodeInPath(INodeDirectory.java:330)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.getLastINodeInPath(FSDirectory.java:1655)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.getINode(FSDirectory.java:1645)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Lease.findPath(LeaseManager.java:259)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Lease.access$300(LeaseManager.java:228)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager.findPath(LeaseManager.java:189)
> - locked <0x7ef67f8fe698> (a 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.closeFileCommitBlocks(FSNamesystem.java:4020)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.commitBlockSynchronization(FSNamesystem.java:3989)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.commitBlockSynchronization(NameNodeRpcServer.java:647)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.commitBlockSynchronization(DatanodeProtocolServerSideTranslatorPB.java:241)
> at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:24093)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)






[jira] [Deleted] (HDFS-11323) FSNamesystem LeaseManager findPath block FSNamesystem Ops

2017-07-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal deleted HDFS-11323:
-


> FSNamesystem LeaseManager findPath block FSNamesystem Ops
> -
>
> Key: HDFS-11323
> URL: https://issues.apache.org/jira/browse/HDFS-11323
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: CentOS6.5 Hadoop-2.2.0 
>Reporter: zhangyubiao
>Priority: Critical
>
> LeaseManager findPath Cause all FSNamesystem Ops wait






[jira] [Deleted] (HDFS-11320) FSNamesystem LeaseManager.findPath BLOCK ALL FSNamesystem Ops

2017-07-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal deleted HDFS-11320:
-


> FSNamesystem LeaseManager.findPath BLOCK ALL FSNamesystem Ops
> -
>
> Key: HDFS-11320
> URL: https://issues.apache.org/jira/browse/HDFS-11320
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: CentOS6.5 Hadoop-2.2.0 
>Reporter: zhangyubiao
>Priority: Critical
>
> "IPC Server handler 69 on 8021" daemon prio=10 tid=0x7f0714c59000 
> nid=0x17a23 runnable [0x7eee3ec2f000]
>java.lang.Thread.State: RUNNABLE
> at org.apache.hadoop.hdfs.server.namenode.INode.compareTo(INode.java:641)
> at org.apache.hadoop.hdfs.server.namenode.INode.compareTo(INode.java:52)
> at 
> org.apache.hadoop.hdfs.util.ReadOnlyList$Util.binarySearch(ReadOnlyList.java:73)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.getChild(INodeDirectory.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodesInPath.resolve(INodesInPath.java:216)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.getLastINodeInPath(INodeDirectory.java:330)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.getLastINodeInPath(FSDirectory.java:1655)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.getINode(FSDirectory.java:1645)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Lease.findPath(LeaseManager.java:259)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Lease.access$300(LeaseManager.java:228)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager.findPath(LeaseManager.java:189)
> - locked <0x7ef67f8fe698> (a 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.closeFileCommitBlocks(FSNamesystem.java:4020)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.commitBlockSynchronization(FSNamesystem.java:3989)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.commitBlockSynchronization(NameNodeRpcServer.java:647)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.commitBlockSynchronization(DatanodeProtocolServerSideTranslatorPB.java:241)
> at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:24093)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)






[jira] [Deleted] (HDFS-11319) HDFS FSNamesystem LeaseManager.findPath BLOCK ALL FSNamesystem Ops

2017-07-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal deleted HDFS-11319:
-


> HDFS FSNamesystem LeaseManager.findPath BLOCK ALL FSNamesystem Ops
> --
>
> Key: HDFS-11319
> URL: https://issues.apache.org/jira/browse/HDFS-11319
> Project: Hadoop HDFS
>  Issue Type: Sub-task
> Environment: CentOS6.5 Hadoop-2.2.0 
>Reporter: zhangyubiao
>Priority: Critical
>
> "IPC Server handler 69 on 8021" daemon prio=10 tid=0x7f0714c59000 
> nid=0x17a23 runnable [0x7eee3ec2f000]
>java.lang.Thread.State: RUNNABLE
> at org.apache.hadoop.hdfs.server.namenode.INode.compareTo(INode.java:641)
> at org.apache.hadoop.hdfs.server.namenode.INode.compareTo(INode.java:52)
> at 
> org.apache.hadoop.hdfs.util.ReadOnlyList$Util.binarySearch(ReadOnlyList.java:73)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.getChild(INodeDirectory.java:323)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodesInPath.resolve(INodesInPath.java:216)
> at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.getLastINodeInPath(INodeDirectory.java:330)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.getLastINodeInPath(FSDirectory.java:1655)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.getINode(FSDirectory.java:1645)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Lease.findPath(LeaseManager.java:259)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Lease.access$300(LeaseManager.java:228)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager.findPath(LeaseManager.java:189)
> - locked <0x7ef67f8fe698> (a 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.closeFileCommitBlocks(FSNamesystem.java:4020)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.commitBlockSynchronization(FSNamesystem.java:3989)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.commitBlockSynchronization(NameNodeRpcServer.java:647)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.commitBlockSynchronization(DatanodeProtocolServerSideTranslatorPB.java:241)
> at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:24093)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)






[jira] [Commented] (HDFS-11326) FSNamesystem closeFileCommitBlocks block FSNamesystem Ops

2017-07-31 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108140#comment-16108140
 ] 

Arpit Agarwal commented on HDFS-11326:
--

Deleted unresolvable duplicates of this Jira: HDFS-11325, 11329, 11330, 11331, 
and 11332.

> FSNamesystem closeFileCommitBlocks block FSNamesystem Ops
> -
>
> Key: HDFS-11326
> URL: https://issues.apache.org/jira/browse/HDFS-11326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
> Environment: CentOS 6.5 Hadoop-2.2.0 
>Reporter: zhangyubiao
>Priority: Critical
>
> Seems like {{String src = leaseManager.findPath(pendingFile);}} causes the 
> write lock to be held for too long.






[jira] [Deleted] (HDFS-11331) FSNamesystem closeFileCommitBlocks block all FSNamesystem Ops

2017-07-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal deleted HDFS-11331:
-


> FSNamesystem closeFileCommitBlocks block all FSNamesystem Ops
> -
>
> Key: HDFS-11331
> URL: https://issues.apache.org/jira/browse/HDFS-11331
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: CentOS6.5 Hadoop-2.2.0 
>Reporter: zhangyubiao
>
> FSNamesystem closeFileCommitBlocks block all FSNamesystem Ops
> Seems like leaseManager.findPath(pendingFile) holds the write lock for a long 
> time.






[jira] [Commented] (HDFS-12196) Ozone: DeleteKey-2: Implement container recycling service to delete stale blocks at background

2017-07-31 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108138#comment-16108138
 ] 

Anu Engineer commented on HDFS-12196:
-

bq. That absolutely makes sense. I have already created an abstract class 
AbstractRecyclingService which is a base class for all Recycling Services

I was suggesting something a little more ambitious: creating a common 
"background task" interface for the KSM, SCM, and Datanode. All such tasks, 
like the recycling task, the volume scanner, or garbage collection of 
incomplete objects, should run in that mode.

I do see that having a Recycling interface is nice, since it makes it easy to 
see how KSM recycling, SCM recycling, and the Datanode work all relate to each 
other. I would really love it if they were all part of a background task 
interface for Ozone.
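As an illustration of the idea (not part of any committed Ozone API; all names here are hypothetical), a shared background-task abstraction that both the recycling task and other periodic work could implement might look like:

```java
import java.util.concurrent.Callable;

// Hypothetical sketch of a common "background task" abstraction for KSM, SCM
// and Datanode services. Interface and class names are illustrative only.
public class BackgroundTaskSketch {

    /** A unit of background work with a scheduling priority. */
    interface BackgroundTask<T> extends Callable<T> {
        int getPriority();
    }

    /** A recycling task would be just one kind of background task. */
    static class RecyclingTask implements BackgroundTask<Integer> {
        private final int priority;

        RecyclingTask(int priority) {
            this.priority = priority;
        }

        @Override
        public int getPriority() {
            return priority;
        }

        @Override
        public Integer call() {
            // A real task would scan containers and delete stale blocks here.
            return priority;
        }
    }

    public static void main(String[] args) throws Exception {
        BackgroundTask<Integer> task = new RecyclingTask(3);
        System.out.println(task.getPriority()); // 3
        System.out.println(task.call());        // 3
    }
}
```

A scheduler owned by each service could then treat volume scanning, garbage collection, and recycling uniformly through this one interface.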


> Ozone: DeleteKey-2: Implement container recycling service to delete stale 
> blocks at background
> --
>
> Key: HDFS-12196
> URL: https://issues.apache.org/jira/browse/HDFS-12196
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12196-HDFS-7240.001.patch
>
>
> Implement a recycling service running on the datanode to delete stale blocks.
> The recycling service periodically scans each container for stale blocks and 
> deletes their chunks and references.
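A minimal sketch of the periodic scan loop described in the issue, using a plain ScheduledExecutorService (the scan body is a placeholder; the real service would iterate containers and delete stale chunks and references):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a datanode-side periodic scan; nothing here is the real Ozone
// implementation, just the scheduling shape it would take.
public class PeriodicScanSketch {

    /** Runs a placeholder "scan" every 10ms for the given duration. */
    static int runScans(long millis) throws InterruptedException {
        AtomicInteger scans = new AtomicInteger();
        ScheduledExecutorService exec =
            Executors.newSingleThreadScheduledExecutor();
        // A real scan would walk container metadata and delete stale blocks.
        exec.scheduleWithFixedDelay(
            scans::incrementAndGet, 0, 10, TimeUnit.MILLISECONDS);
        Thread.sleep(millis);
        exec.shutdownNow();
        return scans.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runScans(100) > 0); // true
    }
}
```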






[jira] [Deleted] (HDFS-11325) FSNamesystem closeFileCommitBlocks block FSNamesystem Ops

2017-07-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal deleted HDFS-11325:
-


> FSNamesystem closeFileCommitBlocks block FSNamesystem Ops
> -
>
> Key: HDFS-11325
> URL: https://issues.apache.org/jira/browse/HDFS-11325
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: CentOS 6.5 Hadoop-2.2.0 
>Reporter: zhangyubiao
>Priority: Critical
>
> Seems like {{String src = leaseManager.findPath(pendingFile);}} causes the 
> write lock to be held for too long.






[jira] [Deleted] (HDFS-11329) FSNamesystem closeFileCommitBlocks block all FSNamesystem Ops

2017-07-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal deleted HDFS-11329:
-


> FSNamesystem closeFileCommitBlocks block all FSNamesystem Ops
> -
>
> Key: HDFS-11329
> URL: https://issues.apache.org/jira/browse/HDFS-11329
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zhangyubiao
>Priority: Critical
>
> FSNamesystem closeFileCommitBlocks block all FSNamesystem Ops






[jira] [Deleted] (HDFS-11330) FSNamesystem closeFileCommitBlocks block all FSNamesystem Ops

2017-07-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal deleted HDFS-11330:
-


> FSNamesystem closeFileCommitBlocks block all FSNamesystem Ops
> -
>
> Key: HDFS-11330
> URL: https://issues.apache.org/jira/browse/HDFS-11330
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: CentOS 6.5 Hadoop-2.2.0 
>Reporter: zhangyubiao
>Priority: Critical
>
> FSNamesystem closeFileCommitBlocks block all FSNamesystem Ops






[jira] [Deleted] (HDFS-11332) FSNamesystem closeFileCommitBlocks block all FSNamesystem Ops

2017-07-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal deleted HDFS-11332:
-


> FSNamesystem closeFileCommitBlocks block all FSNamesystem Ops
> -
>
> Key: HDFS-11332
> URL: https://issues.apache.org/jira/browse/HDFS-11332
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: CentOS6.5 Hadoop-2.2.0 
>Reporter: zhangyubiao
>
> FSNamesystem closeFileCommitBlocks block all FSNamesystem Ops
> Seems like leaseManager.findPath(pendingFile) holds the write lock for a long 
> time.






[jira] [Updated] (HDFS-12034) Ozone: Web interface for KSM

2017-07-31 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12034:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~elek] Thank you for the contribution. [~aw] Thanks for the help with 
rat-check and for the very valuable comments on the license issue. I have 
committed this to the feature branch.


> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch, 
> HDFS-12034-HDFS-7240.002.patch, HDFS-12034-HDFS-7240.003.patch
>
>
> This is the counterpart of HDFS-12005, but for the web interface of the Ozone 
> KSM server. I created a separate issue so the required data/MXBeans can be 
> collected separately and the two web interfaces handled independently, one by 
> one.
> Required data (Work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web ui)
> * Available buckets (per volumes)
> * Available keys (per buckets)
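As an illustration only (names are hypothetical, not the committed KSM API), the data listed above could be exposed to the web UI and JMX through an MXBean-style interface:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical MXBean shape for the KSM web interface data: metrics counters
// plus volume listings. Names mirror the list in the issue but are not real.
public class KsmMXBeanSketch {

    public interface KSMMXBean {
        long getNumVolumeCreates();
        long getNumVolumeModifies();
        List<String> getVolumes();
    }

    // Trivial in-memory implementation, just to show the contract.
    static class StubBean implements KSMMXBean {
        public long getNumVolumeCreates() { return 2; }
        public long getNumVolumeModifies() { return 1; }
        public List<String> getVolumes() {
            return Arrays.asList("vol1", "vol2");
        }
    }

    public static void main(String[] args) {
        KSMMXBean bean = new StubBean();
        System.out.println(bean.getVolumes().size()); // 2
    }
}
```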






[jira] [Commented] (HDFS-11920) Ozone : add key partition

2017-07-31 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108125#comment-16108125
 ] 

Chen Liang commented on HDFS-11920:
---

Thanks [~xyao] for the comments! I will commit this shortly.

> Ozone : add key partition
> -
>
> Key: HDFS-11920
> URL: https://issues.apache.org/jira/browse/HDFS-11920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11920-HDFS-7240.001.patch, 
> HDFS-11920-HDFS-7240.002.patch, HDFS-11920-HDFS-7240.003.patch, 
> HDFS-11920-HDFS-7240.004.patch, HDFS-11920-HDFS-7240.005.patch, 
> HDFS-11920-HDFS-7240.006.patch, HDFS-11920-HDFS-7240.007.patch, 
> HDFS-11920-HDFS-7240.008.patch, HDFS-11920-HDFS-7240.009.patch, 
> HDFS-11920-HDFS-7240.010.patch
>
>
> Currently, each key corresponds to one single SCM block, and putKey/getKey 
> writes/reads this single SCM block. This works fine for keys with reasonably 
> small data sizes. However, if the data is too large (e.g. it does not even 
> fit into a single container), then we need to be able to partition the key 
> data into multiple blocks, each in its own container. This JIRA changes the 
> key-related classes to support this.
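To illustrate the partitioning described above (the {offset, length} pairs below stand in for real SCM block allocations; nothing here is the actual Ozone API), a key's data can be split into block-sized segments like this:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split keySize bytes of data into blockSize-sized partitions, one
// per container. Each segment is an {offset, length} pair.
public class KeyPartitionSketch {

    static List<long[]> partition(long keySize, long blockSize) {
        List<long[]> segments = new ArrayList<>();
        for (long off = 0; off < keySize; off += blockSize) {
            segments.add(new long[]{off, Math.min(blockSize, keySize - off)});
        }
        return segments;
    }

    public static void main(String[] args) {
        // A 10-byte key with 4-byte blocks needs 3 blocks: 4 + 4 + 2 bytes.
        System.out.println(partition(10, 4).size()); // 3
    }
}
```

putKey would then write each segment to its own block, and getKey would read the segments back in offset order.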






[jira] [Updated] (HDFS-11948) Ozone: change TestRatisManager to check cluster with data

2017-07-31 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-11948:
---
Status: Open  (was: Patch Available)

> Ozone: change TestRatisManager to check cluster with data
> -
>
> Key: HDFS-11948
> URL: https://issues.apache.org/jira/browse/HDFS-11948
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: HDFS-11948-HDFS-7240.20170614.patch
>
>
> TestRatisManager first creates multiple Ratis clusters.  Then it changes the 
> membership and closes some clusters.  However, it does not test the clusters 
> with data.






[jira] [Updated] (HDFS-11326) FSNamesystem closeFileCommitBlocks block FSNamesystem Ops

2017-07-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11326:
-
Hadoop Flags:   (was: Incompatible change)

> FSNamesystem closeFileCommitBlocks block FSNamesystem Ops
> -
>
> Key: HDFS-11326
> URL: https://issues.apache.org/jira/browse/HDFS-11326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
> Environment: CentOS 6.5 Hadoop-2.2.0 
>Reporter: zhangyubiao
>Priority: Critical
>
> Seems like {{String src = leaseManager.findPath(pendingFile);}} causes the 
> write lock to be held for too long.






[jira] [Updated] (HDFS-11920) Ozone : add key partition

2017-07-31 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-11920:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Ozone : add key partition
> -
>
> Key: HDFS-11920
> URL: https://issues.apache.org/jira/browse/HDFS-11920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11920-HDFS-7240.001.patch, 
> HDFS-11920-HDFS-7240.002.patch, HDFS-11920-HDFS-7240.003.patch, 
> HDFS-11920-HDFS-7240.004.patch, HDFS-11920-HDFS-7240.005.patch, 
> HDFS-11920-HDFS-7240.006.patch, HDFS-11920-HDFS-7240.007.patch, 
> HDFS-11920-HDFS-7240.008.patch, HDFS-11920-HDFS-7240.009.patch, 
> HDFS-11920-HDFS-7240.010.patch
>
>
> Currently, each key corresponds to one single SCM block, and putKey/getKey 
> writes/reads this single SCM block. This works fine for keys with reasonably 
> small data sizes. However, if the data is too large (e.g. it does not even 
> fit into a single container), then we need to be able to partition the key 
> data into multiple blocks, each in its own container. This JIRA changes the 
> key-related classes to support this.






[jira] [Updated] (HDFS-12072) Provide fairness between EC and non-EC recovery tasks.

2017-07-31 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12072:
-
Status: Patch Available  (was: Open)

> Provide fairness between EC and non-EC recovery tasks.
> --
>
> Key: HDFS-12072
> URL: https://issues.apache.org/jira/browse/HDFS-12072
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12072.00.patch
>
>
> In {{DatanodeManager#handleHeartbeat}}, it takes up to {{maxTransfers}} 
> reconstruction tasks for non-EC first; then, if the request cannot be 
> fulfilled, it takes more tasks from the EC reconstruction queue.
> {code}
> List<BlockTargetPair> pendingList = nodeinfo.getReplicationCommand(
>     maxTransfers);
> if (pendingList != null) {
>   cmds.add(new BlockCommand(DatanodeProtocol.DNA_TRANSFER, blockPoolId,
>       pendingList));
>   maxTransfers -= pendingList.size();
> }
> // check pending erasure coding tasks
> List<BlockECReconstructionInfo> pendingECList = nodeinfo
>     .getErasureCodeCommand(maxTransfers);
> if (pendingECList != null) {
>   cmds.add(new BlockECReconstructionCommand(
>       DNA_ERASURE_CODING_RECONSTRUCTION, pendingECList));
> }
> {code}
> So on a large cluster, if there is a constant, large number of non-EC 
> reconstruction tasks, EC reconstruction tasks never get a chance to run.






[jira] [Updated] (HDFS-12072) Provide fairness between EC and non-EC recovery tasks.

2017-07-31 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12072:
-
Attachment: HDFS-12072.00.patch

Updated the patch to take half of the {{maxTransfers}} tasks from each of the 
3x and EC pending recovery queues.
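As an illustration of that fairness policy (the lists below stand in for the DatanodeDescriptor queues; this is a sketch, not the actual patch), reserving half of maxTransfers for each queue prevents a steady stream of replication work from starving EC reconstruction:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the half/half split between 3x-replication and EC recovery
// queues. schedule() returns how many tasks each side was granted.
public class FairRecoverySketch {

    static List<String> take(List<String> queue, int n) {
        List<String> out =
            new ArrayList<>(queue.subList(0, Math.min(n, queue.size())));
        queue.subList(0, out.size()).clear();
        return out;
    }

    /** Returns {replicationTaken, ecTaken} under the half/half policy. */
    static int[] schedule(List<String> replQ, List<String> ecQ,
                          int maxTransfers) {
        int replLimit = maxTransfers / 2;
        List<String> repl = take(replQ, replLimit);
        // EC gets its half plus whatever replication did not use.
        List<String> ec = take(ecQ, maxTransfers - repl.size());
        return new int[]{repl.size(), ec.size()};
    }

    public static void main(String[] args) {
        List<String> replQ = new ArrayList<>();
        List<String> ecQ = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            replQ.add("r" + i);
            ecQ.add("e" + i);
        }
        int[] taken = schedule(replQ, ecQ, 10);
        System.out.println(taken[0] + "," + taken[1]); // 5,5
    }
}
```

When one queue is short, the other can still consume the full budget, so total throughput is unchanged.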

> Provide fairness between EC and non-EC recovery tasks.
> --
>
> Key: HDFS-12072
> URL: https://issues.apache.org/jira/browse/HDFS-12072
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12072.00.patch
>
>
> In {{DatanodeManager#handleHeartbeat}}, it takes up to {{maxTransfers}} 
> reconstruction tasks for non-EC first; then, if the request cannot be 
> fulfilled, it takes more tasks from the EC reconstruction queue.
> {code}
> List<BlockTargetPair> pendingList = nodeinfo.getReplicationCommand(
>     maxTransfers);
> if (pendingList != null) {
>   cmds.add(new BlockCommand(DatanodeProtocol.DNA_TRANSFER, blockPoolId,
>       pendingList));
>   maxTransfers -= pendingList.size();
> }
> // check pending erasure coding tasks
> List<BlockECReconstructionInfo> pendingECList = nodeinfo
>     .getErasureCodeCommand(maxTransfers);
> if (pendingECList != null) {
>   cmds.add(new BlockECReconstructionCommand(
>       DNA_ERASURE_CODING_RECONSTRUCTION, pendingECList));
> }
> {code}
> So on a large cluster, if there is a constant, large number of non-EC 
> reconstruction tasks, EC reconstruction tasks never get a chance to run.






[jira] [Commented] (HDFS-11920) Ozone : add key partition

2017-07-31 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108115#comment-16108115
 ] 

Xiaoyu Yao commented on HDFS-11920:
---

Thanks [~vagarychen] for updating the patch. Agreed that we can leave the 
delete part to the DeleteKey feature.
+1, given that the two Jenkins failures are known issues.

> Ozone : add key partition
> -
>
> Key: HDFS-11920
> URL: https://issues.apache.org/jira/browse/HDFS-11920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11920-HDFS-7240.001.patch, 
> HDFS-11920-HDFS-7240.002.patch, HDFS-11920-HDFS-7240.003.patch, 
> HDFS-11920-HDFS-7240.004.patch, HDFS-11920-HDFS-7240.005.patch, 
> HDFS-11920-HDFS-7240.006.patch, HDFS-11920-HDFS-7240.007.patch, 
> HDFS-11920-HDFS-7240.008.patch, HDFS-11920-HDFS-7240.009.patch, 
> HDFS-11920-HDFS-7240.010.patch
>
>
> Currently, each key corresponds to one single SCM block, and putKey/getKey 
> writes/reads this single SCM block. This works fine for keys with reasonably 
> small data sizes. However, if the data is too large (e.g. it does not even 
> fit into a single container), then we need to be able to partition the key 
> data into multiple blocks, each in its own container. This JIRA changes the 
> key-related classes to support this.






[jira] [Commented] (HDFS-11920) Ozone : add key partition

2017-07-31 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108099#comment-16108099
 ] 

Chen Liang commented on HDFS-11920:
---

Failed tests are unrelated.

> Ozone : add key partition
> -
>
> Key: HDFS-11920
> URL: https://issues.apache.org/jira/browse/HDFS-11920
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11920-HDFS-7240.001.patch, 
> HDFS-11920-HDFS-7240.002.patch, HDFS-11920-HDFS-7240.003.patch, 
> HDFS-11920-HDFS-7240.004.patch, HDFS-11920-HDFS-7240.005.patch, 
> HDFS-11920-HDFS-7240.006.patch, HDFS-11920-HDFS-7240.007.patch, 
> HDFS-11920-HDFS-7240.008.patch, HDFS-11920-HDFS-7240.009.patch, 
> HDFS-11920-HDFS-7240.010.patch
>
>
> Currently, each key corresponds to one single SCM block, and putKey/getKey 
> writes/reads this single SCM block. This works fine for keys with reasonably 
> small data sizes. However, if the data is too large (e.g. it does not even 
> fit into a single container), then we need to be able to partition the key 
> data into multiple blocks, each in its own container. This JIRA changes the 
> key-related classes to support this.






[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-31 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108101#comment-16108101
 ] 

Anu Engineer commented on HDFS-12034:
-

[~elek] Thanks for the update. I am glad we were able to resolve this issue. 

[~aw] Thank you very much for helping out on both the build issues and 
especially the legal stuff. Really appreciate your comments and help here. I 
will commit this to feature branch shortly.


> Ozone: Web interface for KSM
> 
>
> Key: HDFS-12034
> URL: https://issues.apache.org/jira/browse/HDFS-12034
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12034-HDFS-7240.001.patch, 
> HDFS-12034-HDFS-7240.002.patch, HDFS-12034-HDFS-7240.003.patch
>
>
> This is the counterpart of HDFS-12005, but for the web interface of the Ozone 
> KSM server. I created a separate issue so the required data/MXBeans can be 
> collected separately and the two web interfaces handled independently, one by 
> one.
> Required data (Work in progress):
> * KSMMetrics data (numVolumeCreates, numVolumeModifes)
> * Available volumes (similar to the file browser of the namenode web ui)
> * Available buckets (per volumes)
> * Available keys (per buckets)






[jira] [Commented] (HDFS-12162) Update listStatus document to describe the behavior when the argument is a file

2017-07-31 Thread Ajay Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108081#comment-16108081
 ] 

Ajay Yadav commented on HDFS-12162:
---

Hi [~yzhangal], are you referring to the WebHDFS documentation? Do you mean we 
should update the WebHDFS documentation to include an example for a file path?
Thanks!

> Update listStatus document to describe the behavior when the argument is a 
> file
> ---
>
> Key: HDFS-12162
> URL: https://issues.apache.org/jira/browse/HDFS-12162
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, httpfs
>Reporter: Yongjun Zhang
>Assignee: Ajay Yadav
>
> The listStatus method can take either a directory path or a file path as 
> input; however, currently both the javadoc and the external documentation 
> describe it as taking only a directory. This jira is to update the 
> documentation about the behavior when the argument is a file path.
> Thanks [~xiaochen] for the review and discussion in HDFS-12139; creating this 
> jira is the result of our discussion there.






[jira] [Assigned] (HDFS-12162) Update listStatus document to describe the behavior when the argument is a file

2017-07-31 Thread Ajay Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Yadav reassigned HDFS-12162:
-

Assignee: Ajay Yadav

> Update listStatus document to describe the behavior when the argument is a 
> file
> ---
>
> Key: HDFS-12162
> URL: https://issues.apache.org/jira/browse/HDFS-12162
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, httpfs
>Reporter: Yongjun Zhang
>Assignee: Ajay Yadav
>
> The listStatus method can take either a directory path or a file path as 
> input; however, currently both the javadoc and the external documentation 
> describe it as taking only a directory. This jira is to update the 
> documentation about the behavior when the argument is a file path.
> Thanks [~xiaochen] for the review and discussion in HDFS-12139; creating this 
> jira is the result of our discussion there.






[jira] [Resolved] (HDFS-11283) why should we not introduce distributed database to storage hdfs's metadata?

2017-07-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-11283.
--
Resolution: Fixed

Hi [~chenrongwei], the hdfs-dev mailing list is the right place for these 
questions. Resolving this.

> why should we not  introduce distributed database to storage hdfs's metadata?
> -
>
> Key: HDFS-11283
> URL: https://issues.apache.org/jira/browse/HDFS-11283
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: chenrongwei
>
> Why should we not introduce a distributed database to store HDFS's metadata?
> In my opinion, it may lose some performance, but it has the following 
> advantages:
> 1. It enhances the NN's scalability, e.g. the NN can support many more files 
> and blocks. The problem of massive numbers of small files always gives me a 
> headache.
> 2. Most MR clusters care less about the performance loss and more about the 
> cluster's scale.
> 3. The NN's HA implementation could be simpler and more reasonable.
> So I think maybe we should add a new mode of operation for the NN built on a 
> distributed database.






[jira] [Commented] (HDFS-11764) NPE when the GroupMappingServiceProvider has no group

2017-07-31 Thread runlinzhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108054#comment-16108054
 ] 

runlinzhang commented on HDFS-11764:


Thank you for your attention; the version is branch-2.7.2.

When the default GroupMappingServiceProvider implementation is 
JniBasedUnixGroupsMapping and the node does not have a group configured for 
the user, the group list comes back null; adding a null check avoids this 
problem.

> NPE when the GroupMappingServiceProvider has no group 
> --
>
> Key: HDFS-11764
> URL: https://issues.apache.org/jira/browse/HDFS-11764
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
>Reporter: runlinzhang
>Priority: Critical
> Fix For: 2.7.2
>
> Attachments: image.png
>
>
> The following code can throw an NPE if GroupMappingServiceProvider.getGroups() 
> returns null.
> {code}
> public List<String> load(String user) throws Exception {
>   List<String> groups = fetchGroupList(user);
>   if (groups.isEmpty()) {
>     if (isNegativeCacheEnabled()) {
>       negativeCache.add(user);
>     }
>     // We throw here to prevent Cache from retaining an empty group
>     throw noGroupsForUser(user);
>   }
>   return groups;
> }
> {code}
> See the attached image for an example.
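A hedged sketch of the null guard discussed above: treating a null result from the group mapping the same as an empty list makes load() fail with a clean "no groups" error instead of an NPE. fetchGroupList and the exception below are stand-ins for the Hadoop internals, not the real signatures.

```java
import java.util.Collections;
import java.util.List;

// Defensive version of the load() pattern; everything here is illustrative.
public class GroupLoadSketch {

    static List<String> fetchGroupList(String user) {
        return null; // simulate a provider with no group for the user
    }

    static List<String> load(String user) {
        List<String> groups = fetchGroupList(user);
        // Normalizing null to empty avoids the NPE on groups.isEmpty().
        if (groups == null || groups.isEmpty()) {
            throw new RuntimeException("No groups found for user " + user);
        }
        return Collections.unmodifiableList(groups);
    }

    public static void main(String[] args) {
        try {
            load("nobody");
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```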






[jira] [Commented] (HDFS-11580) Ozone: Support asynchronus client API for SCM and containers

2017-07-31 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108049#comment-16108049
 ] 

Chen Liang commented on HDFS-11580:
---

Failed tests are unrelated.

> Ozone: Support asynchronus client API for SCM and containers
> 
>
> Key: HDFS-11580
> URL: https://issues.apache.org/jira/browse/HDFS-11580
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Yiqun Lin
> Attachments: HDFS-11580-HDFS-7240.001.patch, 
> HDFS-11580-HDFS-7240.002.patch, HDFS-11580-HDFS-7240.003.patch, 
> HDFS-11580-HDFS-7240.004.patch, HDFS-11580-HDFS-7240.005.patch, 
> HDFS-11580-HDFS-7240.006.patch, HDFS-11580-HDFS-7240.007.patch, 
> HDFS-11580-HDFS-7240.008.patch, HDFS-11580-HDFS-7240.009.patch, 
> HDFS-11580-HDFS-7240.010.patch, HDFS-11580-HDFS-7240.011.patch
>
>
> This is an umbrella JIRA for supporting a set of APIs in asynchronous form.
> The container (datanode) API currently supports a single call, 
> {{sendCommand}}; we need to build a proper programming interface and support 
> an async variant.
> There is also a set of SCM APIs that clients can call; it would be nice to 
> support an async interface for those too.
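One common way to layer an async API over a blocking call like sendCommand is a CompletableFuture facade; this is only a sketch under that assumption, and the real Ozone client plumbing and sendCommand signature may differ.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: async facade over a blocking command call. The String request and
// response types stand in for the real protobuf messages.
public class AsyncClientSketch {
    final ExecutorService executor = Executors.newFixedThreadPool(4);

    // Stand-in for the existing blocking datanode call.
    String sendCommand(String request) {
        return "ack:" + request;
    }

    CompletableFuture<String> sendCommandAsync(String request) {
        return CompletableFuture.supplyAsync(() -> sendCommand(request),
            executor);
    }

    public static void main(String[] args) throws Exception {
        AsyncClientSketch client = new AsyncClientSketch();
        // Callers can block with get(), or chain with thenApply/thenAccept.
        System.out.println(client.sendCommandAsync("createContainer").get());
        client.executor.shutdown();
    }
}
```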






[jira] [Updated] (HDFS-12232) ConfiguredFailoverProxyProvider caches DNS results indefinitely

2017-07-31 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12232:
---
Summary: ConfiguredFailoverProxyProvider caches DNS results indefinitely  
(was: ConfiguredFailoverProxyProvider caches DNS results infinitely)

> ConfiguredFailoverProxyProvider caches DNS results indefinitely
> ---
>
> Key: HDFS-12232
> URL: https://issues.apache.org/jira/browse/HDFS-12232
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, hdfs
>Reporter: Haohui Mai
>
> {{ConfiguredFailoverProxyProvider}} resolves the IP addresses and stores them 
> in its internal state. As a result, migrating the standby NN to a different 
> machine requires restarting all HDFS clients.
> A better approach is to follow Java's policy for caching DNS names: 
> http://javaeesupportpatterns.blogspot.com/2011/03/java-dns-cache-reference-guide.html
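To sketch the re-resolution idea (the hostname below is a placeholder, and createUnresolved is used here only to avoid a live DNS lookup in the example): keep the hostname rather than the resolved IP, and build a fresh address on each failover attempt, since constructing a new InetSocketAddress from a host string triggers a new lookup subject to the JVM's networkaddress.cache.ttl policy.

```java
import java.net.InetSocketAddress;

// Sketch: re-resolve the NN address on failover instead of caching the
// resolved InetAddress forever.
public class ReresolveSketch {
    public static void main(String[] args) {
        InetSocketAddress cached =
            InetSocketAddress.createUnresolved("nn2.example.com", 8020);
        // On failover, rebuild from the host string; this constructor
        // attempts a fresh DNS resolution rather than reusing a stale IP.
        InetSocketAddress fresh =
            new InetSocketAddress(cached.getHostString(), cached.getPort());
        System.out.println(cached.getHostString() + ":" + fresh.getPort());
    }
}
```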






[jira] [Commented] (HDFS-12196) Ozone: DeleteKey-2: Implement container recycling service to delete stale blocks at background

2017-07-31 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108029#comment-16108029
 ] 

Chen Liang commented on HDFS-12196:
---

Thanks [~cheersyang] for the patch, it looks pretty good to me overall! Some 
initial comments:

1. Although {{RecyclingTask}} has a {{getPriority()}} method, it seems that 
only the implementing classes of {{AbstractRecyclingService}} (currently, 
{{ContainerRecyclingService}}) enforce the priority. So if more 
implementations of AbstractRecyclingService are added later, each of them will 
need to have this line:
{code}
PriorityQueue<RecyclingTask> tasks = new PriorityQueue<>(
    (task1, task2) -> task1.getPriority() - task2.getPriority());
{code}
Otherwise the priorities will be completely ignored. Can we somehow add the 
priority enforcement to the abstractions as well, e.g. an abstract 
priority-queue-backed base class?

2. I was wondering whether anything will actually read {{RecyclingResult}}. In 
other words, will there be cases where some logic checks the result of 
recycling tasks and takes action accordingly? The unit test seems to be the 
only consumer in this patch. If not, then having this abstraction only to make 
the unit test work seems a little bit of an overkill to me... This is also 
related to the result-caching part.

Also, I felt the term "recycling" here is not that informative, because it 
does not seem that we are recycling anything, but I can't think of a better 
alternative...
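To make the first point concrete (class names mirror the patch discussion but everything below is a hypothetical sketch, not the patch itself), the priority ordering can be hoisted into the abstract service so every subclass inherits it instead of re-creating the comparator:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch: priority enforcement lives in the abstract base, not in each
// concrete service.
public class PriorityServiceSketch {

    interface RecyclingTask {
        int getPriority();
        void run();
    }

    static abstract class AbstractRecyclingService {
        // Shared, priority-ordered task queue; subclasses only add tasks.
        protected final PriorityQueue<RecyclingTask> tasks = new PriorityQueue<>(
            Comparator.comparingInt(RecyclingTask::getPriority));

        void submit(RecyclingTask t) {
            tasks.add(t);
        }

        /** Runs the next task (lowest priority value first), if any. */
        Integer runNext() {
            RecyclingTask t = tasks.poll();
            if (t == null) {
                return null;
            }
            t.run();
            return t.getPriority();
        }
    }

    static class ContainerRecyclingService extends AbstractRecyclingService { }

    public static void main(String[] args) {
        ContainerRecyclingService svc = new ContainerRecyclingService();
        svc.submit(new RecyclingTask() {
            public int getPriority() { return 5; }
            public void run() { }
        });
        svc.submit(new RecyclingTask() {
            public int getPriority() { return 1; }
            public void run() { }
        });
        System.out.println(svc.runNext()); // 1
    }
}
```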

> Ozone: DeleteKey-2: Implement container recycling service to delete stale 
> blocks at background
> --
>
> Key: HDFS-12196
> URL: https://issues.apache.org/jira/browse/HDFS-12196
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12196-HDFS-7240.001.patch
>
>
> Implement a recycling service running on the datanode to delete stale 
> blocks. The recycling service scans stale blocks for each container and 
> deletes chunks and references periodically.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-07-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108020#comment-16108020
 ] 

Hadoop QA commented on HDFS-10899:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
32s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
45s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-hdfs-project: The patch generated 65 new 
+ 962 unchanged - 2 fixed = 1027 total (was 964) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10899 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879696/HDFS-10899.12.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux 2fa0ba68abf2 3.13.0-117-generic #164-Ubuntu SMP Fri 

[jira] [Commented] (HDFS-12151) Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes

2017-07-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108012#comment-16108012
 ] 

Andrew Wang commented on HDFS-12151:


Thanks for the rev Sean, LGTM. Could you address the checkstyles and confirm 
the failed tests are flakes? +1 pending that.

> Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
> 
>
> Key: HDFS-12151
> URL: https://issues.apache.org/jira/browse/HDFS-12151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rolling upgrades
>Affects Versions: 3.0.0-alpha4
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HDFS-12151.001.patch, HDFS-12151.002.patch, 
> HDFS-12151.003.patch, HDFS-12151.004.patch, HDFS-12151.005.patch, 
> HDFS-12151.006.patch
>
>
> Trying to write to a Hadoop 3 DataNode with a Hadoop 2 client currently 
> fails. On the client side it looks like this:
> {code}
> 17/07/14 13:31:58 INFO hdfs.DFSClient: Exception in 
> createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449){code}
> But on the DataNode side there's an ArrayOutOfBoundsException because there 
> aren't any targetStorageIds:
> {code}
> java.lang.ArrayIndexOutOfBoundsException: 0
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:815)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
> at java.lang.Thread.run(Thread.java:745){code}
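The quoted ArrayIndexOutOfBoundsException comes from reading 
targetStorageIds[0] when a Hadoop 2 client sends none. A backward-compatible 
receiver can treat an empty array as "not provided". A hypothetical, 
self-contained sketch, not the actual DataXceiver code:

```java
// Hypothetical guard: Hadoop 2 clients do not send targetStorageIds, so
// unconditionally indexing element 0 throws ArrayIndexOutOfBoundsException.
// Treating an empty or missing array as "no storage ID" restores
// compatibility with legacy clients.
public class StorageIdGuard {
    public static String firstStorageIdOrNull(String[] targetStorageIds) {
        if (targetStorageIds == null || targetStorageIds.length == 0) {
            return null; // legacy (Hadoop 2) client: no storage IDs sent
        }
        return targetStorageIds[0]; // Hadoop 3 client
    }

    public static void main(String[] args) {
        System.out.println(firstStorageIdOrNull(new String[0]));        // null
        System.out.println(firstStorageIdOrNull(new String[]{"DS-1"})); // DS-1
    }
}
```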






[jira] [Commented] (HDFS-12034) Ozone: Web interface for KSM

2017-07-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108004#comment-16108004
 ] 

Hadoop QA commented on HDFS-12034:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
20s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
35s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m  
6s{color} | {color:green} HDFS-7240 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
44s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  0s{color} | {color:orange} root: The patch generated 3 new + 0 unchanged - 
0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 38s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}122m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12034 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879693/HDFS-12034-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux ade669f5fbee 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 1f5353d |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (HDFS-11848) Enhance dfsadmin listOpenFiles command to list files under a given path

2017-07-31 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108000#comment-16108000
 ] 

Wei-Chiu Chuang commented on HDFS-11848:


+1 for the proposal. Other than a path filter, would we be interested in other 
types of filters, e.g. client name or machine?

> Enhance dfsadmin listOpenFiles command to list files under a given path
> ---
>
> Key: HDFS-11848
> URL: https://issues.apache.org/jira/browse/HDFS-11848
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Yiqun Lin
>
> HDFS-10480 adds a {{listOpenFiles}} option to the {{dfsadmin}} command to 
> list all the open files in the system.
> One more thing that would be nice here is to filter the output on a given 
> path or DataNode. Use case: an admin might already know a stale file by path 
> (perhaps from fsck's -openforwrite) and want to figure out who the lease 
> holder is. The proposal here is to add suboptions to {{listOpenFiles}} to 
> list files filtered by path.
> {{LeaseManager#getINodeWithLeases(INodeDirectory)}} can be used to get the 
> open file list for any given ancestor directory.






[jira] [Commented] (HDFS-11580) Ozone: Support asynchronus client API for SCM and containers

2017-07-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16108002#comment-16108002
 ] 

Hadoop QA commented on HDFS-11580:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.ozone.web.client.TestKeys |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11580 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879691/HDFS-11580-HDFS-7240.011.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1eab87e93add 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 1f5353d |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20502/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-11826) Federation Namenode Heartbeat

2017-07-31 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107990#comment-16107990
 ] 

Chris Douglas commented on HDFS-11826:
--

+1 looks straightforward.

Only one minor suggestion: exit with an error if the feature is enabled, but 
none of the heartbeat services are valid (i.e., {{createHeartbeatServices}} 
returns an empty map).
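The suggested fail-fast check could look roughly like the following 
self-contained sketch; the names mirror the comment and are illustrative, not 
the actual Router code:

```java
import java.util.Collections;
import java.util.Map;

// Hypothetical startup check: if the heartbeat feature is enabled but
// createHeartbeatServices() produced no valid service, abort instead of
// running silently with no heartbeats.
public class HeartbeatStartupCheck {
    public static void verifyHeartbeatServices(
            boolean heartbeatEnabled, Map<String, Object> services) {
        if (heartbeatEnabled && services.isEmpty()) {
            throw new IllegalStateException(
                "Heartbeat is enabled but no valid heartbeat service was created");
        }
    }

    public static void main(String[] args) {
        // Disabled feature: an empty map is fine.
        verifyHeartbeatServices(false, Collections.emptyMap());
        try {
            verifyHeartbeatServices(true, Collections.emptyMap());
        } catch (IllegalStateException e) {
            System.out.println("startup aborted: " + e.getMessage());
        }
    }
}
```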

> Federation Namenode Heartbeat
> -
>
> Key: HDFS-11826
> URL: https://issues.apache.org/jira/browse/HDFS-11826
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Fix For: HDFS-10467
>
> Attachments: HDFS-11826-HDFS-10467-000.patch
>
>
> Add a service to the Router to check the state of a Namenode and report it 
> into the State Store.






[jira] [Commented] (HDFS-11920) Ozone : add key partition

2017-07-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107979#comment-16107979
 ] 

Hadoop QA commented on HDFS-11920:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
12s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 2 
unchanged - 1 fixed = 2 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
|   | org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11920 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879676/HDFS-11920-HDFS-7240.010.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux 11276005d74d 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-12131) Add some of the FSNamesystem JMX values as metrics

2017-07-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107978#comment-16107978
 ] 

Andrew Wang commented on HDFS-12131:


Sure, let's do the deprecate+removal in separate JIRAs for clarity.

Some of the new metrics do not have unit test coverage. Could we add this?

> Add some of the FSNamesystem JMX values as metrics
> --
>
> Key: HDFS-12131
> URL: https://issues.apache.org/jira/browse/HDFS-12131
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-12131.000.patch, HDFS-12131.001.patch, 
> HDFS-12131.002.patch, HDFS-12131.002.patch, HDFS-12131.003.patch, 
> HDFS-12131.004.patch
>
>
> A number of useful numbers are emitted via the FSNamesystem JMX, but not 
> through the metrics system. These would be useful to be able to track over 
> time, e.g. to alert on via standard metrics systems or to view trends and 
> rate changes:
> * NumLiveDataNodes
> * NumDeadDataNodes
> * NumDecomLiveDataNodes
> * NumDecomDeadDataNodes
> * NumDecommissioningDataNodes
> * NumStaleStorages
> This is a simple change that just requires annotating the JMX methods with 
> {{@Metric}}.
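The annotation-driven approach in the description can be illustrated with a 
self-contained analogue; the {{Metric}} annotation and the reflective 
collector below are hypothetical stand-ins for Hadoop's metrics2 machinery, 
not its real API:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.Map;
import java.util.TreeMap;

// Stand-in annotation: marking a JMX-style getter as a metric is all the
// collector needs to discover it by reflection.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Metric { String value(); }

class FsNamesystemStats {
    @Metric("NumLiveDataNodes")
    public int getNumLiveDataNodes() { return 3; }

    public int getUnexported() { return 99; } // not annotated, not collected
}

public class MetricDemo {
    // Collect every annotated getter's value, keyed by metric name.
    public static Map<String, Object> collect(Object bean) {
        Map<String, Object> out = new TreeMap<>();
        for (Method m : bean.getClass().getMethods()) {
            Metric tag = m.getAnnotation(Metric.class);
            if (tag != null) {
                try {
                    out.put(tag.value(), m.invoke(bean));
                } catch (ReflectiveOperationException e) {
                    throw new RuntimeException(e);
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(collect(new FsNamesystemStats()));
        // prints {NumLiveDataNodes=3}
    }
}
```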






[jira] [Commented] (HDFS-11826) Federation Namenode Heartbeat

2017-07-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107974#comment-16107974
 ] 

Hadoop QA commented on HDFS-11826:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
17s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-10467 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
42s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-10467 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 11 new + 654 unchanged - 0 fixed = 665 total (was 654) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11826 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879683/HDFS-11826-HDFS-10467-000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9007297f11dc 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / fae1d1e |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20503/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20503/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20503/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20503/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20503/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (HDFS-12117) HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface

2017-07-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107940#comment-16107940
 ] 

Hadoop QA commented on HDFS-12117:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 28 new + 409 unchanged - 0 fixed = 437 total (was 409) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
7s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12117 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879704/HDFS-12117.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2b866646a431 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3e23415 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20505/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20505/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20505/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST 
> Interface
> ---
>
> Key: HDFS-12117
> URL: https://issues.apache.org/jira/browse/HDFS-12117
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 3.0.0-alpha3

[jira] [Updated] (HDFS-12117) HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface

2017-07-31 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-12117:

Status: In Progress  (was: Patch Available)

> HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST 
> Interface
> ---
>
> Key: HDFS-12117
> URL: https://issues.apache.org/jira/browse/HDFS-12117
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
> Attachments: HDFS-12117.003.patch, HDFS-12117.004.patch, 
> HDFS-12117.005.patch, HDFS-12117.006.patch, HDFS-12117.patch.01, 
> HDFS-12117.patch.02
>
>
> Currently, HttpFS is lacking implementation for SNAPSHOT related methods from 
> WebHDFS REST interface as defined by WebHDFS documentation [WebHDFS 
> documentation|https://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Snapshot_Operations]
> I would like to work on this implementation, following the existing design 
> approach already implemented by other WebHDFS methods on current HttpFS 
> project, so I'll be proposing an initial patch soon for reviews.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12117) HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface

2017-07-31 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-12117:

Attachment: HDFS-12117.006.patch

Thanks for the suggestions [~jojochuang]! Attaching a new patch with the 
client-side tests added as suggested.

While implementing these, I noticed that *LocalFileSystem* currently has no 
support for the snapshot-related methods. I'm not sure whether that's 
intentional, or whether we should add support for it as well.

*TestHttpFSFileSystemLocalFileSystem* would fail when running the 
snapshot-related methods defined in *BaseTestHttpFSWith*, so for now these 
tests only really run when the *FileSystem* implementation under test is not 
*LocalFileSystem*.

For example, rename snapshot test:

{noformat}
  private void testRenameSnapshot() throws Exception {
    if (!this.isLocalFS()) {
      Path snapshottablePath = new Path("/tmp/tmp-snap-test");
      createSnapshotTestsPreconditions(snapshottablePath);
      // Now get the FileSystem instance that's being tested
      FileSystem fs = this.getHttpFSFileSystem();
      fs.createSnapshot(snapshottablePath, "snap-to-rename");
      fs.renameSnapshot(snapshottablePath, "snap-to-rename",
          "snap-new-name");
      Path snapshotsDir = new Path("/tmp/tmp-snap-test/.snapshot");
      FileStatus[] snapshotItems = fs.listStatus(snapshotsDir);
      assertTrue("Should have exactly one snapshot.",
          snapshotItems.length == 1);
      String resultingSnapName = snapshotItems[0].getPath().getName();
      assertTrue("Snapshot name is not same as passed name.",
          "snap-new-name".equals(resultingSnapName));
      cleanSnapshotTests(snapshottablePath, resultingSnapName);
    }
  }
{noformat}

> HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST 
> Interface
> ---
>
> Key: HDFS-12117
> URL: https://issues.apache.org/jira/browse/HDFS-12117
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
> Attachments: HDFS-12117.003.patch, HDFS-12117.004.patch, 
> HDFS-12117.005.patch, HDFS-12117.006.patch, HDFS-12117.patch.01, 
> HDFS-12117.patch.02
>
>
> Currently, HttpFS is lacking implementation for SNAPSHOT related methods from 
> WebHDFS REST interface as defined by WebHDFS documentation [WebHDFS 
> documentation|https://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Snapshot_Operations]
> I would like to work on this implementation, following the existing design 
> approach already implemented by other WebHDFS methods on current HttpFS 
> project, so I'll be proposing an initial patch soon for reviews.
>  






[jira] [Updated] (HDFS-12117) HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface

2017-07-31 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-12117:

Status: Patch Available  (was: In Progress)

> HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST 
> Interface
> ---
>
> Key: HDFS-12117
> URL: https://issues.apache.org/jira/browse/HDFS-12117
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
> Attachments: HDFS-12117.003.patch, HDFS-12117.004.patch, 
> HDFS-12117.005.patch, HDFS-12117.006.patch, HDFS-12117.patch.01, 
> HDFS-12117.patch.02
>
>
> Currently, HttpFS is lacking implementation for SNAPSHOT related methods from 
> WebHDFS REST interface as defined by WebHDFS documentation [WebHDFS 
> documentation|https://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Snapshot_Operations]
> I would like to work on this implementation, following the existing design 
> approach already implemented by other WebHDFS methods on current HttpFS 
> project, so I'll be proposing an initial patch soon for reviews.
>  






[jira] [Commented] (HDFS-12134) libhdfs++: Add a synchronization interface for the GSSAPI

2017-07-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107875#comment-16107875
 ] 

Hadoop QA commented on HDFS-12134:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  9m 
42s{color} | {color:red} Docker failed to build yetus/hadoop:3117e2a. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12134 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879689/HDFS-12134.HDFS-8707.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20504/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Add a synchronization interface for the GSSAPI
> -
>
> Key: HDFS-12134
> URL: https://issues.apache.org/jira/browse/HDFS-12134
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-12134.HDFS-8707.000.patch, 
> HDFS-12134.HDFS-8707.001.patch, HDFS-12134.HDFS-8707.002.patch
>
>
> Bits of the GSSAPI that Cyrus SASL uses aren't thread safe.  There needs to 
> be a way for a client application to share a lock with this library in order 
> to prevent race conditions.  It can be done using event callbacks through the 
> C API, but we can provide something more robust (RAII) in the C++ API.
> Proposed: a client-supplied lock, pretty much the C++17 Lockable concept; use 
> a default if one isn't provided.  This would be scoped at the process level, 
> since it's unlikely that multiple instances of libgssapi will be loaded 
> unless someone puts some effort in with dlopen/dlsym.
> {code}
> class LockProvider
> {
>  public:
>   virtual ~LockProvider() {}
>   // allow the client application to deny access to the lock
>   virtual bool try_lock() = 0;
>   virtual void unlock() = 0;
> };
> {code}
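The interface above is proposed for the C++ API. Purely as an illustration of the same lockable-plus-scoped-guard idea, here is a minimal Java sketch; the names {{GssLockProvider}} and {{ScopedGssLock}} are hypothetical, and try-with-resources stands in for RAII:

```java
// Hypothetical sketch of the client-supplied lock idea: the library
// would acquire via tryLock() before any non-thread-safe GSSAPI call
// and release on scope exit (try-with-resources, analogous to RAII).
interface GssLockProvider {
    // allow the client application to deny access to the lock
    boolean tryLock();
    void unlock();
}

class ScopedGssLock implements AutoCloseable {
    private final GssLockProvider provider;

    ScopedGssLock(GssLockProvider provider) {
        this.provider = provider;
        if (!provider.tryLock()) {
            throw new IllegalStateException("client denied access to the lock");
        }
    }

    @Override
    public void close() {
        provider.unlock();
    }
}
```

A caller would then wrap each sensitive call site in `try (ScopedGssLock ignored = new ScopedGssLock(provider)) { ... }` so the lock is released even on exceptions.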






[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-07-31 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107874#comment-16107874
 ] 

Lei (Eddy) Xu commented on HDFS-10285:
--

Hey, [~umamaheswararao]. Thanks for the great work!

I have a few nits and questions: 

* Non-recursive setXattr. Please kindly reconsider using a recursive async 
call. If the use cases are mostly targeted at downstream projects like HBase 
etc., the chance of those projects mistakenly calling 
{{satisfyStoragePolicy}} on the wrong directory (i.e., "/") is rare, but the 
non-recursive form makes it difficult for them to manage large / deep 
namespaces; i.e., HBase needs to iterate the namespace itself and issue the 
same number of "setXattr" calls anyway (because the # of files to move is the 
same). It's similar to "rm -rf /": while it is bad that "rm" allows it, IMO 
that should not prevent users / applications from using "rm -rf" in a 
sensible way. 

* The newly added {{public void removeXattr(long id, String xattrName)}}. 
While its name seems very generic, it seems to accept only the SPS xattr as a 
legit parameter. Should we demote it from a public API in {{Namesystem}}?

* Would it make sense to have an admin command to unset SPS on a path, for a 
user to undo their own mistake? 

* {{FSNamesystem#satisfyStoragePolicy}}. Is this only setting the xattr? Can 
we do the xattr-setting part without SPS running? I was thinking of scenarios 
where some downstream projects (i.e., HBase) start to routinely use this API, 
while for some reason (i.e., the mover is running, or cluster 
misconfiguration) SPS is not running. Should we still allow those projects to 
successfully call {{satisfyStoragePolicy()}}, and allow SPS to catch up later 
on?

* And since this call essentially triggers a large async background task, 
should we put some logs here? Similarly, it'd be nice to have related JMX 
stats and some indications in the web UI, to make it easier to integrate with 
other systems.




> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, 
> HDFS-SPS-TestReport-20170708.pdf, 
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf, 
> Storage-Policy-Satisfier-in-HDFS-May10.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policies. 
> These policies can be set on a directory/file to specify the user's 
> preference for where the physical blocks should be stored. When a user sets 
> the storage policy before writing data, the blocks can take advantage of the 
> storage policy preference and the physical blocks are stored accordingly. 
> If a user sets the storage policy after writing and completing the file, the 
> blocks will already have been written with the default storage policy 
> (nothing but DISK). The user then has to run the ‘Mover tool’ explicitly, 
> specifying all such file names as a list. In some distributed system 
> scenarios (ex: HBase) it would be difficult to collect all the files and run 
> the tool, as different nodes can write files separately and files can have 
> different paths.
> Another scenario: when a user renames a file from a directory with one 
> effective storage policy (inherited from the parent directory) to a 
> directory with a different storage policy, the inherited storage policy is 
> not copied from the source, so the file takes its effective policy from the 
> destination parent. This rename operation is just a metadata change in the 
> Namenode; the physical blocks still remain with the source storage policy.
> So, tracking all such business-logic-based file names from distributed 
> nodes (ex: region servers) and running the Mover tool could be difficult for 
> admins. The proposal here is to provide an API in the Namenode itself to 
> trigger storage policy satisfaction. A daemon thread inside the Namenode 
> should track such calls and send movement commands to the DNs. 
> Will post the detailed design thoughts document soon. 






[jira] [Commented] (HDFS-12134) libhdfs++: Add a synchronization interface for the GSSAPI

2017-07-31 Thread Deepak Majeti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107854#comment-16107854
 ] 

Deepak Majeti commented on HDFS-12134:
--

+1 LGTM

> libhdfs++: Add a synchronization interface for the GSSAPI
> -
>
> Key: HDFS-12134
> URL: https://issues.apache.org/jira/browse/HDFS-12134
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-12134.HDFS-8707.000.patch, 
> HDFS-12134.HDFS-8707.001.patch, HDFS-12134.HDFS-8707.002.patch
>
>
> Bits of the GSSAPI that Cyrus SASL uses aren't thread safe.  There needs to 
> be a way for a client application to share a lock with this library in order 
> to prevent race conditions.  It can be done using event callbacks through the 
> C API, but we can provide something more robust (RAII) in the C++ API.
> Proposed: a client-supplied lock, pretty much the C++17 Lockable concept; use 
> a default if one isn't provided.  This would be scoped at the process level, 
> since it's unlikely that multiple instances of libgssapi will be loaded 
> unless someone puts some effort in with dlopen/dlsym.
> {code}
> class LockProvider
> {
>  public:
>   virtual ~LockProvider() {}
>   // allow the client application to deny access to the lock
>   virtual bool try_lock() = 0;
>   virtual void unlock() = 0;
> };
> {code}






[jira] [Commented] (HDFS-12197) Do the HDFS dist stitching in hadoop-hdfs-project

2017-07-31 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107840#comment-16107840
 ] 

Elek, Marton commented on HDFS-12197:
-

It's not just about running the pseudo-distributed cluster from the dev tree. 
It's also impossible to run the Namenode from the IDE while the selected 
dependencies have {{provided}} scope. It would be great to fix this as well.

> Do the HDFS dist stitching in hadoop-hdfs-project
> -
>
> Key: HDFS-12197
> URL: https://issues.apache.org/jira/browse/HDFS-12197
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>
> Problem reported by [~lars_francke] on HDFS-11596. We can no longer easily 
> start a namenode and datanode from the source directory without doing a full 
> build per the wiki instructions: 
> https://wiki.apache.org/hadoop/HowToSetupYourDevelopmentEnvironment
> This is because we don't have a top-level dist for HDFS. $HADOOP_YARN_HOME 
> for instance can be set to {{hadoop-yarn-project/target}}, but 
> $HADOOP_HDFS_HOME goes into the submodule: 
> {{hadoop-hdfs-project/hadoop-hdfs/target}}. This means it's missing the files 
> from the sibling hadoop-hdfs-client module (which is required by the 
> namenode), but also other siblings like nfs and httpfs.
> So, I think the right fix is doing the dist stitching at the 
> {{hadoop-hdfs-project}} level where we can aggregate all the child modules, 
> and pointing $HADOOP_HDFS_HOME at this directory.






[jira] [Commented] (HDFS-12202) Provide new set of FileSystem API to bypass external attribute provider

2017-07-31 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107838#comment-16107838
 ] 

Chris Douglas commented on HDFS-12202:
--

This is a pretty narrow use case. As [~asuresh] points out, most would be 
covered by a server-side policy that either a) applies, but does not return 
augmented plugin state for particular users or b) filters out properties 
applied by the plugin. Implementing (b) only requires a modification to distcp. 
If Sentry/Ranger/etc. don't put their attributes in a namespace-like schema, 
making (b) difficult to implement, then a server-side policy is still 
preferable.

Extending {{FileSystem}} is a very hard sell, since it would also add this flag 
to the protocol. Not only would this approach not work for old clusters, it 
would silently return the unfiltered results. Moreover, every FileSystem other 
than HDFS wouldn't support this. Are there other use cases?
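
Option (b), filtering provider-applied attributes on the copying side, could look roughly like the following sketch. It assumes the provider's attributes share a recognizable namespace prefix — the "sentry." prefix here is invented for illustration; as noted above, if providers don't namespace their attributes, this approach breaks down:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: drop xattrs under a provider-owned namespace prefix before
// copying, keeping raw HDFS attributes intact. The prefix is an
// assumption; real providers may not namespace their attributes.
final class XattrFilter {
    static Map<String, byte[]> withoutProvider(Map<String, byte[]> xattrs,
                                               String providerPrefix) {
        Map<String, byte[]> kept = new HashMap<>();
        for (Map.Entry<String, byte[]> e : xattrs.entrySet()) {
            if (!e.getKey().startsWith(providerPrefix)) {
                kept.put(e.getKey(), e.getValue());
            }
        }
        return kept;
    }
}
```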

> Provide new set of FileSystem API to bypass external attribute provider
> ---
>
> Key: HDFS-12202
> URL: https://issues.apache.org/jira/browse/HDFS-12202
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>
> HDFS client uses 
> {code}
>   /**
>* Return a file status object that represents the path.
>* @param f The path we want information from
>* @return a FileStatus object
>* @throws FileNotFoundException when the path does not exist
>* @throws IOException see specific implementation
>*/
>   public abstract FileStatus getFileStatus(Path f) throws IOException;
>   /**
>* List the statuses of the files/directories in the given path if the path 
> is
>* a directory.
>* 
>* Does not guarantee to return the List of files/directories status in a
>* sorted order.
>* 
>* Will not return null. Expect IOException upon access error.
>* @param f given path
>* @return the statuses of the files/directories in the given path
>* @throws FileNotFoundException when the path does not exist
>* @throws IOException see specific implementation
>*/
>   public abstract FileStatus[] listStatus(Path f) throws 
> FileNotFoundException,
>  IOException;
> {code}
> to get FileStatus of files.
> When an external attribute provider (INodeAttributeProvider) is enabled for 
> a cluster, the external attribute provider is consulted to get back some 
> relevant info (including ACL, group, etc.), which is returned in the 
> FileStatus.
> There is a problem here: when we use distcp to copy files from srcCluster to 
> tgtCluster, if srcCluster has an external attribute provider enabled, the 
> data we copy would contain data from the attribute provider, which we may 
> not want.
> Create this jira to add a new set of interfaces for distcp to use, so that 
> distcp can copy HDFS data only and bypass the external attribute provider 
> data.
> The new set of APIs would look like:
> {code}
>  /**
>* Return a file status object that represents the path.
>* @param f The path we want information from
>* @param bypassExtAttrProvider if true, bypass external attr provider
>*when it's in use.
>* @return a FileStatus object
>* @throws FileNotFoundException when the path does not exist
>* @throws IOException see specific implementation
>*/
>   public FileStatus getFileStatus(Path f,
>   final boolean bypassExtAttrProvider) throws IOException;
>   /**
>* List the statuses of the files/directories in the given path if the path 
> is
>* a directory.
>* 
>* Does not guarantee to return the List of files/directories status in a
>* sorted order.
>* 
>* Will not return null. Expect IOException upon access error.
>* @param f given path
>* @param bypassExtAttrProvider if true, bypass external attr provider
>*when it's in use.
>* @return the statuses of the files/directories in the given path
>* @throws FileNotFoundException when the path does not exist
>* @throws IOException see specific implementation
>*/
>   public FileStatus[] listStatus(Path f,
>   final boolean bypassExtAttrProvider) throws FileNotFoundException,
>   IOException;
> {code}
> So when bypassExtAttrProvider is true, the external attribute provider will 
> be bypassed.
> Thanks.






[jira] [Commented] (HDFS-12222) Add EC information to BlockLocation

2017-07-31 Thread Ajay Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107835#comment-16107835
 ] 

Ajay Yadav commented on HDFS-1:
---

Hi [~andrew.wang], for usages like FileInputFormat, does it make more sense to 
expose parity blocks and data blocks via different functions in 
LocatedFileStatus? BlockLocation seems to be too low an abstraction for 
FileInputFormat. 

{code}
if (file instanceof LocatedFileStatus) {
  blkLocations = ((LocatedFileStatus) file).getDataBlockLocations();
} else {
  blkLocations = fs.getFileDataBlockLocations(file, 0, length);
}
{code}

> Add EC information to BlockLocation
> ---
>
> Key: HDFS-1
> URL: https://issues.apache.org/jira/browse/HDFS-1
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>  Labels: hdfs-ec-3.0-nice-to-have
>
> HDFS applications query block location information to compute splits. One 
> example of this is FileInputFormat:
> https://github.com/apache/hadoop/blob/d4015f8628dd973c7433639451a9acc3e741d2a2/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java#L346
> You see bits of code like this that calculate offsets as follows:
> {noformat}
> long bytesInThisBlock = blkLocations[startIndex].getOffset() + 
>   blkLocations[startIndex].getLength() - offset;
> {noformat}
> EC confuses this since the block locations include parity block locations as 
> well, which are not part of the logical file length. This messes up the 
> offset calculation and thus topology/caching information too.
> Applications can figure out what's a parity block by reading the EC policy 
> and then parsing the schema, but it'd be a lot better if we exposed this more 
> generically in BlockLocation instead.
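
To make the failure mode concrete: if split code sums block location lengths to derive the logical length, including parity locations overshoots it. A self-contained sketch — the {{Loc}} class is a stand-in for BlockLocation, and the block sizes are made up:

```java
import java.util.List;

// Stand-in for BlockLocation: an offset/length within the file plus a
// flag marking parity blocks, which are NOT part of the logical length.
final class Loc {
    final long offset;
    final long length;
    final boolean parity;

    Loc(long offset, long length, boolean parity) {
        this.offset = offset;
        this.length = length;
        this.parity = parity;
    }
}

final class SplitMath {
    // Naive split-style total: sums every location's length, so parity
    // locations inflate the result past the logical file length.
    static long totalLength(List<Loc> locs) {
        long sum = 0;
        for (Loc b : locs) {
            sum += b.length;
        }
        return sum;
    }
}
```

With two 128-byte data blocks and one 128-byte parity block, the naive sum reports 384 bytes for a 256-byte logical file, so offsets computed from it point past logical EOF; a parity indicator on BlockLocation would let callers filter before doing split math.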






[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-07-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107830#comment-16107830
 ] 

Andrew Wang commented on HDFS-10285:


Hi Uma, thanks for the replies,

bq. One possible way for admins to notice the failures would be via metrics 
reporting. I am also thinking of providing an option in the fsck command to 
check the current pending/in-progress status. I understand this kind of 
status tracking may be useful for SSM-like systems to act upon, say raising 
alarm alerts etc. But an HBase-like system may not take any action in its 
business logic even if movement statuses are failures. Right now, HDFS itself 
will keep retrying until the policy is satisfied.

For the automatic usecase, I agree that metrics are probably the best we can 
do. However, the API exposed here is for interactive usecases (e.g. a user 
calling the shell command and polling until it's done). I think we need to do 
more here to expose the status.

Even for the HBase usecase, it'd still want to know about satisfier status so 
it can bubble it up to an HBase admin.

bq. I agree, allowing recursive execution will make it much easier for users 
when they need it. The only constraint we considered was keeping the 
operation as lightweight as possible.

Can this be addressed by throttling? I think the SPS operations aren't too 
different from decommissioning, since they're both doing block placement and 
tracking data movement, and the decom throttles work okay.

We've also encountered directories with millions of files before, so there's a 
need for throttles anyway. Maybe we can do something generic here that can be 
shared with HDFS-10899.

bq. The pain point with recursive SPS is that it may take a while to finish 
all data movements under that directory. Meanwhile, if a user attempts to 
change some policies again under a subdirectory (say /a/b) and wants to 
satisfy them, we can't block them because a previous large directory 
execution is in progress. Each file will have its own priority. In the 
re-encryption-zone case, blocking may make sense, as the overall operation 
may finish in a reasonable time. But SPS is data movement, so it will 
definitely take a while depending on bandwidth, DN performance, etc. 
Sometimes ops can fail due to network glitches, and we retry those 
operations.

Re-encryption will be faster than SPS, but it's not fast since it needs to talk 
to the KMS. Xiao's benchmarks indicate that a re-encrypt operation will likely 
run for hours. On the upside, the benchmarks also show that scanning through an 
already-re-encrypted zone is quite fast (seconds). I expect it'll be similarly 
fast for SPS if a user submits subdir or duplicate requests. Would be good to 
benchmark this.

I also don't understand the aversion to FIFO execution. It reduces code 
complexity and is easy for admins to reason about. If we want to do something 
more fancy, there should be a broader question around the API for resource 
management. Is it fair share, priorities, limits, some combination? What are 
these applied to (users, files, directories, queues with ACLs)?

bq. Even if the older C-DN comes back, on re-registration we send a 
dropSPSWork request to the DNs, which will prevent two C-DNs from running.

What's the total SPS work timeout in minutes? The node is declared dead after 
10.5 minutes, but if the network partition is shorter than that, it won't need 
to re-register. 5 mins also seems kind of long for an IN_PROGRESS update, since 
it should take a few seconds for each block movement.

Also, we can't depend on re-registration with NN for fencing the old C-DN, 
since there could be a network partition that is just between the NN and old 
C-DN, and the old C-DN can still talk to other DNs. I don't know how this 
affects correctness, but having multiple C-DNs makes debugging harder.

bq. Actually, the NN will not track movement at the block level; we are 
tracking at the file level. The NN tracks only the inode id until it is 
satisfied fully. Also, with the above optimization (avoiding keeping xattrs 
for each file), the overhead should be fairly small, as overlapping block 
scanning will happen sequentially.

Even assuming we do the xattr optimization, I believe the NN still has a queue 
of pending work items so they can be retried if the C-DNs fail. How many items 
might be in this queue, for a large SPS request? Is it throttled?

At a higher-level, if we implement all the throttles to reduce NN overhead, is 
there still a benefit to offloading work to DNs? The SPS workload isn't too 
different from decommissioning, which we manage on the NN okay.
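
The FIFO-plus-throttle idea above can be sketched in a few lines; everything here (the class name, the per-round batch-size knob) is hypothetical and not from any patch:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical sketch: pending satisfier items are dispatched in strict
// FIFO order, at most maxPerRound per scheduling round, in the same
// spirit as decommissioning throttles.
final class ThrottledFifoQueue<T> {
    private final Queue<T> pending = new ArrayDeque<>();
    private final int maxPerRound;

    ThrottledFifoQueue(int maxPerRound) {
        this.maxPerRound = maxPerRound;
    }

    synchronized void submit(T item) {
        pending.add(item);
    }

    // Drain up to maxPerRound items in submission order; items beyond
    // the cap stay queued for later rounds, bounding per-round NN work.
    synchronized List<T> nextRound() {
        List<T> batch = new ArrayList<>();
        while (batch.size() < maxPerRound && !pending.isEmpty()) {
            batch.add(pending.poll());
        }
        return batch;
    }
}
```

The point of the sketch is that FIFO plus a per-round cap is simple to reason about; anything fancier (fair share, priorities, per-user limits) needs the broader resource-management API discussion raised above.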

> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>
