[jira] [Commented] (HDFS-10429) DataStreamer interrupted warning always appears when using the CLI to upload a file

2016-05-23 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297697#comment-15297697 ]

Hadoop QA commented on HDFS-10429:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 52s {color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 23s {color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12804790/HDFS-10429.1.patch |
| JIRA Issue | HDFS-10429 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux bdb61420f767 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b4078bd |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15535/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15535/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> DataStreamer interrupted warning always appears when using the CLI to upload a file
> ---
>
> Key: HDFS-10429
> URL: https://issues.apache.org/jira/browse/HDFS-10429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
>Priority: Minor
> Attachments: HDFS-10429.1.patch
>
>
> Every time I use 'hdfs dfs -put' to upload 

[jira] [Updated] (HDFS-9365) Balancer does not work with the HDFS-6376 HA setup

2016-05-23 Thread Jing Zhao (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jing Zhao updated HDFS-9365:

Hadoop Flags: Reviewed

> Balancer does not work with the HDFS-6376 HA setup
> -
>
> Key: HDFS-9365
> URL: https://issues.apache.org/jira/browse/HDFS-9365
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h9365_20151119.patch, h9365_20151120.patch, 
> h9365_20160523.patch
>
>
> HDFS-6376 added support for DistCp between two HA clusters. After the change,
> Balancer will use all the NNs from both the local and the remote clusters.






[jira] [Commented] (HDFS-9365) Balancer does not work with the HDFS-6376 HA setup

2016-05-23 Thread Jing Zhao (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297676#comment-15297676 ]

Jing Zhao commented on HDFS-9365:
-

bq. testGetNNServiceRpcAddressesForNsIds is already changed in h9365_20151120.patch to test 2 name services. Is it the same test case you suggested?

Yes! Sorry I missed this. +1 on the latest patch.

> Balancer does not work with the HDFS-6376 HA setup
> -
>
> Key: HDFS-9365
> URL: https://issues.apache.org/jira/browse/HDFS-9365
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h9365_20151119.patch, h9365_20151120.patch, 
> h9365_20160523.patch
>
>
> HDFS-6376 added support for DistCp between two HA clusters. After the change,
> Balancer will use all the NNs from both the local and the remote clusters.






[jira] [Commented] (HDFS-10429) DataStreamer interrupted warning always appears when using the CLI to upload a file

2016-05-23 Thread Jing Zhao (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297671#comment-15297671 ]

Jing Zhao commented on HDFS-10429:
--

Thanks for working on this, [~aplusplus]!

I can also reproduce the issue in my environment. When {{DataStreamer#close}} 
is called, {{streamerClosed}} is first set to true, which may trigger the 
DataStreamer thread to end its loop and invoke {{closeInternal}}. Then, while 
the DataStreamer thread is still waiting for the ResponseProcessor to die 
({{response.join()}}), the interruption from {{DataStreamer#close(true)}} can 
interrupt the DataStreamer thread and generate the warning message.

For the fix, I think it's OK to change the log level to debug. At the same 
time, though, the current closing logic for DFSOutputStream/DataStreamer does 
not seem clean. In the current implementation, {{DataStreamer#closeInternal}} 
is expected to be the path that closes the ResponseProcessor correctly. So a 
simple fix could be to still call {{closeThreads(false)}} inside the "try" 
section of {{closeImpl}}, and only when the normal close has not been 
triggered or has failed, call {{closeThreads(true)}} again in the "finally" 
section, as sketched below.
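
In code form, the suggestion might look roughly like the following. This is only a sketch of the intended control flow, not the actual patch: the {{normalCloseDone}} flag and the simplified helper calls are assumptions for illustration.

{code:java}
// Sketch only: closeImpl() split so the forced close runs just as a fallback.
protected synchronized void closeImpl() throws IOException {
  boolean normalCloseDone = false;
  try {
    flushBuffer();        // flush any data still buffered in the stream
    // ... enqueue the last packet and wait for acks ...
    closeThreads(false);  // normal close: let streamer/responder finish cleanly
    normalCloseDone = true;
    completeFile();
  } finally {
    if (!normalCloseDone) {
      // The normal close never ran or failed part-way through, so force
      // the threads down by interrupting them, as the code does today.
      closeThreads(true);
    }
  }
}
{code}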

> DataStreamer interrupted warning always appears when using the CLI to upload a file
> ---
>
> Key: HDFS-10429
> URL: https://issues.apache.org/jira/browse/HDFS-10429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
>Priority: Minor
> Attachments: HDFS-10429.1.patch
>
>
> Every time I use 'hdfs dfs -put' to upload a file, this warning is printed:
> {code:java}
> 16/05/18 20:57:56 WARN hdfs.DataStreamer: Caught exception
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Thread.join(Thread.java:1245)
>   at java.lang.Thread.join(Thread.java:1319)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:871)
>   at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:519)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:696)
> {code}
> The reason is this: originally, DataStreamer::closeResponder always printed a 
> warning about InterruptedException; since HDFS-9812, DFSOutputStream::closeImpl 
> always forces the threads to close, which causes an InterruptedException.
> A simple fix is to log at debug level instead of warning level.
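
For reference, the proposed change essentially demotes one log call in {{DataStreamer#closeResponder}}. A minimal sketch, assuming the method keeps its current try/catch shape:

{code:java}
private void closeResponder() {
  if (response != null) {
    try {
      response.close();
      response.join();
    } catch (InterruptedException e) {
      // Demoted from LOG.warn: the interrupt is expected when
      // DFSOutputStream#closeImpl force-closes the threads (HDFS-9812).
      LOG.debug("Caught exception", e);
    } finally {
      response = null;
    }
  }
}
{code}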






[jira] [Updated] (HDFS-8057) Move BlockReader implementation to the client implementation package

2016-05-23 Thread Takanobu Asanuma (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takanobu Asanuma updated HDFS-8057:
---
Attachment: HDFS-8057.branch-2.002.patch

> Move BlockReader implementation to the client implementation package
> 
>
> Key: HDFS-8057
> URL: https://issues.apache.org/jira/browse/HDFS-8057
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
> Attachments: HDFS-8057.1.patch, HDFS-8057.2.patch, HDFS-8057.3.patch, 
> HDFS-8057.4.patch, HDFS-8057.branch-2.001.patch, 
> HDFS-8057.branch-2.002.patch, HDFS-8057.branch-2.5.patch
>
>
> BlockReaderLocal, RemoteBlockReader, etc. should be moved to 
> org.apache.hadoop.hdfs.client.impl. We may as well rename RemoteBlockReader 
> to BlockReaderRemote.






[jira] [Updated] (HDFS-8057) Move BlockReader implementation to the client implementation package

2016-05-23 Thread Takanobu Asanuma (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takanobu Asanuma updated HDFS-8057:
---
Attachment: (was: HDFS-8057.branch-2.002.patch)

> Move BlockReader implementation to the client implementation package
> 
>
> Key: HDFS-8057
> URL: https://issues.apache.org/jira/browse/HDFS-8057
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
> Attachments: HDFS-8057.1.patch, HDFS-8057.2.patch, HDFS-8057.3.patch, 
> HDFS-8057.4.patch, HDFS-8057.branch-2.001.patch, HDFS-8057.branch-2.5.patch
>
>
> BlockReaderLocal, RemoteBlockReader, etc. should be moved to 
> org.apache.hadoop.hdfs.client.impl. We may as well rename RemoteBlockReader 
> to BlockReaderRemote.






[jira] [Commented] (HDFS-9365) Balancer does not work with the HDFS-6376 HA setup

2016-05-23 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297657#comment-15297657 ]

Hadoop QA commented on HDFS-9365:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 7 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 2 new + 513 unchanged - 1 fixed = 515 total (was 514) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 59m 28s {color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 33s {color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805804/h9365_20160523.patch |
| JIRA Issue | HDFS-9365 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux e9f537d2211f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b4078bd |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/15533/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15533/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15533/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Balancer does not work with the HDFS-6376 HA setup
> -
>
> Key: HDFS-9365
> URL: https://issues.apache.org/jira/browse/HDFS-9365
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: 

[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-05-23 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297592#comment-15297592 ]

Hadoop QA commented on HDFS-10301:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 27s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 25 new + 371 unchanged - 7 fixed = 396 total (was 378) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 26s {color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 11s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805798/HDFS-10301.004.patch |
| JIRA Issue | HDFS-10301 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux d79544c47ce6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b4078bd |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/15531/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15531/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-HDFS-Build/15531/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15531/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15531/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> BlockReport retransmissions may 

[jira] [Updated] (HDFS-9365) Balancer does not work with the HDFS-6376 HA setup

2016-05-23 Thread Tsz Wo Nicholas Sze (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo Nicholas Sze updated HDFS-9365:
--
Attachment: h9365_20160523.patch

h9365_20160523.patch: sync'ed with trunk.

> Balancer does not work with the HDFS-6376 HA setup
> -
>
> Key: HDFS-9365
> URL: https://issues.apache.org/jira/browse/HDFS-9365
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h9365_20151119.patch, h9365_20151120.patch, 
> h9365_20160523.patch
>
>
> HDFS-6376 added support for DistCp between two HA clusters. After the change,
> Balancer will use all the NNs from both the local and the remote clusters.






[jira] [Commented] (HDFS-9365) Balancer does not work with the HDFS-6376 HA setup

2016-05-23 Thread Tsz Wo Nicholas Sze (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297577#comment-15297577 ]

Tsz Wo Nicholas Sze commented on HDFS-9365:
---

> ... Also maybe we can add a new unit test in TestDFSUtil for the scenario where more than one name service is passed to getNameServiceUris.

testGetNNServiceRpcAddressesForNsIds is already changed in h9365_20151120.patch to test 2 name services. Is it the same test case you suggested?

> Balancer does not work with the HDFS-6376 HA setup
> -
>
> Key: HDFS-9365
> URL: https://issues.apache.org/jira/browse/HDFS-9365
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h9365_20151119.patch, h9365_20151120.patch
>
>
> HDFS-6376 added support for DistCp between two HA clusters. After the change,
> Balancer will use all the NNs from both the local and the remote clusters.






[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-05-23 Thread Vinitha Reddy Gankidi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297563#comment-15297563
 ] 

Vinitha Reddy Gankidi commented on HDFS-10301:
--

I uploaded the patch HDFS-10301.004.patch. I have implemented the idea that 
Konstantin suggested, i.e., DNs explicitly report the storages that they have. 
This eliminates the NN guessing which storage is the last one in the block 
report RPC. In the case of a FBR, NameNodeRPCServer can retrieve the list of 
storages from the storage block report array. In the case that block reports 
are split, DNs send an additional StorageReportOnly RPC after sending the 
block reports for each individual storage. This StorageReportOnly RPC is sent 
as a FBR and contains all the storages that the DN has, with -1 as the number 
of blocks. A new enum STORAGE_REPORT_ONLY is introduced in BlockListAsLongs 
for this purpose.

Zombie storage removal is triggered from the NameNodeRPCServer instead of the 
BlockManager, since the RPC server now has all the information required to 
construct the list of storages that the DN is reporting. After processing the 
block reports as usual, zombie storages are removed by comparing the list of 
storages in the block report with the list of storages that the NN is aware 
of for that DN, roughly as sketched below.
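
A minimal sketch of that final comparison (illustrative only; {{reportedStorageIds}} and {{removeZombieStorage}} are assumed names, not necessarily those used in the patch):

{code:java}
// Runs after all storage block reports (or the StorageReportOnly RPC) arrive.
void removeZombieStorages(DatanodeDescriptor dn, Set<String> reportedStorageIds) {
  for (DatanodeStorageInfo storage : dn.getStorageInfos()) {
    if (!reportedStorageIds.contains(storage.getStorageID())) {
      // The DN no longer reports this storage, so it is a zombie:
      // drop it together with the replicas it claims to hold.
      removeZombieStorage(dn, storage);
    }
  }
}
{code}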



> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.01.patch, HDFS-10301.sample.patch, 
> zombieStorageLogs.rtf
>
>
> When the NameNode is busy a DataNode can time out sending a block report. Then 
> it sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing storages from different 
> reports. This screws up the blockReportId field, which makes the NameNode 
> think that some storages are zombies. Replicas from zombie storages are 
> immediately removed, causing missing blocks.






[jira] [Updated] (HDFS-8057) Move BlockReader implementation to the client implementation package

2016-05-23 Thread Takanobu Asanuma (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takanobu Asanuma updated HDFS-8057:
---
Status: Patch Available  (was: Open)

> Move BlockReader implementation to the client implementation package
> 
>
> Key: HDFS-8057
> URL: https://issues.apache.org/jira/browse/HDFS-8057
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
> Attachments: HDFS-8057.1.patch, HDFS-8057.2.patch, HDFS-8057.3.patch, 
> HDFS-8057.4.patch, HDFS-8057.branch-2.001.patch, 
> HDFS-8057.branch-2.002.patch, HDFS-8057.branch-2.5.patch
>
>
> BlockReaderLocal, RemoteBlockReader, etc. should be moved to 
> org.apache.hadoop.hdfs.client.impl. We may as well rename RemoteBlockReader 
> to BlockReaderRemote.






[jira] [Updated] (HDFS-8057) Move BlockReader implementation to the client implementation package

2016-05-23 Thread Takanobu Asanuma (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takanobu Asanuma updated HDFS-8057:
---
Status: Open  (was: Patch Available)

> Move BlockReader implementation to the client implementation package
> 
>
> Key: HDFS-8057
> URL: https://issues.apache.org/jira/browse/HDFS-8057
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
> Attachments: HDFS-8057.1.patch, HDFS-8057.2.patch, HDFS-8057.3.patch, 
> HDFS-8057.4.patch, HDFS-8057.branch-2.001.patch, 
> HDFS-8057.branch-2.002.patch, HDFS-8057.branch-2.5.patch
>
>
> BlockReaderLocal, RemoteBlockReader, etc. should be moved to 
> org.apache.hadoop.hdfs.client.impl. We may as well rename RemoteBlockReader 
> to BlockReaderRemote.






[jira] [Updated] (HDFS-8057) Move BlockReader implementation to the client implementation package

2016-05-23 Thread Takanobu Asanuma (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takanobu Asanuma updated HDFS-8057:
---
Attachment: HDFS-8057.branch-2.002.patch

Oh, I didn't realize it. Thank you for your review, Nicholas! I updated the 
patch.

> Move BlockReader implementation to the client implementation package
> 
>
> Key: HDFS-8057
> URL: https://issues.apache.org/jira/browse/HDFS-8057
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
> Attachments: HDFS-8057.1.patch, HDFS-8057.2.patch, HDFS-8057.3.patch, 
> HDFS-8057.4.patch, HDFS-8057.branch-2.001.patch, 
> HDFS-8057.branch-2.002.patch, HDFS-8057.branch-2.5.patch
>
>
> BlockReaderLocal, RemoteBlockReader, etc. should be moved to 
> org.apache.hadoop.hdfs.client.impl. We may as well rename RemoteBlockReader 
> to BlockReaderRemote.






[jira] [Commented] (HDFS-8057) Move BlockReader implementation to the client implementation package

2016-05-23 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297533#comment-15297533 ]

Hadoop QA commented on HDFS-8057:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 3m 41s {color} | {color:red} Docker failed to build yetus/hadoop:babe025. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805796/HDFS-8057.branch-2.002.patch |
| JIRA Issue | HDFS-8057 |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15532/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Move BlockReader implementation to the client implementation package
> 
>
> Key: HDFS-8057
> URL: https://issues.apache.org/jira/browse/HDFS-8057
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
> Attachments: HDFS-8057.1.patch, HDFS-8057.2.patch, HDFS-8057.3.patch, 
> HDFS-8057.4.patch, HDFS-8057.branch-2.001.patch, 
> HDFS-8057.branch-2.002.patch, HDFS-8057.branch-2.5.patch
>
>
> BlockReaderLocal, RemoteBlockReader, etc. should be moved to 
> org.apache.hadoop.hdfs.client.impl. We may as well rename RemoteBlockReader 
> to BlockReaderRemote.






[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-05-23 Thread Vinitha Reddy Gankidi (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297526#comment-15297526 ]

Vinitha Reddy Gankidi commented on HDFS-10301:
--

Assigning the ticket to myself so that I can upload a patch. Please review.

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Colin Patrick McCabe
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.01.patch, HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy a DataNode can time out sending a block report. Then 
> it sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing storages from different 
> reports. This screws up the blockReportId field, which makes the NameNode 
> think that some storages are zombies. Replicas from zombie storages are 
> immediately removed, causing missing blocks.






[jira] [Updated] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-05-23 Thread Vinitha Reddy Gankidi (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinitha Reddy Gankidi updated HDFS-10301:
-
Attachment: HDFS-10301.004.patch

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.01.patch, HDFS-10301.sample.patch, 
> zombieStorageLogs.rtf
>
>
> When the NameNode is busy a DataNode can time out sending a block report. Then 
> it sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing storages from different 
> reports. This screws up the blockReportId field, which makes the NameNode 
> think that some storages are zombies. Replicas from zombie storages are 
> immediately removed, causing missing blocks.






[jira] [Assigned] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-05-23 Thread Vinitha Reddy Gankidi (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinitha Reddy Gankidi reassigned HDFS-10301:


Assignee: Vinitha Reddy Gankidi  (was: Colin Patrick McCabe)

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.01.patch, HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy a DataNode can time out sending a block report. Then 
> it sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing storages from different 
> reports. This screws up the blockReportId field, which makes the NameNode 
> think that some storages are zombies. Replicas from zombie storages are 
> immediately removed, causing missing blocks.






[jira] [Commented] (HDFS-10448) CacheManager#checkLimit always assumes a replication factor of 1

2016-05-23 Thread Yiqun Lin (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297506#comment-15297506 ]

Yiqun Lin commented on HDFS-10448:
--

So, [~cmccabe], what do you think of the patch for this?

> CacheManager#checkLimit always assumes a replication factor of 1
> -
>
> Key: HDFS-10448
> URL: https://issues.apache.org/jira/browse/HDFS-10448
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10448.001.patch
>
>
> The logic in {{CacheManager#checkLimit}} is not correct. The method does three 
> things:
> First, it computes the bytes needed for the specific path.
> {code}
> CacheDirectiveStats stats = computeNeeded(path, replication);
> {code}
> But the param {{replication}} is not used there, so the bytesNeeded is just a 
> single replica's value.
> {code}
> return new CacheDirectiveStats.Builder()
> .setBytesNeeded(requestedBytes)
> .setFilesCached(requestedFiles)
> .build();
> {code}
> Second, the value should therefore be multiplied by the replication before 
> being compared against the limit, because {{computeNeeded}} did not apply the 
> replication.
> {code}
> pool.getBytesNeeded() + (stats.getBytesNeeded() * replication) > 
> pool.getLimit()
> {code}
> Third, if the size exceeds the limit, a warning is printed. The value is 
> divided by replication here, even though {{stats.getBytesNeeded()}} is already 
> a single replica's value.
> {code}
>   throw new InvalidRequestException("Caching path " + path + " of size "
>   + stats.getBytesNeeded() / replication + " bytes at replication "
>   + replication + " would exceed pool " + pool.getPoolName()
>   + "'s remaining capacity of "
>   + (pool.getLimit() - pool.getBytesNeeded()) + " bytes.");
> {code}
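
Putting the three observations together, the corrected accounting would look roughly like this (a sketch only, assuming {{computeNeeded}} keeps returning single-replica values as described above):

{code:java}
CacheDirectiveStats stats = computeNeeded(path, replication);
// Scale the single-replica value up by the replication factor once.
long bytesNeeded = stats.getBytesNeeded() * replication;
if (pool.getBytesNeeded() + bytesNeeded > pool.getLimit()) {
  // Report the already-scaled value; no division by replication here.
  throw new InvalidRequestException("Caching path " + path + " of size "
      + bytesNeeded + " bytes at replication " + replication
      + " would exceed pool " + pool.getPoolName()
      + "'s remaining capacity of "
      + (pool.getLimit() - pool.getBytesNeeded()) + " bytes.");
}
{code}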






[jira] [Commented] (HDFS-9365) Balancer does not work with the HDFS-6376 HA setup

2016-05-23 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297412#comment-15297412 ]

Hadoop QA commented on HDFS-9365:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} | {color:red} HDFS-9365 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12773635/h9365_20151120.patch |
| JIRA Issue | HDFS-9365 |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15529/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Balancer does not work with the HDFS-6376 HA setup
> -
>
> Key: HDFS-9365
> URL: https://issues.apache.org/jira/browse/HDFS-9365
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h9365_20151119.patch, h9365_20151120.patch
>
>
> HDFS-6376 added support for DistCp between two HA clusters. After the change,
> Balancer will use all the NNs from both the local and the remote clusters.






[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-23 Thread Xiaobing Zhou (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297411#comment-15297411 ]

Xiaobing Zhou commented on HDFS-10390:
--

v011 fixes some checkstyle issues in TestAsyncDFS.

> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch, HDFS-10390-HDFS-9924.002.patch, 
> HDFS-10390-HDFS-9924.003.patch, HDFS-10390-HDFS-9924.004.patch, 
> HDFS-10390-HDFS-9924.005.patch, HDFS-10390-HDFS-9924.006.patch, 
> HDFS-10390-HDFS-9924.007.patch, HDFS-10390-HDFS-9924.008.patch, 
> HDFS-10390-HDFS-9924.009.patch, HDFS-10390-HDFS-9924.010.patch, 
> HDFS-10390-HDFS-9924.011.patch
>
>
> This issue proposes implementing asynchronous setAcl/getAclStatus.






[jira] [Updated] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-23 Thread Xiaobing Zhou (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaobing Zhou updated HDFS-10390:
-
Attachment: HDFS-10390-HDFS-9924.011.patch

> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch, HDFS-10390-HDFS-9924.002.patch, 
> HDFS-10390-HDFS-9924.003.patch, HDFS-10390-HDFS-9924.004.patch, 
> HDFS-10390-HDFS-9924.005.patch, HDFS-10390-HDFS-9924.006.patch, 
> HDFS-10390-HDFS-9924.007.patch, HDFS-10390-HDFS-9924.008.patch, 
> HDFS-10390-HDFS-9924.009.patch, HDFS-10390-HDFS-9924.010.patch, 
> HDFS-10390-HDFS-9924.011.patch
>
>
> This issue proposes implementing asynchronous setAcl/getAclStatus.






[jira] [Commented] (HDFS-7597) DNs should not open new NN connections when webhdfs clients seek

2016-05-23 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297403#comment-15297403 ]

Hadoop QA commented on HDFS-7597:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 42s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 32s {color} | {color:red} hadoop-hdfs-project: patch generated 2 new + 55 unchanged - 0 fixed = 57 total (was 55) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 54s {color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 12s {color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 31s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.web.webhdfs.TestDataNodeUGIProvider |
|   | hadoop.hdfs.server.common.TestJspHelper |
|   | hadoop.hdfs.server.balancer.TestBalancer |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12748885/HDFS-7597.patch |
| JIRA Issue | HDFS-7597 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux df6da4756786 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6d043aa |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/15527/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt |
| unit | 

[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-23 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297394#comment-15297394 ]

Hadoop QA commented on HDFS-10390:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 20s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 31s {color} | {color:red} hadoop-hdfs-project: patch generated 2 new + 358 unchanged - 0 fixed = 360 total (was 358) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s {color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 46s {color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 52s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestAsyncDFSRename |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.mover.TestStorageMover |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805758/HDFS-10390-HDFS-9924.010.patch |
| JIRA Issue | HDFS-10390 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 836f8e241925 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6d043aa |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/15526/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt |
| unit | 

[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-23 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297380#comment-15297380 ]

Hadoop QA commented on HDFS-9782:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 5 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 40s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 20s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 8s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 31s {color} | {color:red} root: patch generated 1 new + 12 unchanged - 6 fixed = 13 total (was 18) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 41s {color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 43s {color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 122m 20s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.TestAsyncDFSRename |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805753/HDFS-9782.009.patch |
| JIRA Issue | HDFS-9782 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b2675931efd0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6d043aa |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15525/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15525/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15525/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test 

[jira] [Commented] (HDFS-9365) Balancer does not work with the HDFS-6376 HA setup

2016-05-23 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297381#comment-15297381
 ] 

Jing Zhao commented on HDFS-9365:
-

The patch looks good to me. It needs a minor rebase. Also, maybe we can add a 
new unit test in TestDFSUtil for the scenario where more than one name service 
is passed to {{getNameServiceUris}}.

+1 after addressing the comment.

> Balancer does not work with the HDFS-6376 HA setup
> -
>
> Key: HDFS-9365
> URL: https://issues.apache.org/jira/browse/HDFS-9365
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h9365_20151119.patch, h9365_20151120.patch
>
>
> HDFS-6376 added support for DistCp between two HA clusters.  After the 
> change, the Balancer will use all the NNs from both the local and the remote 
> clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7766) Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect

2016-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297364#comment-15297364
 ] 

Hudson commented on HDFS-7766:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9845 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9845/])
HDFS-7766. Add a flag to WebHDFS op=CREATE to not respond with a 307 (aw: rev 
4b0f55b6ea1665e2118fd573f72a6fcd1fce20d6)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/NoRedirectParam.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ParameterParser.java


> Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect
> --
>
> Key: HDFS-7766
> URL: https://issues.apache.org/jira/browse/HDFS-7766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-7766.01.patch, HDFS-7766.02.patch, 
> HDFS-7766.03.patch, HDFS-7766.04.patch, HDFS-7766.04.patch, 
> HDFS-7766.05.patch, HDFS-7766.06.patch
>
>
> Please see 
> https://issues.apache.org/jira/browse/HDFS-7588?focusedCommentId=14276192=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14276192
> A backwards-compatible way to fix this is to add a flag on the request 
> which would disable the redirect, i.e.
> {noformat}
> curl -i -X PUT "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=CREATE[&noredirect=<true|false>]"
> {noformat}
> returns 200 with the DN location in the response.
> This would allow browser clients to get the URL to which they should put the 
> file.
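
As an aside, a minimal Java sketch of a client using the flag; the 
{{noredirect}} parameter name matches the NoRedirectParam class in this 
commit, but the host, port, path, and the JSON response shape shown in the 
comments are placeholder assumptions:

{code:title=WebHdfsNoRedirect.java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class WebHdfsNoRedirect {
  public static void main(String[] args) throws Exception {
    // Placeholder host/port/path; adjust for a real cluster.
    URL url = new URL("http://namenode.example.com:50070/webhdfs/v1/tmp/f"
        + "?op=CREATE&noredirect=true");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setInstanceFollowRedirects(false);
    // With noredirect=true the namenode should answer 200 and carry the
    // datanode URL in a JSON body, e.g. {"Location":"http://dn:50075/..."}.
    System.out.println("HTTP " + conn.getResponseCode());
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(
        conn.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = reader.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}
{code}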



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-1312) Re-balance disks within a Datanode

2016-05-23 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-1312:

Comment: was deleted

(was: [~arpitagarwal] Thanks for the review. I have committed this to the 
feature branch.)

> Re-balance disks within a Datanode
> --
>
> Key: HDFS-1312
> URL: https://issues.apache.org/jira/browse/HDFS-1312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Travis Crawford
>Assignee: Anu Engineer
> Attachments: Architecture_and_testplan.pdf, disk-balancer-proposal.pdf
>
>
> Filing this issue in response to ``full disk woes`` on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations 
> where certain disks are full while others are significantly less used. Users 
> at many different sites have experienced this issue, and HDFS administrators 
> are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles and filling 
> disks at roughly the same rate. Possible solutions include:
> - Weighting less-used disks heavier when placing new blocks on the datanode. 
> In write-heavy environments this will still make use of all spindles, 
> equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as disks are 
> added/replaced in older cluster nodes.
> Datanodes should actively manage their local disk so operator intervention is 
> not needed.
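
As an illustrative aside on the first option above, a minimal sketch of 
free-space-weighted placement; the java.io.File-based volume model and all 
names are simplifications for illustration, not the actual DataNode 
volume-choosing code:

{code:title=WeightedVolumePicker.java}
import java.io.File;
import java.util.List;
import java.util.Random;

public class WeightedVolumePicker {
  private final Random random = new Random();

  /** Pick a storage directory with probability proportional to free space. */
  public File pick(List<File> volumes) {
    long[] free = new long[volumes.size()];
    long totalFree = 0;
    for (int i = 0; i < free.length; i++) {
      free[i] = volumes.get(i).getUsableSpace();
      totalFree += free[i];
    }
    long target = (long) (random.nextDouble() * totalFree);
    for (int i = 0; i < free.length; i++) {
      target -= free[i];
      if (target < 0) {
        return volumes.get(i);  // emptier disks cover a larger slice
      }
    }
    return volumes.get(volumes.size() - 1);  // guard against rounding
  }
}
{code}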



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-1312) Re-balance disks within a Datanode

2016-05-23 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reopened HDFS-1312:


> Re-balance disks within a Datanode
> --
>
> Key: HDFS-1312
> URL: https://issues.apache.org/jira/browse/HDFS-1312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Travis Crawford
>Assignee: Anu Engineer
> Attachments: Architecture_and_testplan.pdf, disk-balancer-proposal.pdf
>
>
> Filing this issue in response to ``full disk woes`` on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations 
> where certain disks are full while others are significantly less used. Users 
> at many different sites have experienced this issue, and HDFS administrators 
> are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles and filling 
> disks at roughly the same rate. Possible solutions include:
> - Weighting less-used disks heavier when placing new blocks on the datanode. 
> In write-heavy environments this will still make use of all spindles, 
> equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as disks are 
> added/replaced in older cluster nodes.
> Datanodes should actively manage their local disk so operator intervention is 
> not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10403) DiskBalancer: Add cancel command

2016-05-23 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10403:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~arpitagarwal] Thanks for the review. I have committed this to the feature 
branch.

> DiskBalancer: Add cancel  command
> -
>
> Key: HDFS-10403
> URL: https://issues.apache.org/jira/browse/HDFS-10403
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10403-HDFS-1312.001.patch, 
> HDFS-10403-HDFS-1312.002.patch
>
>
> Allows user to cancel an on-going disk balancing operation



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-1312) Re-balance disks within a Datanode

2016-05-23 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-1312.

  Resolution: Fixed
Hadoop Flags: Reviewed

[~arpitagarwal] Thanks for the review. I have committed this to the feature branch.

> Re-balance disks within a Datanode
> --
>
> Key: HDFS-1312
> URL: https://issues.apache.org/jira/browse/HDFS-1312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Travis Crawford
>Assignee: Anu Engineer
> Attachments: Architecture_and_testplan.pdf, disk-balancer-proposal.pdf
>
>
> Filing this issue in response to ``full disk woes`` on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations 
> where certain disks are full while others are significantly less used. Users 
> at many different sites have experienced this issue, and HDFS administrators 
> are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles and filling 
> disks at roughly the same rate. Possible solutions include:
> - Weighting less-used disks heavier when placing new blocks on the datanode. 
> In write-heavy environments this will still make use of all spindles, 
> equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as disks are 
> added/replaced in older cluster nodes.
> Datanodes should actively manage their local disk so operator intervention is 
> not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7766) Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect

2016-05-23 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-7766:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

+1 committed to trunk

> Add a flag to WebHDFS op=CREATE to not respond with a 307 redirect
> --
>
> Key: HDFS-7766
> URL: https://issues.apache.org/jira/browse/HDFS-7766
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-7766.01.patch, HDFS-7766.02.patch, 
> HDFS-7766.03.patch, HDFS-7766.04.patch, HDFS-7766.04.patch, 
> HDFS-7766.05.patch, HDFS-7766.06.patch
>
>
> Please see 
> https://issues.apache.org/jira/browse/HDFS-7588?focusedCommentId=14276192=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14276192
> A backwards-compatible way to fix this is to add a flag on the request 
> which would disable the redirect, i.e.
> {noformat}
> curl -i -X PUT "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=CREATE[&noredirect=<true|false>]"
> {noformat}
> returns 200 with the DN location in the response.
> This would allow browser clients to get the URL to which they should put the 
> file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-3135) Build a war file for HttpFS instead of packaging the server (tomcat) along with the application.

2016-05-23 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297278#comment-15297278
 ] 

Allen Wittenauer commented on HDFS-3135:


FWIW, these (httpfs and kms) are already built as WARs, so now it's just a 
matter of:

a) switching to tomcat-embedded (or something else?)
b) building the necessary glue to launch it from the java command line (a 
rough sketch of option (a) follows below)
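
A rough sketch of option (a), assuming tomcat-embed-core is on the classpath; 
the port, paths, and class name are illustrative assumptions, not the actual 
Hadoop launcher:

{code:title=EmbeddedHttpFs.java}
import org.apache.catalina.startup.Tomcat;

public class EmbeddedHttpFs {
  public static void main(String[] args) throws Exception {
    Tomcat tomcat = new Tomcat();
    tomcat.setPort(14000);                   // assumed HttpFS port
    tomcat.setBaseDir("/tmp/httpfs-work");   // scratch dir for the engine
    tomcat.getConnector();                   // force connector creation
    // Deploy the already-built WAR at the root context.
    tomcat.addWebapp("", "/opt/hadoop/httpfs.war");
    tomcat.start();
    tomcat.getServer().await();              // block like a normal daemon
  }
}
{code}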

> Build a war file for HttpFS instead of packaging the server (tomcat) along 
> with the application.
> 
>
> Key: HDFS-3135
> URL: https://issues.apache.org/jira/browse/HDFS-3135
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.23.2
>Reporter: Ravi Prakash
>  Labels: build
>
> There are several reasons why web applications should not be packaged along 
> with the server that is expected to serve them. For one, not all 
> organisations use vanilla Tomcat. There are other reasons I won't go into.
> I'm filing this bug because some of our builds failed in trying to download 
> the tomcat.tar.gz file. We then had to manually wget the file and place it in 
> downloads/ to make the build pass. I suspect the download failed because of 
> an overloaded server (Frankly, I don't really know). If someone has ideas, 
> please share them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7597) DNs should not open new NN connections when webhdfs clients seek

2016-05-23 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297236#comment-15297236
 ] 

Yongjun Zhang commented on HDFS-7597:
-

Hi [~daryn], [~cnauroth] and [~xiaobingo],

Thanks a lot for your work here.

Per my read of earlier discussion, it makes sense to me to replace the secure 
part fix of HDFS-8855 with the patch here (which is more general). However, I 
agree with Chris that we can commit the patch here and defer any further 
clean-up to a separate issue. I just kicked off a jenkins build to see if tests 
are clean.

Thanks.

 

> DNs should not open new NN connections when webhdfs clients seek
> 
>
> Key: HDFS-7597
> URL: https://issues.apache.org/jira/browse/HDFS-7597
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7597.patch, HDFS-7597.patch, HDFS-7597.patch
>
>
> Webhdfs seeks involve closing the current connection, and reissuing a new 
> open request with the new offset.  The RPC layer caches connections so the DN 
> keeps a lingering connection open to the NN.  Connection caching is in part 
> based on UGI.  Although the client used the same token for the new offset 
> request, the UGI is different, which forces the DN to open another 
> unnecessary connection to the NN.
> A job that performs many seeks will easily crash the NN due to fd exhaustion.
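
As an illustrative aside, a hedged sketch of the caching behaviour described 
above; {{ConnectionKey}} is a simplified stand-in for Hadoop's real 
Client.ConnectionId, which factors the UGI into connection-cache equality:

{code:title=ConnectionKey.java}
import java.util.Objects;

final class ConnectionKey {
  private final String serverAddress;
  private final Object ugi;  // stand-in for UserGroupInformation

  ConnectionKey(String serverAddress, Object ugi) {
    this.serverAddress = serverAddress;
    this.ugi = ugi;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof ConnectionKey)) {
      return false;
    }
    ConnectionKey that = (ConnectionKey) o;
    // A fresh UGI instance for the same token is not equal to the old one,
    // so the cache misses and the DN opens another NN connection.
    return serverAddress.equals(that.serverAddress) && ugi.equals(that.ugi);
  }

  @Override
  public int hashCode() {
    return Objects.hash(serverAddress, ugi);
  }
}
{code}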



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10452) SASL negotiation should support buffer size negotiation

2016-05-23 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-10452:
--

 Summary: SASL negotiation should support buffer size negotiation
 Key: HDFS-10452
 URL: https://issues.apache.org/jira/browse/HDFS-10452
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: encryption
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


The SASL negotiation for data transfer encryption implemented in Hadoop 
currently only supports negotiation of the cipher and QoP. The buffer size is 
not negotiated by SASL.

{code:title=SaslOutputStream.java}
  public SaslOutputStream(OutputStream outStream, SaslClient saslClient) {
    this.saslServer = null;
    this.saslClient = saslClient;
    String qop = (String) saslClient.getNegotiatedProperty(Sasl.QOP);
    this.useWrap = qop != null && !"auth".equalsIgnoreCase(qop);
    if (useWrap) {
      this.outStream = new BufferedOutputStream(outStream, 64*1024);
    } else {
      this.outStream = outStream;
    }
  }
{code}

{code:title=DataTransferSaslUtil.java}
  public static Map<String, String> createSaslPropertiesForEncryption(
      String encryptionAlgorithm) {
    Map<String, String> saslProps = Maps.newHashMapWithExpectedSize(3);
    saslProps.put(Sasl.QOP, QualityOfProtection.PRIVACY.getSaslQop());
    saslProps.put(Sasl.SERVER_AUTH, "true");
    saslProps.put("com.sun.security.sasl.digest.cipher", encryptionAlgorithm);
    return saslProps;
  }
{code}

For applications that are sensitive to buffer size, e.g., HBase, there should 
be a way to configure the buffer size.

In addition, the SASL negotiation for RPC does use the negotiated buffer size, 
but since Hadoop never actually negotiates it, the size is the default value, 
64 KB.

{code:title=SaslRpcClient.java}
  public OutputStream getOutputStream(OutputStream out) throws IOException {
    if (useWrap()) {
      // the client and server negotiate a maximum buffer size that can be
      // wrapped
      String maxBuf =
          (String) saslClient.getNegotiatedProperty(Sasl.RAW_SEND_SIZE);
      out = new BufferedOutputStream(new WrappedOutputStream(out),
          Integer.parseInt(maxBuf));
    }
    return out;
  }
{code}

We should make it possible to negotiate the buffer size for both data transfer 
and RPC.
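
As a sketch of what the negotiation could look like, the snippet below extends 
the property map with the standard {{Sasl.MAX_BUFFER}} JDK property; the extra 
bufferSize parameter and the "auth-conf" literal (the PRIVACY QoP value) are 
assumptions for illustration, not a proposed patch:

{code:title=SaslEncryptionProps.java}
import java.util.HashMap;
import java.util.Map;
import javax.security.sasl.Sasl;

public class SaslEncryptionProps {
  public static Map<String, String> createSaslPropertiesForEncryption(
      String encryptionAlgorithm, int bufferSize) {
    Map<String, String> saslProps = new HashMap<>(4);
    saslProps.put(Sasl.QOP, "auth-conf");  // QualityOfProtection.PRIVACY
    saslProps.put(Sasl.SERVER_AUTH, "true");
    saslProps.put("com.sun.security.sasl.digest.cipher", encryptionAlgorithm);
    // Offer the desired receive buffer size instead of the 64 KB default.
    saslProps.put(Sasl.MAX_BUFFER, Integer.toString(bufferSize));
    return saslProps;
  }
}
{code}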



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10370) Allow DataNode to be started with numactl

2016-05-23 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297215#comment-15297215
 ] 

John Zhuge commented on HDFS-10370:
---

Could you elaborate a bit more on the use cases?

If we are moving into the territory of NUMA awareness, shall we consider a 
solution more generic than just the Datanode, since all daemons can be made 
NUMA-aware?

Do we plan to support membind or cpubind? How do we assign daemons to 
different NUMA nodes? How do we deal with imbalance in usage? How do we 
monitor NUMA node stats (numastat(8) output)?

How do we support this feature across platforms?

> Allow DataNode to be started with numactl
> -
>
> Key: HDFS-10370
> URL: https://issues.apache.org/jira/browse/HDFS-10370
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Dave Marion
>Assignee: Dave Marion
> Attachments: HDFS-10370-1.patch, HDFS-10370-2.patch, 
> HDFS-10370-3.patch
>
>
> Allow numactl constraints to be applied to the datanode process. The 
> implementation I have in mind involves two environment variables (enable and 
> parameters) in the datanode startup process. Basically, if enabled and 
> numactl exists on the system, then start the java process using it. Provide a 
> default set of parameters, and allow the user to override the default. Wiring 
> this up for the non-jsvc use case seems straightforward. Not sure how this 
> can be supported using jsvc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8057) Move BlockReader implementation to the client implementation package

2016-05-23 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297170#comment-15297170
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8057:
---

Thanks for checking them.  

Compared with the trunk patch, the branch-2 patch should also change the uses 
of "RemoteBlockReader" to "BlockReaderRemote" in the comments/messages in 
BlockReaderFactory, BlockReaderRemote.

> Move BlockReader implementation to the client implementation package
> 
>
> Key: HDFS-8057
> URL: https://issues.apache.org/jira/browse/HDFS-8057
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
> Attachments: HDFS-8057.1.patch, HDFS-8057.2.patch, HDFS-8057.3.patch, 
> HDFS-8057.4.patch, HDFS-8057.branch-2.001.patch, HDFS-8057.branch-2.5.patch
>
>
> BlockReaderLocal, RemoteBlockReader, etc should be moved to 
> org.apache.hadoop.hdfs.client.impl.  We may as well rename RemoteBlockReader 
> to BlockReaderRemote.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-23 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297163#comment-15297163
 ] 

Xiaobing Zhou commented on HDFS-10390:
--

v010 patch is posted.
1. Used a shared instance of MiniDFSCluster in TestAsyncDFS to accelerate the 
tests.
2. Adjusted indentation in ClientNamenodeProtocolTranslatorPB#getAclStatus.
3. Added a timeout setting for TestAsyncDFS.
Thank you for the new comments.

> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch, HDFS-10390-HDFS-9924.002.patch, 
> HDFS-10390-HDFS-9924.003.patch, HDFS-10390-HDFS-9924.004.patch, 
> HDFS-10390-HDFS-9924.005.patch, HDFS-10390-HDFS-9924.006.patch, 
> HDFS-10390-HDFS-9924.007.patch, HDFS-10390-HDFS-9924.008.patch, 
> HDFS-10390-HDFS-9924.009.patch, HDFS-10390-HDFS-9924.010.patch
>
>
> This is proposed to implement asynchronous setAcl/getAclStatus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-23 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10390:
-
Attachment: HDFS-10390-HDFS-9924.010.patch

> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch, HDFS-10390-HDFS-9924.002.patch, 
> HDFS-10390-HDFS-9924.003.patch, HDFS-10390-HDFS-9924.004.patch, 
> HDFS-10390-HDFS-9924.005.patch, HDFS-10390-HDFS-9924.006.patch, 
> HDFS-10390-HDFS-9924.007.patch, HDFS-10390-HDFS-9924.008.patch, 
> HDFS-10390-HDFS-9924.009.patch, HDFS-10390-HDFS-9924.010.patch
>
>
> This is proposed to implement asynchronous setAcl/getAclStatus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7597) DNs should not open new NN connections when webhdfs clients seek

2016-05-23 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297148#comment-15297148
 ] 

Xiao Chen commented on HDFS-7597:
-

Thanks all for the contribution and discussion.

[~cnauroth], do you think we can move forward to commit this patch? Please let 
me know if there's anything I can help with. Thanks!

> DNs should not open new NN connections when webhdfs clients seek
> 
>
> Key: HDFS-7597
> URL: https://issues.apache.org/jira/browse/HDFS-7597
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7597.patch, HDFS-7597.patch, HDFS-7597.patch
>
>
> Webhdfs seeks involve closing the current connection, and reissuing a new 
> open request with the new offset.  The RPC layer caches connections so the DN 
> keeps a lingering connection open to the NN.  Connection caching is in part 
> based on UGI.  Although the client used the same token for the new offset 
> request, the UGI is different, which forces the DN to open another 
> unnecessary connection to the NN.
> A job that performs many seeks will easily crash the NN due to fd exhaustion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-23 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297129#comment-15297129
 ] 

Tsz Wo Nicholas Sze commented on HDFS-10390:


The patch looks good.  Please make the tests run in a shorter time.  We should 
reuse the MiniDFSCluster.

> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch, HDFS-10390-HDFS-9924.002.patch, 
> HDFS-10390-HDFS-9924.003.patch, HDFS-10390-HDFS-9924.004.patch, 
> HDFS-10390-HDFS-9924.005.patch, HDFS-10390-HDFS-9924.006.patch, 
> HDFS-10390-HDFS-9924.007.patch, HDFS-10390-HDFS-9924.008.patch, 
> HDFS-10390-HDFS-9924.009.patch
>
>
> This is proposed to implement asynchronous setAcl/getAclStatus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-23 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-9782:
---
Status: Patch Available  (was: Open)

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch, HDFS-9782.007.patch, HDFS-9782.008.patch, 
> HDFS-9782.009.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead, that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.
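
As an illustrative aside, a minimal sketch of such a jittered schedule: roll 
at a configurable interval, offset by a random per-host amount so flushes 
don't align. All names and the minute-based units are assumptions, not the 
actual patch:

{code:title=JitteredRollSchedule.java}
import java.util.Random;
import java.util.concurrent.TimeUnit;

public class JitteredRollSchedule {
  private final long intervalMs;
  private final long maxJitterMs;
  private final Random random = new Random();

  public JitteredRollSchedule(long intervalMinutes, long maxJitterMinutes) {
    // Assumes intervalMinutes > 0.
    this.intervalMs = TimeUnit.MINUTES.toMillis(intervalMinutes);
    this.maxJitterMs = TimeUnit.MINUTES.toMillis(maxJitterMinutes);
  }

  /** Next roll: the next interval boundary plus a per-host random offset. */
  public long nextRollTimeMs(long nowMs) {
    long nextBoundary = (nowMs / intervalMs + 1) * intervalMs;
    return nextBoundary + (long) (random.nextDouble() * maxJitterMs);
  }
}
{code}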



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-23 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-9782:
---
Attachment: HDFS-9782.009.patch

This should address the checkstyle issues.

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch, HDFS-9782.007.patch, HDFS-9782.008.patch, 
> HDFS-9782.009.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead, that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10370) Allow DataNode to be started with numactl

2016-05-23 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297074#comment-15297074
 ] 

John Zhuge commented on HDFS-10370:
---

Thanks [~dlmarion] for working on the issue. Could you please update the patch? 
It does not apply on the latest trunk. Could you please also upload a patch for 
{{branch-2}}?

For patch naming convention, please read 
https://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch.


> Allow DataNode to be started with numactl
> -
>
> Key: HDFS-10370
> URL: https://issues.apache.org/jira/browse/HDFS-10370
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Dave Marion
>Assignee: Dave Marion
> Attachments: HDFS-10370-1.patch, HDFS-10370-2.patch, 
> HDFS-10370-3.patch
>
>
> Allow numactl constraints to be applied to the datanode process. The 
> implementation I have in mind involves two environment variables (enable and 
> parameters) in the datanode startup process. Basically, if enabled and 
> numactl exists on the system, then start the java process using it. Provide a 
> default set of parameters, and allow the user to override the default. Wiring 
> this up for the non-jsvc use case seems straightforward. Not sure how this 
> can be supported using jsvc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10448) CacheManager#checkLimit always assumes a replication factor of 1

2016-05-23 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-10448:

Summary: CacheManager#checkLimit  always assumes a replication factor of 1  
(was: CacheManager#checkLimit  not correctly)

> CacheManager#checkLimit  always assumes a replication factor of 1
> -
>
> Key: HDFS-10448
> URL: https://issues.apache.org/jira/browse/HDFS-10448
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10448.001.patch
>
>
> The logic in {{CacheManager#checkLimit}} is not correct. The method performs 
> three steps:
> First, it computes the needed bytes for the specific path.
> {code}
> CacheDirectiveStats stats = computeNeeded(path, replication);
> {code}
> But the param {{replication}} is not used there, so the bytesNeeded reflects 
> only a single replica.
> {code}
> return new CacheDirectiveStats.Builder()
> .setBytesNeeded(requestedBytes)
> .setFilesCached(requestedFiles)
> .build();
> {code}
> Second, the result should therefore be multiplied by the replication factor 
> when comparing against the limit, because {{computeNeeded}} did not apply 
> the replication.
> {code}
> pool.getBytesNeeded() + (stats.getBytesNeeded() * replication) > 
> pool.getLimit()
> {code}
> Third, when the size exceeds the limit, the warning message divides by 
> replication, even though {{stats.getBytesNeeded()}} already reflects only a 
> single replica.
> {code}
>   throw new InvalidRequestException("Caching path " + path + " of size "
>   + stats.getBytesNeeded() / replication + " bytes at replication "
>   + replication + " would exceed pool " + pool.getPoolName()
>   + "'s remaining capacity of "
>   + (pool.getLimit() - pool.getBytesNeeded()) + " bytes.");
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10448) CacheManager#checkLimit not correctly

2016-05-23 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297068#comment-15297068
 ] 

Colin Patrick McCabe commented on HDFS-10448:
-

This is a good find.  I think that {{computeNeeded}} should take replication 
into account; the fact that it doesn't currently is a bug. Then there would 
be no need to change the callers of {{computeNeeded}}.
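
A hedged sketch of that direction, folding replication into {{computeNeeded}} 
so every caller sees the fully scaled figure; the simplified signature and the 
requestedBytes/requestedFiles inputs stand in for the real traversal logic:

{code:title=ComputeNeededSketch.java}
import org.apache.hadoop.hdfs.protocol.CacheDirectiveStats;

public class ComputeNeededSketch {
  static CacheDirectiveStats computeNeeded(long requestedBytes,
      long requestedFiles, short replication) {
    return new CacheDirectiveStats.Builder()
        // Scale by replication here so callers need no change.
        .setBytesNeeded(requestedBytes * replication)
        .setFilesCached(requestedFiles)
        .build();
  }
}
{code}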

> CacheManager#checkLimit  not correctly
> --
>
> Key: HDFS-10448
> URL: https://issues.apache.org/jira/browse/HDFS-10448
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10448.001.patch
>
>
> The logic in {{CacheManager#checkLimit}} is not correct. The method performs 
> three steps:
> First, it computes the needed bytes for the specific path.
> {code}
> CacheDirectiveStats stats = computeNeeded(path, replication);
> {code}
> But the param {{replication}} is not used there, so the bytesNeeded reflects 
> only a single replica.
> {code}
> return new CacheDirectiveStats.Builder()
> .setBytesNeeded(requestedBytes)
> .setFilesCached(requestedFiles)
> .build();
> {code}
> Second, the result should therefore be multiplied by the replication factor 
> when comparing against the limit, because {{computeNeeded}} did not apply 
> the replication.
> {code}
> pool.getBytesNeeded() + (stats.getBytesNeeded() * replication) > 
> pool.getLimit()
> {code}
> Third, when the size exceeds the limit, the warning message divides by 
> replication, even though {{stats.getBytesNeeded()}} already reflects only a 
> single replica.
> {code}
>   throw new InvalidRequestException("Caching path " + path + " of size "
>   + stats.getBytesNeeded() / replication + " bytes at replication "
>   + replication + " would exceed pool " + pool.getPoolName()
>   + "'s remaining capacity of "
>   + (pool.getLimit() - pool.getBytesNeeded()) + " bytes.");
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8872) Reporting of missing blocks is different in fsck and namenode ui/metasave

2016-05-23 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297054#comment-15297054
 ] 

Rushabh S Shah commented on HDFS-8872:
--

bq. Actually after HDFS-7933, fsck includes decommissioning nodes and won't 
mark it as missing anymore.
It includes the decommissioned nodes also.
See the code below.
{code:title=NamenodeFsck.java|borderStyle=solid}
  int totalReplicas = liveReplicas + decommissionedReplicas +
      decommissioningReplicas;
  ..
  ..
  if (totalReplicas == 0) {
    report.append(" MISSING!");
    res.addMissing(block.toString(), block.getNumBytes());
    missing++;
    missize += block.getNumBytes();
  }
{code}

> Reporting of missing blocks is different in fsck and namenode ui/metasave
> -
>
> Key: HDFS-8872
> URL: https://issues.apache.org/jira/browse/HDFS-8872
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>
> The Namenode UI and metasave will not report a block as missing if the only 
> replica is on a decommissioning/decommissioned node, while fsck will show it 
> as MISSING.
> Since a decommissioned node can be formatted/removed at any time, we can 
> actually lose the block.
> It's better to alert on the Namenode UI if the only copy is on a 
> decommissioned/decommissioning node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8872) Reporting of missing blocks is different in fsck and namenode ui/metasave

2016-05-23 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15296892#comment-15296892
 ] 

Rushabh S Shah commented on HDFS-8872:
--

Thanks [~mingma] for mentioning HDFS-7933.
bq. For this scenario, it is debatable if the block should be marked as 
missing, it isn't uncommon for admins to decommission multiple nodes across 
racks, which means all 3 replica nodes will be in decommissioning state.
I agree the block should not be marked as missing if all the replicas are on 
DecommissionING nodes.
But we *should* mark the block as missing if all the replicas are on 
DecommissionED nodes, since we can take a Decommissioned node out of rotation 
at any time.
We have seen multiple cases in which all the replicas are on Decommissioned 
nodes.
Any thoughts?
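
A minimal sketch of that policy with simplified replica counts; the class, 
method name, and signature are hypothetical, not from any patch:

{code:title=MissingBlockPolicy.java}
public class MissingBlockPolicy {
  /**
   * A block is reported missing when it has no live replica and nothing is
   * still decommissioning: either no replicas exist at all, or every replica
   * sits on a DecommissionED node that may be wiped at any time.
   */
  static boolean shouldMarkMissing(int live, int decommissioning,
      int decommissioned) {
    if (live > 0) {
      return false;  // at least one healthy replica exists
    }
    if (decommissioning > 0) {
      return false;  // the node may still finish and keep serving the block
    }
    return true;
  }
}
{code}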

> Reporting of missing blocks is different in fsck and namenode ui/metasave
> -
>
> Key: HDFS-8872
> URL: https://issues.apache.org/jira/browse/HDFS-8872
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>
> The Namenode UI and metasave will not report a block as missing if the only 
> replica is on a decommissioning/decommissioned node, while fsck will show it 
> as MISSING.
> Since a decommissioned node can be formatted/removed at any time, we can 
> actually lose the block.
> It's better to alert on the Namenode UI if the only copy is on a 
> decommissioned/decommissioning node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7374) Allow decommissioning of dead DataNodes

2016-05-23 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15296755#comment-15296755
 ] 

John Zhuge commented on HDFS-7374:
--

{color:red}WARNING{color}: as already pointed out by [~xyao], due to a typo in 
the HDFS-7374 commit message, there are 2 commits with the message prefix 
"HDFS-7373":
{noformat}
c0d666c HDFS-7373. Clean up temporary files after fsimage transfer failures. 
Contributed by Kihwal Lee
5bd048e HDFS-7373. Allow decommissioning of dead DataNodes. Contributed by Zhe 
Zhang.
{noformat}

> Allow decommissioning of dead DataNodes
> ---
>
> Key: HDFS-7374
> URL: https://issues.apache.org/jira/browse/HDFS-7374
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Fix For: 2.7.0
>
> Attachments: HDFS-7374-001.patch, HDFS-7374-002.patch, 
> HDFS-7374.003.patch
>
>
> We have seen the use case of decommissioning DataNodes that are already dead 
> or unresponsive, and not expected to rejoin the cluster.
> The logic introduced by HDFS-6791 will mark those nodes as 
> {{DECOMMISSION_INPROGRESS}}, with a hope that they can come back and finish 
> the decommission work. If an upper layer application is monitoring the 
> decommissioning progress, it will hang forever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7373) Clean up temporary files after fsimage transfer failures

2016-05-23 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15296750#comment-15296750
 ] 

John Zhuge commented on HDFS-7373:
--

{color:red}WARNING{color}: due to a typo in the HDFS-7374 commit message, 
there are 2 commits with the message prefix "HDFS-7373":
{noformat}
c0d666c HDFS-7373. Clean up temporary files after fsimage transfer failures. 
Contributed by Kihwal Lee
5bd048e HDFS-7373. Allow decommissioning of dead DataNodes. Contributed by Zhe 
Zhang.
{noformat}

> Clean up temporary files after fsimage transfer failures
> 
>
> Key: HDFS-7373
> URL: https://issues.apache.org/jira/browse/HDFS-7373
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 2.7.0
>
> Attachments: HDFS-7373.patch
>
>
> When an fsimage (e.g. checkpoint) transfer fails, a temporary file is left in 
> each storage directory.  If the size of name space is large, these files can 
> take up quite a bit of space.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10366) libhdfs++: Add SASL authentication

2016-05-23 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-10366:
--
Attachment: HDFS-10366.HDFS-8707.000.patch

The current patch introduces the sasl_protocol/sasl_engine framework and has 
been tested with GSASL when it is available on the build machine.

HDFS-10450 will add Cyrus SASL, an ASF-compatible SASL engine.

I'm looking into how we can test with a mini cluster and Kerberos.  I'll 
capture that in a separate bug.

> libhdfs++: Add SASL authentication
> --
>
> Key: HDFS-10366
> URL: https://issues.apache.org/jira/browse/HDFS-10366
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-10366.HDFS-8707.000.patch
>
>
> Enable communication with HDFS clusters that have KERBEROS authentication 
> enabled; use tokens from NN when communicating with DN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10451) libhdfs++: Look up kerberos principal by username

2016-05-23 Thread Bob Hansen (JIRA)
Bob Hansen created HDFS-10451:
-

 Summary: libhdfs++: Look up kerberos principal by username
 Key: HDFS-10451
 URL: https://issues.apache.org/jira/browse/HDFS-10451
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Bob Hansen


SaslProtocol::Negotiate passes the user name directly to the sasl_engine for 
authentication; the SASL engines require that.

HDFS maps principals to usernames by stripping off the realm and hostname.  We 
should query the ccache for all available tickets, and find the one that best 
matches the passed-in username using the HDFS semantics.  E.g. if the username 
is client1, and we have a ticket for client1/machine1.foo@foo.com, we 
should use that ticket.

If multiple tickets match, the one that most exactly matches the username 
(host, realm) should be used.
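
As an illustrative aside, a small sketch of that matching rule, assuming the 
HDFS short name is the part of the principal before the first '/' or '@'; all 
class and method names here are hypothetical:

{code:title=PrincipalMatcher.java}
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

public class PrincipalMatcher {
  /** Short name: the principal up to the first '/' or '@'. */
  static String shortName(String principal) {
    int cut = principal.indexOf('/');
    if (cut < 0) {
      cut = principal.indexOf('@');
    }
    return cut < 0 ? principal : principal.substring(0, cut);
  }

  /** Exact principal match wins; otherwise the first matching short name. */
  static Optional<String> bestMatch(String username, List<String> principals) {
    if (principals.contains(username)) {
      return Optional.of(username);
    }
    return principals.stream()
        .filter(p -> shortName(p).equals(username))
        .findFirst();
  }

  public static void main(String[] args) {
    List<String> ccache =
        Arrays.asList("client1/machine1.foo@foo.com", "client2@foo.com");
    // Prints Optional[client1/machine1.foo@foo.com]
    System.out.println(bestMatch("client1", ccache));
  }
}
{code}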



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10450) libhdfs++: Implement Cyrus SASL support in sasl_engine.cc

2016-05-23 Thread Bob Hansen (JIRA)
Bob Hansen created HDFS-10450:
-

 Summary: libhdfs++: Implement Cyrus SASL support in 
sasl_engine.cc
 Key: HDFS-10450
 URL: https://issues.apache.org/jira/browse/HDFS-10450
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Bob Hansen


The current sasl_engine implementation was proven out using GSASL, which does 
not have an ASF-approved license.  It includes a framework to use Cyrus SASL 
(libsasl2.so) instead; we should complete that implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10441) libhdfs++: HA namenode support

2016-05-23 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15296249#comment-15296249
 ] 

Bob Hansen commented on HDFS-10441:
---

I realize that this was a "preview patch," and that there was a lot of cruft.  
I just wanted to give a nice checklist of "don't forget this before the Real 
Patch lands."  Checklists are awesome.

> libhdfs++: HA namenode support
> --
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
> Attachments: HDFS-10441.HDFS-8707.000.patch
>
>
> If a cluster is HA enabled then do proper failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9650) Problem is logging of "Redundant addStoredBlock request received"

2016-05-23 Thread Chackaravarthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15296209#comment-15296209
 ] 

Chackaravarthy commented on HDFS-9650:
--

I changed the log statement to 'debug' level (restarting the NN) and was able 
to do a rolling restart of the DNs. The problem was that the log level was 
WARN, so reducing the log level resolved it. I can see this change is already 
available in the trunk code.

It seems HDFS-9906 fixes this. It's available in 2.8.0.

> Problem is logging of "Redundant addStoredBlock request received"
> -
>
> Key: HDFS-9650
> URL: https://issues.apache.org/jira/browse/HDFS-9650
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Frode Halvorsen
>
> Description:
> Hadoop 2.7.1. 2 namenodes in HA. 14 datanodes.
> Enough CPU, disk and RAM.
> Just discovered that some datanodes must have been corrupted somehow.
> When restarting a 'defect' datanode (it works without failure except when 
> restarting), the active namenode suddenly logs a lot of: "Redundant 
> addStoredBlock request received"
> and finally the failover-controller takes the namenode down and fails over 
> to the other node. This node also starts logging the same, and as soon as 
> the first node is back online, the failover-controller again kills the 
> active node and fails over.
> This node was now started after the datanode, doesn't log "Redundant 
> addStoredBlock request received" anymore, and a restart of the second 
> namenode works fine.
> If I restart the datanode again, the process repeats itself.
> The problem is the logging of "Redundant addStoredBlock request received", 
> and why does it happen?
> The failover-controller acts the same way as it did on 2.5/6 when we had a 
> lot of 'block does not belong to any replica' messages. The namenode is too 
> busy to respond to heartbeats, and is taken down...
> To resolve this, I have to take down the datanode, delete all data from it, 
> and start it up. Then the cluster will reproduce the missing blocks, and the 
> failing datanode works fine again...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10440) Improve DataNode web UI

2016-05-23 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15296179#comment-15296179
 ] 

Vinayakumar B commented on HDFS-10440:
--

As pointed out by [~kihwal], I think you can put the block pools' information 
in a separate table, because there will be multiple block pools, each with its 
own namenode actor threads.

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.0, 2.6.0, 2.7.0
>Reporter: Weiwei Yang
> Attachments: datanode_UI_mockup.jpg, dn_UI_logs.jpg
>
>
> At present, the datanode web UI doesn't have much information except for the 
> node name and port. We propose to add more information similar to the 
> namenode UI, including:
> * Static info (version, block pool  and cluster ID)
> * Running state (active, decommissioning, decommissioned or lost etc)
> * Summary (blocks, capacity, storage etc)
> * Utilities (logs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10434) Fix intermittent test failure of TestDataNodeErasureCodingMetrics

2016-05-23 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15296176#comment-15296176
 ] 

Li Bo commented on HDFS-10434:
--

Thanks to [~rakeshr] for the detailed explanation.  The situation described 
will cause the test case to fail, and the patch fixes the problem.  +1 for the 
patch.

> Fix intermittent test failure of TestDataNodeErasureCodingMetrics
> -
>
> Key: HDFS-10434
> URL: https://issues.apache.org/jira/browse/HDFS-10434
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10434-00.patch
>
>
> This jira is to fix the test case failure.
> Reference : 
> [Build15485_TestDataNodeErasureCodingMetrics_testEcTasks|https://builds.apache.org/job/PreCommit-HDFS-Build/15485/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeErasureCodingMetrics/testEcTasks/]
> {code}
> Error Message
> Bad value for metric EcReconstructionTasks expected:<1> but was:<0>
> Stacktrace
> java.lang.AssertionError: Bad value for metric EcReconstructionTasks 
> expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertCounter(MetricsAsserts.java:228)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testEcTasks(TestDataNodeErasureCodingMetrics.java:92)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9650) Problem is logging of "Redundant addStoredBlock request received"

2016-05-23 Thread Frode Halvorsen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15296169#comment-15296169
 ] 

Frode Halvorsen commented on HDFS-9650:
---

Not resolved in 2.7.2. We still have the same issues every time we restart DNs.

> Problem is logging of "Redundant addStoredBlock request received"
> -
>
> Key: HDFS-9650
> URL: https://issues.apache.org/jira/browse/HDFS-9650
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Frode Halvorsen
>
> Description:
> Hadoop 2.7.1. 2 namenodes in HA. 14 datanodes.
> Enough CPU, disk and RAM.
> Just discovered that some datanodes must have been corrupted somehow.
> When restarting a 'defect' datanode (it works without failure except when 
> restarting), the active namenode suddenly logs a lot of: "Redundant 
> addStoredBlock request received"
> and finally the failover-controller takes the namenode down and fails over 
> to the other node. This node also starts logging the same, and as soon as 
> the first node is back online, the failover-controller again kills the 
> active node and fails over.
> This node was now started after the datanode, doesn't log "Redundant 
> addStoredBlock request received" anymore, and a restart of the second 
> namenode works fine.
> If I restart the datanode again, the process repeats itself.
> The problem is the logging of "Redundant addStoredBlock request received", 
> and why does it happen?
> The failover-controller acts the same way as it did on 2.5/6 when we had a 
> lot of 'block does not belong to any replica' messages. The namenode is too 
> busy to respond to heartbeats, and is taken down...
> To resolve this, I have to take down the datanode, delete all data from it, 
> and start it up. Then the cluster will reproduce the missing blocks, and the 
> failing datanode works fine again...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10434) Fix intermittent test failure of TestDataNodeErasureCodingMetrics

2016-05-23 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15296157#comment-15296157
 ] 

Rakesh R commented on HDFS-10434:
-

Thanks [~libo-intel]. We update the DN metrics in the finally block of the 
{{StripedReconstructor}} thread, as shown below. The failure occurs because 
{{StripedFileTestUtil.waitForReconstructionFinished()}} waits for the block 
recovery but does not wait for the StripedReconstructor#run() finally block to 
finish executing. You can debug the failed test case 
{{TestDataNodeErasureCodingMetrics#testEcTasks}} by putting a break point at 
the finally block: {{StripedFileTestUtil.waitForReconstructionFinished(file, 
fs, GROUPSIZE);}} returns and the test case fails before the metrics are 
updated. To fix this, I added a grace period so that the thread gets a chance 
to execute the finally block and update the metrics data.

StripedReconstructor.java
{code}
} finally {
  datanode.decrementXmitsInProgress();
  // the metric asserted by testEcTasks is only incremented here
  datanode.getMetrics().incrECReconstructionTasks();
  stripedReader.close();
  stripedWriter.close();
}
{code}
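
For illustration, a minimal sketch of such a grace period (not necessarily 
what the attached patch does; it assumes the {{GenericTestUtils.waitFor}} and 
{{MetricsAsserts}} helpers and the test's {{datanode}} handle):

{code}
// Assumed imports for the sketch:
// import com.google.common.base.Supplier;
// import org.apache.hadoop.test.GenericTestUtils;
// import static org.apache.hadoop.test.MetricsAsserts.getLongCounter;
// import static org.apache.hadoop.test.MetricsAsserts.getMetrics;

// Rather than asserting the counter immediately after
// waitForReconstructionFinished(), poll until the finally block of
// StripedReconstructor#run() has had a chance to update the metric.
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    return getLongCounter("EcReconstructionTasks",
        getMetrics(datanode.getMetrics().name())) >= 1;
  }
}, 500, 30000); // re-check every 500 ms, give up after 30 s
{code}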


> Fix intermittent test failure of TestDataNodeErasureCodingMetrics
> -
>
> Key: HDFS-10434
> URL: https://issues.apache.org/jira/browse/HDFS-10434
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10434-00.patch
>
>
> This jira is to fix the test case failure.
> Reference : 
> [Build15485_TestDataNodeErasureCodingMetrics_testEcTasks|https://builds.apache.org/job/PreCommit-HDFS-Build/15485/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeErasureCodingMetrics/testEcTasks/]
> {code}
> Error Message
> Bad value for metric EcReconstructionTasks expected:<1> but was:<0>
> Stacktrace
> java.lang.AssertionError: Bad value for metric EcReconstructionTasks 
> expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertCounter(MetricsAsserts.java:228)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testEcTasks(TestDataNodeErasureCodingMetrics.java:92)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10440) Improve DataNode web UI

2016-05-23 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15296149#comment-15296149
 ] 

Weiwei Yang commented on HDFS-10440:


Thanks [~kihwal], I agree. I just attached some UI mockups; let me know if 
you have any comments.

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.0, 2.6.0, 2.7.0
>Reporter: Weiwei Yang
> Attachments: datanode_UI_mockup.jpg, dn_UI_logs.jpg
>
>
> At present, the datanode web UI doesn't have much information except for 
> node name and port. Propose to add more information, similar to the 
> namenode UI, including:
> * Static info (version, block pool and cluster ID)
> * Running state (active, decommissioning, decommissioned or lost, etc.)
> * Summary (blocks, capacity, storage, etc.)
> * Utilities (logs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10440) Improve DataNode web UI

2016-05-23 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10440:
---
Attachment: dn_UI_logs.jpg

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.0, 2.6.0, 2.7.0
>Reporter: Weiwei Yang
> Attachments: datanode_UI_mockup.jpg, dn_UI_logs.jpg
>
>
> At present, the datanode web UI doesn't have much information except for 
> node name and port. Propose to add more information, similar to the 
> namenode UI, including:
> * Static info (version, block pool and cluster ID)
> * Running state (active, decommissioning, decommissioned or lost, etc.)
> * Summary (blocks, capacity, storage, etc.)
> * Utilities (logs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10440) Improve DataNode web UI

2016-05-23 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10440:
---
Attachment: datanode_UI_mockup.jpg

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.0, 2.6.0, 2.7.0
>Reporter: Weiwei Yang
> Attachments: datanode_UI_mockup.jpg, dn_UI_logs.jpg
>
>
> At present, the datanode web UI doesn't have much information except for 
> node name and port. Propose to add more information, similar to the 
> namenode UI, including:
> * Static info (version, block pool and cluster ID)
> * Running state (active, decommissioning, decommissioned or lost, etc.)
> * Summary (blocks, capacity, storage, etc.)
> * Utilities (logs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10440) Improve DataNode web UI

2016-05-23 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-10440:
---
Summary: Improve DataNode web UI  (was: Add more information to DataNode 
web UI)

> Improve DataNode web UI
> ---
>
> Key: HDFS-10440
> URL: https://issues.apache.org/jira/browse/HDFS-10440
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.5.0, 2.6.0, 2.7.0
>Reporter: Weiwei Yang
>
> At present, the datanode web UI doesn't have much information except for 
> node name and port. Propose to add more information, similar to the 
> namenode UI, including:
> * Static info (version, block pool and cluster ID)
> * Running state (active, decommissioning, decommissioned or lost, etc.)
> * Summary (blocks, capacity, storage, etc.)
> * Utilities (logs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10434) Fix intermittent test failure of TestDataNodeErasureCodingMetrics

2016-05-23 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15296112#comment-15296112
 ] 

Li Bo commented on HDFS-10434:
--

Thanks to [~rakeshr] for finding the problem. 
{{DFSTestUtil.waitForDatanodeState()}} and 
{{StripedFileTestUtil.waitForReconstructionFinished()}} should make sure that 
the reconstruction work is finished before checking the metrics, so I am 
confused about why these two calls do not take effect. Does the failure no 
longer happen after applying the patch (maybe run the test case more than 20 
times)?
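
To illustrate the race described in the earlier comment (a hypothetical 
reduction, not actual HDFS code): the worker thread makes the reconstruction 
result observable before its finally block runs, so a waiter keyed on block 
state alone can return in between.

{code}
// 'blockVisible' stands in for what waitForReconstructionFinished()
// observes; 'ecReconstructionTasks' for what assertCounter() observes.
class Worker implements Runnable {
  volatile boolean blockVisible = false;
  volatile long ecReconstructionTasks = 0;

  @Override
  public void run() {
    try {
      blockVisible = true;        // result becomes observable here...
    } finally {
      ecReconstructionTasks++;    // ...metric is updated strictly later
    }
  }
}
// A test thread that proceeds as soon as blockVisible == true can still
// read ecReconstructionTasks == 0, which is exactly the reported failure.
{code}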

> Fix intermittent test failure of TestDataNodeErasureCodingMetrics
> -
>
> Key: HDFS-10434
> URL: https://issues.apache.org/jira/browse/HDFS-10434
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10434-00.patch
>
>
> This jira is to fix the test case failure.
> Reference : 
> [Build15485_TestDataNodeErasureCodingMetrics_testEcTasks|https://builds.apache.org/job/PreCommit-HDFS-Build/15485/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeErasureCodingMetrics/testEcTasks/]
> {code}
> Error Message
> Bad value for metric EcReconstructionTasks expected:<1> but was:<0>
> Stacktrace
> java.lang.AssertionError: Bad value for metric EcReconstructionTasks 
> expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertCounter(MetricsAsserts.java:228)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics.testEcTasks(TestDataNodeErasureCodingMetrics.java:92)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10449) TestRollingFileSystemSinkWithHdfs#testFailedClose() fails on branch-2

2016-05-23 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15296055#comment-15296055
 ] 

Takanobu Asanuma commented on HDFS-10449:
-

I'd like to work on this jira, but I can't assign it to myself now...

> TestRollingFileSystemSinkWithHdfs#testFailedClose() fails on branch-2
> -
>
> Key: HDFS-10449
> URL: https://issues.apache.org/jira/browse/HDFS-10449
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
> Environment: jenkins
>Reporter: Takanobu Asanuma
>
> {noformat}
> Running org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.263 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
> testFailedClose(org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs)
>   Time elapsed: 8.729 sec  <<< FAILURE!
> java.lang.AssertionError: No exception was generated while stopping sink even 
> though HDFS was unavailable
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs.testFailedClose(TestRollingFileSystemSinkWithHdfs.java:187)
> {noformat}
> This passes fine on trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9650) Problem is logging of "Redundant addStoredBlock request received"

2016-05-23 Thread Chackaravarthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15296037#comment-15296037
 ] 

Chackaravarthy commented on HDFS-9650:
--

[~frha] Have you resolved this problem or been able to root-cause it? I am 
also hitting a similar issue: a DataNode restart leads to these logs in the 
NameNode, making service RPC latency huge, so we are not able to perform DN 
restarts.

> Problem is logging of "Redundant addStoredBlock request received"
> -
>
> Key: HDFS-9650
> URL: https://issues.apache.org/jira/browse/HDFS-9650
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Frode Halvorsen
>
> Description:
> Hadoop 2.7.1. 2 namenodes in HA. 14 datanodes.
> Enough CPU, disk and RAM.
> Just discovered that some datanodes must have been corrupted somehow.
> When restarting a 'defect' datanode (it works without failure except when 
> restarting), the active namenode suddenly logs a lot of "Redundant 
> addStoredBlock request received",
> and finally the failover-controller takes the namenode down and fails over 
> to the other node. That node also starts logging the same, and as soon as 
> the first node is back online, the failover-controller again kills the 
> active node and fails over.
> The now-active node was started after the datanode, doesn't log "Redundant 
> addStoredBlock request received" anymore, and a restart of the second 
> namenode works fine.
> If I restart the datanode again, the process repeats itself.
> The problem is the logging of "Redundant addStoredBlock request received": 
> why does it happen?
> The failover-controller acts the same way as it did on 2.5/2.6 when we had 
> a lot of 'block does not belong to any replica' messages: the namenode is 
> too busy to respond to heartbeats, and is taken down...
> To resolve this, I have to take down the datanode, delete all data from it, 
> and start it up. The cluster then re-replicates the missing blocks, and the 
> failing datanode works fine again...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8057) Move BlockReader implementation to the client implementation package

2016-05-23 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15296002#comment-15296002
 ] 

Takanobu Asanuma commented on HDFS-8057:


Sure, I checked the test results.

* {{TestFileTruncate.testUpgradeAndRestart}} and 
{{TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks}} pass on my 
laptop.
* {{TestDistributedFileSystem.testDFSCloseOrdering}} has already been filed 
as HDFS-10415.
* {{TestRollingFileSystemSinkWithHdfs.testFailedClose}} also fails on 
branch-2; I filed HDFS-10449 for it.

I think none of the failed tests are related to this patch. Could you review it?

> Move BlockReader implementation to the client implementation package
> 
>
> Key: HDFS-8057
> URL: https://issues.apache.org/jira/browse/HDFS-8057
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Takanobu Asanuma
> Attachments: HDFS-8057.1.patch, HDFS-8057.2.patch, HDFS-8057.3.patch, 
> HDFS-8057.4.patch, HDFS-8057.branch-2.001.patch, HDFS-8057.branch-2.5.patch
>
>
> BlockReaderLocal, RemoteBlockReader, etc should be moved to 
> org.apache.hadoop.hdfs.client.impl.  We may as well rename RemoteBlockReader 
> to BlockReaderRemote.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15295986#comment-15295986
 ] 

Hadoop QA commented on HDFS-10390:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 4s 
{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 18s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 102m 0s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestBpServiceActorScheduler |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805571/HDFS-10390-HDFS-9924.009.patch
 |
| JIRA Issue | HDFS-10390 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e2a54529a589 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6161d9b |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15524/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15524/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15524/testReport/ |
| modules | C:  hadoop-hdfs-project/hadoop-hdfs-client   

[jira] [Created] (HDFS-10449) TestRollingFileSystemSinkWithHdfs#testFailedClose() fails on branch-2

2016-05-23 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-10449:
---

 Summary: TestRollingFileSystemSinkWithHdfs#testFailedClose() fails 
on branch-2
 Key: HDFS-10449
 URL: https://issues.apache.org/jira/browse/HDFS-10449
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
 Environment: jenkins
Reporter: Takanobu Asanuma


{noformat}
Running org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.263 sec <<< 
FAILURE! - in org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs
testFailedClose(org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs)
  Time elapsed: 8.729 sec  <<< FAILURE!
java.lang.AssertionError: No exception was generated while stopping sink even 
though HDFS was unavailable
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs.testFailedClose(TestRollingFileSystemSinkWithHdfs.java:187)
{noformat}

This passes fine on trunk.
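
For context, a rough sketch of the shape this test presumably has, inferred 
only from the assertion message above; the {{cluster}} and {{ms}} handles are 
assumptions, not the actual test code:

{code}
// Inferred shape of testFailedClose(), names assumed:
// import static org.junit.Assert.fail;
// import org.apache.hadoop.metrics2.MetricsException;
cluster.shutdown();   // take HDFS down so the sink's close must fail
try {
  ms.stop();          // stopping the MetricsSystem closes the rolling sink
  fail("No exception was generated while stopping sink even though "
      + "HDFS was unavailable");
} catch (MetricsException e) {
  // expected on trunk; the branch-2 failure suggests the error is being
  // swallowed somewhere on the close path
}
{code}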



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org