[jira] [Updated] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-18 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10390:
-
Attachment: HDFS-10390-HDFS-9924.005.patch

v005 fixed some test issues.

> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch, HDFS-10390-HDFS-9924.002.patch, 
> HDFS-10390-HDFS-9924.003.patch, HDFS-10390-HDFS-9924.004.patch, 
> HDFS-10390-HDFS-9924.005.patch
>
>
> This proposes implementing asynchronous setAcl/getAclStatus.
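
As a rough illustration of the intended call shape, a caller might drive the two operations like this (class and accessor names follow the HDFS-9924 branch patches and should be treated as unstable; this is a sketch, not the committed API):

{code:java}
import java.util.List;
import java.util.concurrent.Future;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclStatus;
import org.apache.hadoop.hdfs.AsyncDistributedFileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class AsyncAclExample {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS points at an HDFS cluster.
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    AsyncDistributedFileSystem adfs = dfs.getAsyncDistributedFileSystem();

    Path path = new Path("/tmp/acl-demo");
    List<AclEntry> acl =
        AclEntry.parseAclSpec("user::rwx,group::r--,other::---", true);

    // Both calls return a Future immediately; the NameNode RPC completes
    // in the background and get() collects the result or the exception.
    Future<Void> setDone = adfs.setAcl(path, acl);
    setDone.get();
    Future<AclStatus> status = adfs.getAclStatus(path);
    System.out.println(status.get());
  }
}
{code}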






[jira] [Commented] (HDFS-10430) Refactor FileSystem#checkAccessPermissions for better reuse from tests

2016-05-18 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290471#comment-15290471
 ] 

Xiaobing Zhou commented on HDFS-10430:
--

Thanks [~boky01] for the comment.
Changing it to public could be one way; however, it's annotated as 
@InterfaceAudience.Private. It'd be better to add a public counterpart in 
DistributedFileSystem and have it delegate to FileSystem#checkAccessPermissions.
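
Since the method is package-private, one way to realize that delegation is a small same-package shim that re-exports it; a minimal sketch, with a hypothetical class name and the signature described in this thread:

{code:java}
package org.apache.hadoop.fs;

import java.io.IOException;

import org.apache.hadoop.fs.permission.FsAction;

/**
 * Hypothetical shim: lives in org.apache.hadoop.fs so it can reach the
 * package-private FileSystem#checkAccessPermissions and expose it publicly
 * to tests in other modules.
 */
public final class FileSystemAccessChecker {
  private FileSystemAccessChecker() {}

  public static void checkAccessPermissions(FileStatus stat, FsAction mode)
      throws IOException {
    // Delegates to the package-private, @InterfaceAudience.Private method.
    FileSystem.checkAccessPermissions(stat, mode);
  }
}
{code}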

> Refactor FileSystem#checkAccessPermissions for better reuse from tests
> --
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>
> FileSystem#checkAccessPermissions could be used in a bunch of tests from 
> different projects, but it's package-private in hadoop-common, so it is not 
> visible in some cases.






[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-05-18 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290464#comment-15290464
 ] 

Xiaobing Zhou commented on HDFS-9924:
-

Thank you [~mingma].
{quote} BTW, is the API design somewhat independent of whether we use thread 
pool or async RPC client?
{quote}
No, the async DFS API goes through the async RPC path. As you said, a thread 
pool sits at the API level rather than the RPC level. Of course, a thread pool 
can also be used to schedule async DFS calls if that brings benefits.

[~andrew.wang] I will post performance numbers soon. Thanks.
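
To make the contrast concrete, a thread-pool layer can be sketched entirely on top of the blocking FileSystem API, with no RPC changes (the wrapper below is hypothetical and is not what these patches do):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Hypothetical wrapper: async at the API level, still blocking at RPC level. */
public class ThreadPoolAsyncFs {
  private final FileSystem fs;
  private final ExecutorService pool = Executors.newFixedThreadPool(16);

  public ThreadPoolAsyncFs(FileSystem fs) {
    this.fs = fs;
  }

  public Future<Boolean> rename(final Path src, final Path dst) {
    // Each outstanding call parks one pooled thread on the blocking RPC.
    return pool.submit(() -> fs.rename(src, dst));
  }
}
{code}

The async RPC path avoids tying up one thread per outstanding call, which is the main reason the async DFS API is wired to it instead.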

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.






[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290396#comment-15290396
 ] 

Hadoop QA commented on HDFS-9782:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 22s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 19s 
{color} | {color:red} root: patch generated 4 new + 14 unchanged - 5 fixed = 18 
total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 39s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 18s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 105m 29s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804826/HDFS-9782.008.patch |
| JIRA Issue | HDFS-9782 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a81ddede2745 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 010e6ac |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15489/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15489/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15489/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  

[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290322#comment-15290322
 ] 

Hadoop QA commented on HDFS-10390:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s 
{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 8s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 29s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestAsyncDFSRename |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804799/HDFS-10390-HDFS-9924.004.patch
 |
| JIRA Issue | HDFS-10390 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 037bafd1585b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 010e6ac |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15488/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15488/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15488/testReport/ |
| modules | C:  hadoop-hdfs-project/hadoop-hdfs-client   
hadoop-hdfs-project/hadoop-hdfs  U: hadoop-hdfs-project |
| Console 

[jira] [Comment Edited] (HDFS-10400) hdfs dfs -put exits with zero on error

2016-05-18 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290300#comment-15290300
 ] 

Yiqun Lin edited comment on HDFS-10400 at 5/19/16 2:21 AM:
---

{quote}
So I was wondering maybe the hdfs dfs -put was able to recover and finish 
successfully.
{quote}
Agree with [~knoguchi].


was (Author: linyiqun):
{quote}
So I was wondering maybe the hdfs dfs -put was able to recover and finish 
successfully.
{qupte}
Agree with [~knoguchi].

> hdfs dfs -put exits with zero on error
> --
>
> Key: HDFS-10400
> URL: https://issues.apache.org/jira/browse/HDFS-10400
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jo Desmet
>Assignee: Yiqun Lin
> Attachments: HDFS-10400.001.patch, HDFS-10400.002.patch
>
>
> On a filesystem that is about to fill up, execute "hdfs dfs -put" for a file 
> that is big enough to go over the limit. As a result, the command fails with 
> an exception; however, it terminates normally (exit code 0).
> The expectation is that any detectable failure generates an exit code different 
> from zero.
> Documentation on 
> https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put states:
> Exit Code:
> Returns 0 on success and -1 on error. 
> Following is the exception generated: 
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning 
> BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114
> 16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode 
> DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK]






[jira] [Commented] (HDFS-10400) hdfs dfs -put exits with zero on error

2016-05-18 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290300#comment-15290300
 ] 

Yiqun Lin commented on HDFS-10400:
--

{quote}
So I was wondering maybe the hdfs dfs -put was able to recover and finish 
successfully.
{quote}
Agree with [~knoguchi].

> hdfs dfs -put exits with zero on error
> --
>
> Key: HDFS-10400
> URL: https://issues.apache.org/jira/browse/HDFS-10400
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jo Desmet
>Assignee: Yiqun Lin
> Attachments: HDFS-10400.001.patch, HDFS-10400.002.patch
>
>
> On a filesystem that is about to fill up, execute "hdfs dfs -put" for a file 
> that is big enough to go over the limit. As a result, the command fails with 
> an exception; however, it terminates normally (exit code 0).
> The expectation is that any detectable failure generates an exit code different 
> from zero.
> Documentation on 
> https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put states:
> Exit Code:
> Returns 0 on success and -1 on error. 
> Following is the exception generated: 
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning 
> BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114
> 16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode 
> DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK]






[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-05-18 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290292#comment-15290292
 ] 

Ming Ma commented on HDFS-9924:
---

It seems the thread-pool based solution is a layer on top of the FileSystem 
abstraction, and is thus general for all FileSystems.  BTW, is the API design 
somewhat independent of whether we use thread pool or async RPC client?

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.






[jira] [Commented] (HDFS-10429) DataStreamer interrupted warning always appears when using CLI upload file

2016-05-18 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290279#comment-15290279
 ] 

Yiqun Lin commented on HDFS-10429:
--

Hi, [~aplusplus], good catch!  Thanks for reporting this! The patch looks good 
to me.

> DataStreamer interrupted warning  always appears when using CLI upload file
> ---
>
> Key: HDFS-10429
> URL: https://issues.apache.org/jira/browse/HDFS-10429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
>Priority: Minor
> Attachments: HDFS-10429.1.patch
>
>
> Every time I use 'hdfs dfs -put' to upload a file, this warning is printed:
> {code:java}
> 16/05/18 20:57:56 WARN hdfs.DataStreamer: Caught exception
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Thread.join(Thread.java:1245)
>   at java.lang.Thread.join(Thread.java:1319)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:871)
>   at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:519)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:696)
> {code}
> The reason is this: originally, DataStreamer::closeResponder always printed a 
> warning about InterruptedException; since HDFS-9812, 
> DFSOutputStream::closeImpl always forces threads to close, which causes the 
> InterruptedException.
> A simple fix is to log at debug level instead of warning level.
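
A freestanding sketch of the kind of change proposed (the real patch touches DataStreamer#closeResponder; logger type and method shape here are assumptions):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class CloseResponderSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(CloseResponderSketch.class);

  void closeResponder(Thread responder, long timeoutMs) {
    try {
      responder.join(timeoutMs);
    } catch (InterruptedException e) {
      // Expected since HDFS-9812 forces threads closed on closeImpl,
      // so log at debug rather than warn.
      LOG.debug("Caught exception", e);
    }
  }
}
{code}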






[jira] [Commented] (HDFS-10188) libhdfs++: Implement debug allocators

2016-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290266#comment-15290266
 ] 

Hadoop QA commented on HDFS-10188:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 32s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
59s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 5s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 6s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 53s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 49s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 45s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804762/HDFS-10188.HDFS-8707.004.patch
 |
| JIRA Issue | HDFS-10188 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 7392a108dd8b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / d187112 |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15487/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15487/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Implement debug allocators
> -
>
> Key: HDFS-10188
> URL: https://issues.apache.org/jira/browse/HDFS-10188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  

[jira] [Commented] (HDFS-10400) hdfs dfs -put exits with zero on error

2016-05-18 Thread Koji Noguchi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290236#comment-15290236
 ] 

Koji Noguchi commented on HDFS-10400:
-

My point was that the INFO message in the description 
{noformat}
16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning 
BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114
16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode 
DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK]
{noformat}
does not say the write failed.  It only says that writing to one datanode failed. 
So I was wondering maybe the hdfs dfs -put was able to recover and finish 
successfully.


> hdfs dfs -put exits with zero on error
> --
>
> Key: HDFS-10400
> URL: https://issues.apache.org/jira/browse/HDFS-10400
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jo Desmet
>Assignee: Yiqun Lin
> Attachments: HDFS-10400.001.patch, HDFS-10400.002.patch
>
>
> On a filesystem that is about to fill up, execute "hdfs dfs -put" for a file 
> that is big enough to go over the limit. As a result, the command fails with 
> an exception; however, it terminates normally (exit code 0).
> The expectation is that any detectable failure generates an exit code different 
> from zero.
> Documentation on 
> https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put states:
> Exit Code:
> Returns 0 on success and -1 on error. 
> Following is the exception generated: 
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning 
> BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114
> 16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode 
> DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK]






[jira] [Commented] (HDFS-10429) DataStreamer interrupted warning always appears when using CLI upload file

2016-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290221#comment-15290221
 ] 

Hadoop QA commented on HDFS-10429:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 52s 
{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 22s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804790/HDFS-10429.1.patch |
| JIRA Issue | HDFS-10429 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a50c6e86b0f1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 010e6ac |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15486/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15486/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> DataStreamer interrupted warning  always appears when using CLI upload file
> ---
>
> Key: HDFS-10429
> URL: https://issues.apache.org/jira/browse/HDFS-10429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
>Priority: Minor
> Attachments: HDFS-10429.1.patch
>
>
> Every time I use 'hdfs dfs -put' upload 

[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-05-18 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290182#comment-15290182
 ] 

Tsz Wo Nicholas Sze commented on HDFS-9924:
---

> Also, these patches are already landing in branch-2 and trunk while there are 
> outstanding API questions. Can you please revert and move them to a branch? 
> I'm -1 on releasing this in 2.8 since that locks the API.

The patches committed only provide an internal @Unstable API, so they do not 
lock the API.  No?

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.






[jira] [Commented] (HDFS-10400) hdfs dfs -put exits with zero on error

2016-05-18 Thread Jo Desmet (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290165#comment-15290165
 ] 

Jo Desmet commented on HDFS-10400:
--

{quote}
Jo Desmet, how did you verify that dfs -put failed?
{quote}

On Unix, simply by checking {{$?}}. I did confirm that I was able to detect other 
error conditions, like a non-existent source file. Additionally, when the 
untrapped exception occurred, I also verified that the file was either missing or 
incomplete at the target location. [~linyiqun], checking the checksum seems 
extreme if an exception has somehow already been generated?
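
For reference, the shell-level verification looks like this (paths illustrative):

{noformat}
$ hdfs dfs -put bigfile /target/bigfile
# ... DFSClient exception printed here ...
$ echo $?
0    # expected -1 per the documented contract
{noformat}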

> hdfs dfs -put exits with zero on error
> --
>
> Key: HDFS-10400
> URL: https://issues.apache.org/jira/browse/HDFS-10400
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jo Desmet
>Assignee: Yiqun Lin
> Attachments: HDFS-10400.001.patch, HDFS-10400.002.patch
>
>
> On a filesystem that is about to fill up, execute "hdfs dfs -put" for a file 
> that is big enough to go over the limit. As a result, the command fails with 
> an exception; however, it terminates normally (exit code 0).
> The expectation is that any detectable failure generates an exit code different 
> from zero.
> Documentation on 
> https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put states:
> Exit Code:
> Returns 0 on success and -1 on error. 
> Following is the exception generated: 
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning 
> BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114
> 16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode 
> DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK]






[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-18 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290112#comment-15290112
 ] 

Daniel Templeton commented on HDFS-9782:


Thanks for the review, [~kasha]!

bq. Is the empty constructor so Reflection works?

Yep.  I added JavaDocs to clarify that.

The rest of your points should be addressed by the latest patch.

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch, HDFS-9782.007.patch, HDFS-9782.008.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.
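
For context, the sink is wired up through hadoop-metrics2.properties, so a configurable interval would presumably surface there; a sketch under that assumption (the roll-interval key is the one proposed by these patches and may change before commit):

{noformat}
# hadoop-metrics2.properties (sketch)
namenode.sink.file.class=org.apache.hadoop.metrics2.sink.RollingFileSystemSink
namenode.sink.file.basepath=/metrics
# Proposed: roll every 15 minutes instead of the fixed hourly roll.
namenode.sink.file.roll-interval=15m
{noformat}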






[jira] [Updated] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-18 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-9782:
---
Status: Patch Available  (was: Open)

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch, HDFS-9782.007.patch, HDFS-9782.008.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.






[jira] [Updated] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-18 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-9782:
---
Attachment: HDFS-9782.008.patch

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch, HDFS-9782.007.patch, HDFS-9782.008.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.






[jira] [Commented] (HDFS-10430) Refactor FileSystem#checkAccessPermissions for better reuse from tests

2016-05-18 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290009#comment-15290009
 ] 

Andras Bokor commented on HDFS-10430:
-

The access control modifier of the method is default (package-private). By 
changing it to public, it would be visible everywhere hadoop-common 
is available.
What do you mean here? Changing it to public?

> Refactor FileSystem#checkAccessPermissions for better reuse from tests
> --
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>
> FileSystem#checkAccessPermissions could be used in a bunch of tests from 
> different projects, but it's package-private in hadoop-common, so it is not 
> visible in some cases.






[jira] [Commented] (HDFS-10188) libhdfs++: Implement debug allocators

2016-05-18 Thread Xiaowei Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289972#comment-15289972
 ] 

Xiaowei Zhu commented on HDFS-10188:


+1 on the patch HDFS-10188.HDFS-8707.004.patch. I misunderstood the 
class-specific new/delete usage before. 

> libhdfs++: Implement debug allocators
> -
>
> Key: HDFS-10188
> URL: https://issues.apache.org/jira/browse/HDFS-10188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-10188.HDFS-8707.000.patch, 
> HDFS-10188.HDFS-8707.001.patch, HDFS-10188.HDFS-8707.002.patch, 
> HDFS-10188.HDFS-8707.003.patch, HDFS-10188.HDFS-8707.004.patch
>
>
> I propose implementing a set of memory new/delete pairs with additional 
> checking to detect double deletes, read-after-delete, and write-after-delete, 
> to help debug resource ownership issues and prevent new ones from entering 
> the library.
> One of the most common problems we have is use-after-free.  The 
> continuation pattern makes these really tricky to debug because by the time a 
> SIGSEGV is raised, the context of what caused the error is long gone.
> The plan is to add allocators that can be turned on to do the 
> following, in order of runtime cost.
> 1: no-op, forward through to default new/delete
> 2: make sure the memory given to the constructor is dirty, memset free'd 
> memory to 0
> 3: implement operator new with mmap, lock that region of memory once it's 
> been deleted; obviously this can't be left to run forever because the memory 
> is never unmapped
> This should also put some groundwork in place for implementing specialized 
> allocators for tiny objects that we churn through like std::string.
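
As a rough illustration of item 2 above, a class-specific new/delete pair could look like the following (a sketch of the idea only, not the patch itself):

{code:cpp}
#include <cstdlib>
#include <cstring>
#include <new>

// Hand constructors dirty memory and scrub memory on free, so a
// use-after-free reads zeros instead of plausible stale object state.
struct DebugAllocated {
  static void* operator new(std::size_t sz) {
    void* p = std::malloc(sz);
    if (p == nullptr) throw std::bad_alloc();
    std::memset(p, 0xAB, sz);  // dirty pattern before construction
    return p;
  }
  static void operator delete(void* p, std::size_t sz) {
    if (p != nullptr) {
      std::memset(p, 0, sz);   // zero on delete, per item 2
      std::free(p);
    }
  }
};

// Classes that inherit from DebugAllocated pick up the checked path
// for plain new/delete expressions.
{code}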






[jira] [Updated] (HDFS-10431) Refactor Async DFS related tests to reuse shared instance of AsyncDistributedFileSystem instance to speed up tests

2016-05-18 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10431:
-
Description: Limit of max async calls (i.e. ipc.client.async.calls.max) is 
set and cached in ipc.Client. Client instances are cached based on 
SocketFactory. In order to test different cases under various limits, every test 
(e.g. TestAsyncDFSRename and TestAsyncDFS) creates a separate instance of 
MiniDFSCluster, and hence of AsyncDistributedFileSystem. This is not 
efficient in that tests may take a long time to bootstrap MiniDFSClusters. It's 
even worse if the cluster needs to restart in the middle. This proposes 
refactoring to use a shared instance of AsyncDistributedFileSystem for speedup.  
(was: Limit of max async calls(i.e. ipc.client.async.calls.max) is set and 
cached in ipc.Client. Client instances are cached based on SocketFactory. In 
order to test different cases in various limits, every test creates separate 
instance of MiniDFSCluster and that of AsyncDistributedFileSystem hence. This 
is not efficient in that tests may take long time to bootstrap MiniDFSClusters. 
This proposes to do refactoring to use shared instance of 
AsyncDistributedFileSystem for speedup.)

> Refactor Async DFS related tests to reuse shared instance of 
> AsyncDistributedFileSystem instance to speed up tests
> --
>
> Key: HDFS-10431
> URL: https://issues.apache.org/jira/browse/HDFS-10431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>
> Limit of max async calls (i.e. ipc.client.async.calls.max) is set and cached 
> in ipc.Client. Client instances are cached based on SocketFactory. In order 
> to test different cases under various limits, every test (e.g. 
> TestAsyncDFSRename and TestAsyncDFS) creates a separate instance of 
> MiniDFSCluster, and hence of AsyncDistributedFileSystem. This is not 
> efficient in that tests may take a long time to bootstrap MiniDFSClusters. It's 
> even worse if the cluster needs to restart in the middle. This proposes 
> refactoring to use a shared instance of AsyncDistributedFileSystem for speedup.
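
A minimal sketch of the shared-instance pattern, assuming JUnit 4 and the getAsyncDistributedFileSystem accessor from this branch:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.AsyncDistributedFileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class TestAsyncDFSShared {
  private static MiniDFSCluster cluster;
  private static AsyncDistributedFileSystem adfs;

  @BeforeClass
  public static void setUpCluster() throws Exception {
    Configuration conf = new Configuration();
    // Bootstrap one MiniDFSCluster for the whole class instead of per test.
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
    cluster.waitActive();
    DistributedFileSystem dfs = cluster.getFileSystem();
    adfs = dfs.getAsyncDistributedFileSystem();
  }

  @AfterClass
  public static void tearDownCluster() {
    if (cluster != null) {
      cluster.shutdown();
    }
  }

  // Individual @Test methods share adfs rather than rebuilding it,
  // which avoids repeated cluster bootstrap and restart costs.
}
{code}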






[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

  Assignee: (was: Andras Bokor)
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HDFS-1073)

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-10425.01.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of IDE and checkstyle warnings.






[jira] [Updated] (HDFS-10433) Make retry also work well for Async DFS

2016-05-18 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10433:
-
Description: In the current Async DFS implementation, file system calls are 
invoked and immediately return a Future to clients. Clients call Future#get 
to retrieve the final results. Future#get internally invokes a chain of callbacks 
residing in ClientNamenodeProtocolTranslatorPB, ProtobufRpcEngine and 
ipc.Client. This callback path bypasses the original retry layer/logic designed 
for synchronous DFS. This proposes refactoring to make retry also work for 
Async DFS.

> Make retry also work well for Async DFS
> 
>
> Key: HDFS-10433
> URL: https://issues.apache.org/jira/browse/HDFS-10433
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>
> In the current Async DFS implementation, file system calls are invoked and 
> immediately return a Future to clients. Clients call Future#get to retrieve 
> the final results. Future#get internally invokes a chain of callbacks residing in 
> ClientNamenodeProtocolTranslatorPB, ProtobufRpcEngine and ipc.Client. This 
> callback path bypasses the original retry layer/logic designed for 
> synchronous DFS. This proposes refactoring to make retry also work for Async 
> DFS.






[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-05-18 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289952#comment-15289952
 ] 

Andrew Wang commented on HDFS-9924:
---

In my earlier comment, I asked for the following content to be covered in the 
document:

bq. It'd help to go over the pros/cons of the different API options. 
ListenableFuture for instance has also been brought up. Reviewing some other 
async RPC interfaces for comparison would also be helpful. This design doc is 
also the place to discuss Colin's question about performance compared to a 
thread pool. If that option is available to us, it's preferable since it does 
not involve expanding the API.

Neither of the two things I asked for is present in this design doc.

On the topic of API, I don't want to pigeonhole us with the Java Future API. As 
others have mentioned here, it doesn't allow for callback chaining, which makes 
it much less useful for async-style programming. Other popular async APIs use 
StumbleUpon's Deferred to support callbacks (HBase, Kudu). If we were to use 
Deferred, we should shade it so it doesn't lead to any classpath issues.

Another sensible choice would be CompletableFuture from JDK8. This means 
AsyncFileSystem would be 3.x-only, but considering we're actively trying to 
release 3.x, it's not a bad release vehicle. This also would give downstreams 
time to try it out and give feedback.
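
To make the chaining point concrete: a plain Future can only be polled or blocked on, while CompletableFuture lets follow-up work be attached without a waiting thread (a generic JDK8 sketch, not a proposed HDFS signature):

{code:java}
import java.util.concurrent.CompletableFuture;

public class ChainingExample {
  public static void main(String[] args) {
    CompletableFuture<String> rename =
        CompletableFuture.supplyAsync(() -> "renamed /src -> /dst");

    // Callbacks run when the result arrives; nothing parks in get().
    rename.thenApply(String::toUpperCase)
          .thenAccept(System.out::println)
          .join();
  }
}
{code}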

On the topic of performance, can you please provide benchmarks?

Also, these patches are already landing in branch-2 and trunk while there are 
outstanding API questions. Can you please revert and move them to a branch? I'm 
-1 on releasing this in 2.8 since that locks the API.

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10433) Make retry also work well for Async DFS

2016-05-18 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-10433:


 Summary: Make retry also work well for Async DFS
 Key: HDFS-10433
 URL: https://issues.apache.org/jira/browse/HDFS-10433
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs
Reporter: Xiaobing Zhou









[jira] [Updated] (HDFS-10431) Refactor Async DFS related tests to reuse shared instance of AsyncDistributedFileSystem instance to speed up tests

2016-05-18 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10431:
-
Description: Limit of max async calls (i.e. ipc.client.async.calls.max) is 
set and cached in ipc.Client. Client instances are cached based on 
SocketFactory. In order to test different cases under various limits, every test 
creates a separate instance of MiniDFSCluster, and hence of 
AsyncDistributedFileSystem. This is not efficient in that tests may take 
a long time to bootstrap MiniDFSClusters. This proposes refactoring to use 
a shared instance of AsyncDistributedFileSystem for speedup.  (was: Limit of max 
async calls(i.e. ipc.client.async.calls.max) is set and cached in ipc.Client. 
Client instances are cached based on SocketFactory. In order to test different 
cases in various limits, every test creates separate instance of MiniDFSCluster 
and that of AsyncDistributedFileSystem hence. This is not efficient in that 
tests may take long time to bootstrap MiniDFSClusters. This proposes to do 
refactoring to use shared AsyncDistributedFileSystem)

> Refactor Async DFS related tests to reuse a shared 
> AsyncDistributedFileSystem instance to speed up tests
> --
>
> Key: HDFS-10431
> URL: https://issues.apache.org/jira/browse/HDFS-10431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>
> The limit on max async calls (i.e. ipc.client.async.calls.max) is set and 
> cached in ipc.Client. Client instances are cached based on SocketFactory. In 
> order to test different cases with various limits, every test creates a 
> separate instance of MiniDFSCluster and hence of AsyncDistributedFileSystem. 
> This is inefficient in that tests may take a long time to bootstrap 
> MiniDFSClusters. This proposes refactoring to use a shared instance of 
> AsyncDistributedFileSystem for speedup.
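
A minimal sketch of the shared-instance pattern proposed here, assuming JUnit 4; 
the getAsyncDistributedFileSystem() accessor name is an assumption based on the 
HDFS-9924 branch:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.AsyncDistributedFileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class TestAsyncDFSShared {
  private static MiniDFSCluster cluster;
  private static AsyncDistributedFileSystem adfs;

  @BeforeClass
  public static void setUpOnce() throws Exception {
    Configuration conf = new Configuration();
    // The limit is cached in ipc.Client, so set it once for the shared client.
    conf.setInt("ipc.client.async.calls.max", 100);
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
    cluster.waitActive();
    DistributedFileSystem dfs = cluster.getFileSystem();
    adfs = dfs.getAsyncDistributedFileSystem();
  }

  @AfterClass
  public static void tearDownOnce() {
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
{code}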



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10431) Refactor Async DFS related tests to reuse a shared AsyncDistributedFileSystem instance to speed up tests

2016-05-18 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10431:
-
Summary: Refactor Async DFS related tests to reuse a shared 
AsyncDistributedFileSystem instance to speed up tests  (was: Refactor Async DFS 
related tests to reuse shared AsyncDistributedFileSystem instance to speed up 
tests)

> Refactor Async DFS related tests to reuse a shared 
> AsyncDistributedFileSystem instance to speed up tests
> --
>
> Key: HDFS-10431
> URL: https://issues.apache.org/jira/browse/HDFS-10431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>
> The limit on max async calls (i.e. ipc.client.async.calls.max) is set and 
> cached in ipc.Client. Client instances are cached based on SocketFactory. In 
> order to test different cases with various limits, every test creates a 
> separate instance of MiniDFSCluster and hence of AsyncDistributedFileSystem. 
> This is inefficient in that tests may take a long time to bootstrap 
> MiniDFSClusters. This proposes refactoring to use a shared 
> AsyncDistributedFileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10431) Refactor Async DFS related tests to reuse shared AsyncDistributedFileSystem instance to speed up tests

2016-05-18 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10431:
-
Description: The limit on max async calls (i.e. ipc.client.async.calls.max) is 
set and cached in ipc.Client. Client instances are cached based on 
SocketFactory. In order to test different cases with various limits, every test 
creates a separate instance of MiniDFSCluster and hence of 
AsyncDistributedFileSystem. This is inefficient in that tests may take a long 
time to bootstrap MiniDFSClusters. This proposes refactoring to use a 
shared 

> Refactor Async DFS related tests to reuse shared AsyncDistributedFileSystem 
> instance to speed up tests
> --
>
> Key: HDFS-10431
> URL: https://issues.apache.org/jira/browse/HDFS-10431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>
> The limit on max async calls (i.e. ipc.client.async.calls.max) is set and 
> cached in ipc.Client. Client instances are cached based on SocketFactory. In 
> order to test different cases with various limits, every test creates a 
> separate instance of MiniDFSCluster and hence of AsyncDistributedFileSystem. 
> This is inefficient in that tests may take a long time to bootstrap 
> MiniDFSClusters. This proposes refactoring to use a shared 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10431) Refactor Async DFS related tests to reuse shared AsyncDistributedFileSystem instance to speed up tests

2016-05-18 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10431:
-
Description: The limit on max async calls (i.e. ipc.client.async.calls.max) is 
set and cached in ipc.Client. Client instances are cached based on 
SocketFactory. In order to test different cases with various limits, every test 
creates a separate instance of MiniDFSCluster and hence of 
AsyncDistributedFileSystem. This is inefficient in that tests may take a long 
time to bootstrap MiniDFSClusters. This proposes refactoring to use a shared 
AsyncDistributedFileSystem  (was: Limit of max async calls(i.e. 
ipc.client.async.calls.max) is set and cached in ipc.Client. Client instances 
are cached based on SocketFactory. In order to test different cases in various 
limits, every test creates separate instance of MiniDFSCluster and that of 
AsyncDistributedFileSystem hence. This is not efficient in that tests may take 
long time to bootstrap MiniDFSClusters. This proposes to do refactoring to use 
shared )

> Refactor Async DFS related tests to reuse shared AsyncDistributedFileSystem 
> instance to speed up tests
> --
>
> Key: HDFS-10431
> URL: https://issues.apache.org/jira/browse/HDFS-10431
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>
> The limit on max async calls (i.e. ipc.client.async.calls.max) is set and 
> cached in ipc.Client. Client instances are cached based on SocketFactory. In 
> order to test different cases with various limits, every test creates a 
> separate instance of MiniDFSCluster and hence of AsyncDistributedFileSystem. 
> This is inefficient in that tests may take a long time to bootstrap 
> MiniDFSClusters. This proposes refactoring to use a shared 
> AsyncDistributedFileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10432) "dfs.namenode.hosts" and "dfs.namenode.host.exclude" deprecated?

2016-05-18 Thread Ernesto Peralta (JIRA)
Ernesto Peralta created HDFS-10432:
--

 Summary: "dfs.namenode.hosts" and  "dfs.namenode.host.exclude" 
deprecated?
 Key: HDFS-10432
 URL: https://issues.apache.org/jira/browse/HDFS-10432
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.6.2
Reporter: Ernesto Peralta
Priority: Trivial


Hello,

-https://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-common/ClusterSetup.html
 

In the configuration for the NameNode in "hdfs-site.xml" we can see the 
parameters "dfs.namenode.hosts" and "dfs.namenode.hosts.exclude". 
But the parameters "dfs.namenode.hosts" and "dfs.namenode.host.exclude" don't 
appear in the "hdfs-default.xml" file. 

Are the parameters "dfs.hosts" and "dfs.hosts.exclude" the adequate and correct 
ones? Are the parameters "dfs.namenode.hosts" and "dfs.namenode.host.exclude" 
deprecated?

-hdfs-default.xml, parameters:
http://hadoop.apache.org/docs/r2.6.2/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10431) Refactor Async DFS related tests to reuse shared AsyncDistributedFileSystem instance to speed up tests

2016-05-18 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-10431:


 Summary: Refactor Async DFS related tests to reuse shared 
AsyncDistributedFileSystem instance to speed up tests
 Key: HDFS-10431
 URL: https://issues.apache.org/jira/browse/HDFS-10431
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs
Reporter: Xiaobing Zhou






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10430) Refactor FileSystem#checkAccessPermissions for better reuse from tests

2016-05-18 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10430:
-
Description: FileSystem#checkAccessPermissions could be used in a bunch of 
tests from different projects, but it's in hadoop-common, which is not visible 
in some cases.
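
One possible shape of the refactoring, sketched with a hypothetical helper 
placed in the same package so the package-private method is reachable; the 
class name and placement are assumptions, not the eventual patch:
{code:java}
package org.apache.hadoop.fs;  // same package as FileSystem

import java.io.IOException;
import org.apache.hadoop.fs.permission.FsAction;

/** Hypothetical test helper exposing the package-private check for reuse. */
public final class FileSystemTestAccess {
  private FileSystemTestAccess() {}

  public static void checkAccessPermissions(FileStatus stat, FsAction mode)
      throws IOException {
    FileSystem.checkAccessPermissions(stat, mode);
  }
}
{code}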

> Refactor FileSystem#checkAccessPermissions for better reuse from tests
> --
>
> Key: HDFS-10430
> URL: https://issues.apache.org/jira/browse/HDFS-10430
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>
> FileSystem#checkAccessPermissions could be used in a bunch of tests from 
> different projects, but it's in hadoop-common, which is not visible in some 
> cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10188) libhdfs++: Implement debug allocators

2016-05-18 Thread Xiaowei Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289915#comment-15289915
 ] 

Xiaowei Zhu commented on HDFS-10188:


What does sizeof(clazz) return? Will it always be the size of the memory 
pointed to by ptr? 

> libhdfs++: Implement debug allocators
> -
>
> Key: HDFS-10188
> URL: https://issues.apache.org/jira/browse/HDFS-10188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-10188.HDFS-8707.000.patch, 
> HDFS-10188.HDFS-8707.001.patch, HDFS-10188.HDFS-8707.002.patch, 
> HDFS-10188.HDFS-8707.003.patch, HDFS-10188.HDFS-8707.004.patch
>
>
> I propose implementing a set of memory new/delete pairs with additional 
> checking to detect double deletes, read-after-delete, and write-after-deletes 
> to help debug resource ownership issues and prevent new ones from entering 
> the library.
> One of the most common issues we have is use-after-free.  The 
> continuation pattern makes these really tricky to debug because by the time a 
> SIGSEGV is raised the context of what caused the error is long gone.
> The plan is to add allocators that can be turned on that can do the 
> following, in order of runtime cost.
> 1: no-op, forward through to default new/delete
> 2: make sure the memory given to the constructor is dirty, memset free'd 
> memory to 0
> 3: implement operator new with mmap, lock that region of memory once it's 
> been deleted; obviously this can't be left to run forever because the memory 
> is never unmapped
> This should also put some groundwork in place for implementing specialized 
> allocators for tiny objects that we churn through like std::string.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-18 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289908#comment-15289908
 ] 

Xiaobing Zhou commented on HDFS-10390:
--

I filed HDFS-10430 to separately address the copy-pasted code in 
TestAsyncDFSRename#checkAccessPermissions.

> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch, HDFS-10390-HDFS-9924.002.patch, 
> HDFS-10390-HDFS-9924.003.patch, HDFS-10390-HDFS-9924.004.patch
>
>
> This is proposed to implement asynchronous setAcl/getAclStatus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10430) Refactor FileSystem#checkAccessPermissions for better reuse from tests

2016-05-18 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-10430:


 Summary: Refactor FileSystem#checkAccessPermissions for better 
reuse from tests
 Key: HDFS-10430
 URL: https://issues.apache.org/jira/browse/HDFS-10430
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs
Reporter: Xiaobing Zhou






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2173) saveNamespace should not throw IOE when only one storage directory fails to write VERSION file

2016-05-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289905#comment-15289905
 ] 

Hudson commented on HDFS-2173:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9818 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9818/])
HDFS-2173. saveNamespace should not throw IOE when only one storage (wang: rev 
010e6ac328855bad59f138b6aeaec535272f448c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java


> saveNamespace should not throw IOE when only one storage directory fails to 
> write VERSION file
> --
>
> Key: HDFS-2173
> URL: https://issues.apache.org/jira/browse/HDFS-2173
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Edit log branch (HDFS-1073), 0.23.0
>Reporter: Todd Lipcon
>Assignee: Andras Bokor
> Fix For: 2.9.0
>
> Attachments: HDFS-2173.01.patch, HDFS-2173.02.patch, 
> HDFS-2173.02.patch, HDFS-2173.03.patch, HDFS-2173.04.patch
>
>
> This JIRA tracks a TODO in TestSaveNamespace. Currently, if, while writing 
> the VERSION files in the storage directories, one of the directories fails, 
> the entire operation throws IOE. This is unnecessary -- instead, just that 
> directory should be marked as failed.
> This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it 
> does not ever cause data loss, and would rarely occur in practice (the dir 
> would have to fail between writing the fsimage file and writing VERSION)
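
A hedged sketch of the per-directory handling described above; every type and 
method name below is a placeholder, not NNStorage's actual internals:
{code:java}
import java.io.IOException;
import java.util.List;

class VersionWriteSketch {
  interface StorageDirectory {}  // placeholder

  void writeAll(List<StorageDirectory> dirs) {
    for (StorageDirectory sd : dirs) {
      try {
        writeVersionFile(sd);     // placeholder for the real VERSION write
      } catch (IOException ioe) {
        markDirectoryFailed(sd);  // mark only this directory, don't rethrow
      }
    }
  }

  void writeVersionFile(StorageDirectory sd) throws IOException { /* ... */ }
  void markDirectoryFailed(StorageDirectory sd) { /* remove from active set */ }
}
{code}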



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-18 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289891#comment-15289891
 ] 

Xiaobing Zhou commented on HDFS-10390:
--

v004 patch:
1. removed LOG.info
2. removed incrementWriteOps from getAclStatus in AsyncDistributedFileSystem
3. added storageStatistics to the corresponding functions in 
AsyncDistributedFileSystem
4. undid the testAsyncAPIWithException refactoring, keeping it in its original 
position (i.e. TestAsyncDFSRename)
5. fixed some checkstyle issues.

Thank you for the review!

> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch, HDFS-10390-HDFS-9924.002.patch, 
> HDFS-10390-HDFS-9924.003.patch, HDFS-10390-HDFS-9924.004.patch
>
>
> This is proposed to implement asynchronous setAcl/getAclStatus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-18 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10390:
-
Attachment: HDFS-10390-HDFS-9924.004.patch

> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch, HDFS-10390-HDFS-9924.002.patch, 
> HDFS-10390-HDFS-9924.003.patch, HDFS-10390-HDFS-9924.004.patch
>
>
> This is proposed to implement asynchronous setAcl/getAclStatus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-2173) saveNamespace should not throw IOE when only one storage directory fails to write VERSION file

2016-05-18 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289852#comment-15289852
 ] 

Andras Bokor commented on HDFS-2173:


Thanks a lot [~andrew.wang].

> saveNamespace should not throw IOE when only one storage directory fails to 
> write VERSION file
> --
>
> Key: HDFS-2173
> URL: https://issues.apache.org/jira/browse/HDFS-2173
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Edit log branch (HDFS-1073), 0.23.0
>Reporter: Todd Lipcon
>Assignee: Andras Bokor
> Fix For: 2.9.0
>
> Attachments: HDFS-2173.01.patch, HDFS-2173.02.patch, 
> HDFS-2173.02.patch, HDFS-2173.03.patch, HDFS-2173.04.patch
>
>
> This JIRA tracks a TODO in TestSaveNamespace. Currently, if, while writing 
> the VERSION files in the storage directories, one of the directories fails, 
> the entire operation throws IOE. This is unnecessary -- instead, just that 
> directory should be marked as failed.
> This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it 
> does not ever cause data loss, and would rarely occur in practice (the dir 
> would have to fail between writing the fsimage file and writing VERSION)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-2173) saveNamespace should not throw IOE when only one storage directory fails to write VERSION file

2016-05-18 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-2173:
--
   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Thanks for fixing this long-standing issue Andras! Committed to trunk and 
branch-2.

> saveNamespace should not throw IOE when only one storage directory fails to 
> write VERSION file
> --
>
> Key: HDFS-2173
> URL: https://issues.apache.org/jira/browse/HDFS-2173
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Edit log branch (HDFS-1073), 0.23.0
>Reporter: Todd Lipcon
>Assignee: Andras Bokor
> Fix For: 2.9.0
>
> Attachments: HDFS-2173.01.patch, HDFS-2173.02.patch, 
> HDFS-2173.02.patch, HDFS-2173.03.patch, HDFS-2173.04.patch
>
>
> This JIRA tracks a TODO in TestSaveNamespace. Currently, if, while writing 
> the VERSION files in the storage directories, one of the directories fails, 
> the entire operation throws IOE. This is unnecessary -- instead, just that 
> directory should be marked as failed.
> This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it 
> does not ever cause data loss, and would rarely occur in practice (the dir 
> would have to fail between writing the fsimage file and writing VERSION)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10429) DataStreamer interrupted warning always appears when using CLI upload file

2016-05-18 Thread Zhiyuan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhiyuan Yang updated HDFS-10429:

Attachment: HDFS-10429.1.patch

> DataStreamer interrupted warning  always appears when using CLI upload file
> ---
>
> Key: HDFS-10429
> URL: https://issues.apache.org/jira/browse/HDFS-10429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
>Priority: Minor
> Attachments: HDFS-10429.1.patch
>
>
> Every time I use 'hdfs dfs -put' to upload a file, this warning is printed:
> {code:java}
> 16/05/18 20:57:56 WARN hdfs.DataStreamer: Caught exception
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Thread.join(Thread.java:1245)
>   at java.lang.Thread.join(Thread.java:1319)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:871)
>   at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:519)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:696)
> {code}
> The reason is this: originally, DataStreamer::closeResponder always prints a 
> warning about InterruptedException; since HDFS-9812, 
> DFSOutputStream::closeImpl always forces threads to close, which causes an 
> InterruptedException.
> A simple fix is to log at debug level instead of warn level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10429) DataStreamer interrupted warning always appears when using CLI upload file

2016-05-18 Thread Zhiyuan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhiyuan Yang reassigned HDFS-10429:
---

Assignee: Zhiyuan Yang

> DataStreamer interrupted warning  always appears when using CLI upload file
> ---
>
> Key: HDFS-10429
> URL: https://issues.apache.org/jira/browse/HDFS-10429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
>Priority: Minor
> Attachments: HDFS-10429.1.patch
>
>
> Every time I use 'hdfs dfs -put' to upload a file, this warning is printed:
> {code:java}
> 16/05/18 20:57:56 WARN hdfs.DataStreamer: Caught exception
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Thread.join(Thread.java:1245)
>   at java.lang.Thread.join(Thread.java:1319)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:871)
>   at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:519)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:696)
> {code}
> The reason is this: originally, DataStreamer::closeResponder always prints a 
> warning about InterruptedException; since HDFS-9812, 
> DFSOutputStream::closeImpl always forces threads to close, which causes an 
> InterruptedException.
> A simple fix is to log at debug level instead of warn level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10429) DataStreamer interrupted warning always appears when using CLI upload file

2016-05-18 Thread Zhiyuan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhiyuan Yang updated HDFS-10429:

Status: Patch Available  (was: Open)

> DataStreamer interrupted warning  always appears when using CLI upload file
> ---
>
> Key: HDFS-10429
> URL: https://issues.apache.org/jira/browse/HDFS-10429
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
>Priority: Minor
> Attachments: HDFS-10429.1.patch
>
>
> Every time I use 'hdfs dfs -put' to upload a file, this warning is printed:
> {code:java}
> 16/05/18 20:57:56 WARN hdfs.DataStreamer: Caught exception
> java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Thread.join(Thread.java:1245)
>   at java.lang.Thread.join(Thread.java:1319)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:871)
>   at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:519)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:696)
> {code}
> The reason is this: originally, DataStreamer::closeResponder always prints a 
> warning about InterruptedException; since HDFS-9812, 
> DFSOutputStream::closeImpl always forces threads to close, which causes an 
> InterruptedException.
> A simple fix is to log at debug level instead of warn level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10429) DataStreamer interrupted warning always appears when using CLI upload file

2016-05-18 Thread Zhiyuan Yang (JIRA)
Zhiyuan Yang created HDFS-10429:
---

 Summary: DataStreamer interrupted warning  always appears when 
using CLI upload file
 Key: HDFS-10429
 URL: https://issues.apache.org/jira/browse/HDFS-10429
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Zhiyuan Yang
Priority: Minor


Every time I use 'hdfs dfs -put' to upload a file, this warning is printed:
{code:java}
16/05/18 20:57:56 WARN hdfs.DataStreamer: Caught exception
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1245)
at java.lang.Thread.join(Thread.java:1319)
at 
org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:871)
at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:519)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:696)
{code}

The reason is this: originally, DataStreamer::closeResponder always prints a 
warning about InterruptedException; since HDFS-9812, DFSOutputStream::closeImpl 
always forces threads to close, which causes an InterruptedException.

A simple fix is to log at debug level instead of warn level.
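
A sketch of the proposed change; the method shape follows the stack trace above 
but is simplified, not the actual patch:
{code:java}
// In DataStreamer#closeResponder (sketch):
private void closeResponder() {
  if (response != null) {
    try {
      response.join();
    } catch (InterruptedException e) {
      // Expected when DFSOutputStream#closeImpl force-closes threads
      // (HDFS-9812), so debug level is sufficient.
      LOG.debug("Caught exception", e);
    } finally {
      response = null;
    }
  }
}
{code}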



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-3296) Running libhdfs tests in mac fails

2016-05-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289800#comment-15289800
 ] 

Allen Wittenauer commented on HDFS-3296:


Awesome thanks.

yeah, totally understand about the libhdfs issue. :)

> Running libhdfs tests in mac fails
> --
>
> Key: HDFS-3296
> URL: https://issues.apache.org/jira/browse/HDFS-3296
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: Amareshwari Sriramadasu
>Assignee: Chris Nauroth
> Attachments: HDFS-3296.001.patch, HDFS-3296.002.patch, 
> HDFS-3296.003.patch
>
>
> Running "ant -Dcompile.c++=true -Dlibhdfs=true test-c++-libhdfs" on Mac fails 
> with the following error:
> {noformat}
>  [exec] dyld: lazy symbol binding failed: Symbol not found: 
> _JNI_GetCreatedJavaVMs
>  [exec]   Referenced from: 
> /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib
>  [exec]   Expected in: flat namespace
>  [exec] 
>  [exec] dyld: Symbol not found: _JNI_GetCreatedJavaVMs
>  [exec]   Referenced from: 
> /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib
>  [exec]   Expected in: flat namespace
>  [exec] 
>  [exec] 
> /Users/amareshwari.sr/workspace/hadoop/src/c++/libhdfs/tests/test-libhdfs.sh: 
> line 122: 39485 Trace/BPT trap: 5   CLASSPATH=$HADOOP_CONF_DIR:$CLASSPATH 
> LD_PRELOAD="$LIB_JVM_DIR/libjvm.so:$LIBHDFS_INSTALL_DIR/libhdfs.so:" 
> $LIBHDFS_BUILD_DIR/$HDFS_TEST
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-3296) Running libhdfs tests in mac fails

2016-05-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-3296:

Attachment: HDFS-3296.003.patch

[~aw], I have filed HADOOP-13177 with a one-line patch for the Surefire 
configuration change to set {{DYLD_LIBRARY_PATH}}.  We can commit that one to 
move ahead with Jenkins runs on OS X.

I'm also attaching a rebased v003 patch here for just the change in 
hadoop-hdfs-native-client.  We won't want to commit this one, because this will 
just make {{test_libhdfs_zerocopy_hdfs_static}} hang indefinitely.  We need to 
get to the bottom of the domain socket issues on OS X before we can commit this 
one.

> Running libhdfs tests in mac fails
> --
>
> Key: HDFS-3296
> URL: https://issues.apache.org/jira/browse/HDFS-3296
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: Amareshwari Sriramadasu
>Assignee: Chris Nauroth
> Attachments: HDFS-3296.001.patch, HDFS-3296.002.patch, 
> HDFS-3296.003.patch
>
>
> Running "ant -Dcompile.c++=true -Dlibhdfs=true test-c++-libhdfs" on Mac fails 
> with the following error:
> {noformat}
>  [exec] dyld: lazy symbol binding failed: Symbol not found: 
> _JNI_GetCreatedJavaVMs
>  [exec]   Referenced from: 
> /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib
>  [exec]   Expected in: flat namespace
>  [exec] 
>  [exec] dyld: Symbol not found: _JNI_GetCreatedJavaVMs
>  [exec]   Referenced from: 
> /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib
>  [exec]   Expected in: flat namespace
>  [exec] 
>  [exec] 
> /Users/amareshwari.sr/workspace/hadoop/src/c++/libhdfs/tests/test-libhdfs.sh: 
> line 122: 39485 Trace/BPT trap: 5   CLASSPATH=$HADOOP_CONF_DIR:$CLASSPATH 
> LD_PRELOAD="$LIB_JVM_DIR/libjvm.so:$LIBHDFS_INSTALL_DIR/libhdfs.so:" 
> $LIBHDFS_BUILD_DIR/$HDFS_TEST
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-3296) Running libhdfs tests in mac fails

2016-05-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-3296:

Status: Open  (was: Patch Available)

> Running libhdfs tests in mac fails
> --
>
> Key: HDFS-3296
> URL: https://issues.apache.org/jira/browse/HDFS-3296
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: Amareshwari Sriramadasu
>Assignee: Chris Nauroth
> Attachments: HDFS-3296.001.patch, HDFS-3296.002.patch, 
> HDFS-3296.003.patch
>
>
> Running "ant -Dcompile.c++=true -Dlibhdfs=true test-c++-libhdfs" on Mac fails 
> with the following error:
> {noformat}
>  [exec] dyld: lazy symbol binding failed: Symbol not found: 
> _JNI_GetCreatedJavaVMs
>  [exec]   Referenced from: 
> /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib
>  [exec]   Expected in: flat namespace
>  [exec] 
>  [exec] dyld: Symbol not found: _JNI_GetCreatedJavaVMs
>  [exec]   Referenced from: 
> /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib
>  [exec]   Expected in: flat namespace
>  [exec] 
>  [exec] 
> /Users/amareshwari.sr/workspace/hadoop/src/c++/libhdfs/tests/test-libhdfs.sh: 
> line 122: 39485 Trace/BPT trap: 5   CLASSPATH=$HADOOP_CONF_DIR:$CLASSPATH 
> LD_PRELOAD="$LIB_JVM_DIR/libjvm.so:$LIBHDFS_INSTALL_DIR/libhdfs.so:" 
> $LIBHDFS_BUILD_DIR/$HDFS_TEST
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10415) TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2

2016-05-18 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289769#comment-15289769
 ] 

Colin Patrick McCabe commented on HDFS-10415:
-

bq. As to Steve Loughran's concern: if the stats have nothing to do with this 
unit test, we can consider avoiding them. I'm more in favor of this approach.

Sure.  Thanks for the explanation.

bq. there's another option, you know. Do the stats init in the constructor 
rather than in initialize(). There is no information used in setting up 
DFSClient.storageStatistics; it's only ever written to once. Move it to the 
constructor and make it final, and maybe this problem will go away (maybe; 
mocks are a PITA)

It seems like this would prevent us from using the Configuration object in the 
future when creating stats, right?  I think we should keep this flexibility.

This whole problem arises because the FileSystem constructor doesn't require a 
Configuration and it should, which leads to the "construct then initialize" 
idiom.  If it just took a Configuration in the first place we could initialize 
everything in the constructor.  grumble grumble
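
For illustration, the two idioms being contrasted, using a hypothetical class 
rather than DFSClient itself:
{code:java}
import org.apache.hadoop.conf.Configuration;

class ConstructThenInitialize {
  private Object stats;                  // non-final; null until initialize() runs
  void initialize(Configuration conf) {  // easy to skip when tests mock the object
    stats = new Object();
  }
}

class ConstructorInit {
  private final Object stats;            // final; always set
  ConstructorInit() {
    stats = new Object();                // but no Configuration available here
  }
}
{code}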

> TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2
> --
>
> Key: HDFS-10415
> URL: https://issues.apache.org/jira/browse/HDFS-10415
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
> Environment: jenkins
>Reporter: Sangjin Lee
>Assignee: Mingliang Liu
> Attachments: HDFS-10415-branch-2.000.patch, 
> HDFS-10415-branch-2.001.patch, HDFS-10415.000.patch
>
>
> {noformat}
> Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 51.096 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem
> testDFSCloseOrdering(org.apache.hadoop.hdfs.TestDistributedFileSystem)  Time 
> elapsed: 0.045 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:790)
>   at 
> org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1417)
>   at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2084)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1187)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSCloseOrdering(TestDistributedFileSystem.java:217)
> {noformat}
> This is with Java 8 on Mac. It passes fine on trunk. I haven't tried other 
> combinations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10383) Safely close resources in DFSTestUtil

2016-05-18 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289732#comment-15289732
 ] 

Mingliang Liu commented on HDFS-10383:
--

Thanks [~xyao] for your review and commit. Thanks [~walter.k.su] and 
[~arpitagarwal] for the review and discussion.

> Safely close resources in DFSTestUtil
> -
>
> Key: HDFS-10383
> URL: https://issues.apache.org/jira/browse/HDFS-10383
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-10383-branch-2.000.patch, HDFS-10383.000.patch, 
> HDFS-10383.001.patch, HDFS-10383.002.patch, HDFS-10383.003.patch
>
>
> There are a few of methods in {{DFSTestUtil}} that do not close the resource 
> safely, or elegantly. We can use the try-with-resource statement to address 
> this problem.
> Specially, as {{DFSTestUtil}} is popularly used in test, we need to preserve 
> any exceptions thrown during the processing of the resource while still 
> guaranteeing it's closed finally. Take for example,the current implementation 
> of {{DFSTestUtil#createFile()}} closes the FSDataOutputStream in the 
> {{finally}} block, and when closing if the internal 
> {{DFSOutputStream#close()}} throws any exception, which it often does, the 
> exception thrown during the processing will be lost. See this [test 
> failure|https://builds.apache.org/job/PreCommit-HADOOP-Build/9320/testReport/org.apache.hadoop.hdfs/TestAsyncDFSRename/testAggressiveConcurrentAsyncRenameWithOverwrite/],
>  and we have to guess what was the root cause.
> Using try-with-resource, we can close the resources safely, and the 
> exceptions thrown both in processing and closing will be available (closing 
> exception will be suppressed). Besides the try-with-resource, if a stream is 
> not necessary, don't create/close it.
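
A short sketch of the try-with-resource pattern described above, assuming an 
existing FileSystem and Path:
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class SafeWriteSketch {
  // If write() throws and close() then also throws, close()'s exception is
  // attached as suppressed instead of replacing the root cause.
  static void writeSafely(FileSystem fs, Path path, byte[] data) throws IOException {
    try (FSDataOutputStream out = fs.create(path)) {
      out.write(data);
    }
  }
}
{code}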



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10188) libhdfs++: Implement debug allocators

2016-05-18 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-10188:
---
Attachment: HDFS-10188.HDFS-8707.004.patch

The new patch looks good. I went to fix a typo where it was doing something like
{code}
  memset(ptr, 0, sizeof(ptr))
{code}
and tried to change to
{code}
  memset(ptr, 0, sizeof(decltype(this)))
{code}

I forgot that member new/delete needed to be static, and because of that they 
don't get to see 'this', so sizeof(decltype(this)) doesn't compile.  Sorry for 
the bad advice there.

The quick fix was to pass in the class name as a macro argument; not ideal but 
not terrible.  The API would look like
{code}
class UsedAfterFreeSometimes {
  MEMCHECKED_CLASS(UsedAfterFreeSometimes)
}
{code}
Unfortunately it looks like getting an implicit typename isn't really possible 
in static methods short of even more macro tricks.
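
A rough sketch of what the macro could expand to (simplified; the real patch 
layers the dirty-memory and mmap modes described in this issue on top):
{code}
#include <cstddef>
#include <cstdlib>
#include <cstring>

// The class name is passed in because member operator new/delete are
// implicitly static and cannot see 'this'. Note sizeof(clazz) matches the
// allocation only when the object is deleted as its most-derived type.
#define MEMCHECKED_CLASS(clazz)                                       \
 public:                                                              \
  static void *operator new(std::size_t size) {                      \
    void *p = std::malloc(size);                                     \
    std::memset(p, 1, size); /* hand the constructor dirty memory */ \
    return p;                                                        \
  }                                                                  \
  static void operator delete(void *p) {                             \
    std::memset(p, 0, sizeof(clazz)); /* zero the freed object */    \
    std::free(p);                                                    \
  }

class UsedAfterFreeSometimes {
  MEMCHECKED_CLASS(UsedAfterFreeSometimes)
};

int main() {
  auto *obj = new UsedAfterFreeSometimes();
  delete obj;  // object memory is zeroed before free
  return 0;
}
{code}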

Xiaowei, could you take a look at these changes and see if they are still in 
line with what you did?

> libhdfs++: Implement debug allocators
> -
>
> Key: HDFS-10188
> URL: https://issues.apache.org/jira/browse/HDFS-10188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-10188.HDFS-8707.000.patch, 
> HDFS-10188.HDFS-8707.001.patch, HDFS-10188.HDFS-8707.002.patch, 
> HDFS-10188.HDFS-8707.003.patch, HDFS-10188.HDFS-8707.004.patch
>
>
> I propose implementing a set of memory new/delete pairs with additional 
> checking to detect double deletes, read-after-delete, and write-after-deletes 
> to help debug resource ownership issues and prevent new ones from entering 
> the library.
> One of the most common issues we have is use-after-free.  The 
> continuation pattern makes these really tricky to debug because by the time a 
> SIGSEGV is raised the context of what caused the error is long gone.
> The plan is to add allocators that can be turned on that can do the 
> following, in order of runtime cost.
> 1: no-op, forward through to default new/delete
> 2: make sure the memory given to the constructor is dirty, memset free'd 
> memory to 0
> 3: implement operator new with mmap, lock that region of memory once it's 
> been deleted; obviously this can't be left to run forever because the memory 
> is never unmapped
> This should also put some groundwork in place for implementing specialized 
> allocators for tiny objects that we churn through like std::string.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-3296) Running libhdfs tests in mac fails

2016-05-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289662#comment-15289662
 ] 

Allen Wittenauer commented on HDFS-3296:


bq. The patch here won't help with any compilation problems. 

Correct.  I'm well past that point and now trying to get unit tests to work.  
Native unit tests are failing in hadoop-common because they can't find 
libhadoop.so.  I ended up writing pretty much the exact same change to 
hadoop-project/pom.xml (since replaced with what you have here) and am manually 
patching the pom prior to launching maven on the build host.  We're now left 
with a bunch of other problems, including this likely related one:

(from 
https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-trunk-osx-java8/14/console
 )

{code}
testInvalidOperations(org.apache.hadoop.net.unix.TestDomainSocket)  Time 
elapsed: 0.012 sec  <<< FAILURE!
java.lang.AssertionError: Expected to find 'connect(2) error: ' but got 
unexpected exception:java.net.SocketException: error computing UNIX domain 
socket path: path too long.  The longest UNIX domain socket path possible on 
this host is 103 bytes.
at org.apache.hadoop.net.unix.DomainSocket.connect0(Native Method)
at 
org.apache.hadoop.net.unix.DomainSocket.connect(DomainSocket.java:256)
at 
org.apache.hadoop.net.unix.TestDomainSocket.testInvalidOperations(TestDomainSocket.java:266)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

at org.apache.hadoop.net.unix.DomainSocket.connect0(Native Method)
at 
org.apache.hadoop.net.unix.DomainSocket.connect(DomainSocket.java:256)
at 
org.apache.hadoop.net.unix.TestDomainSocket.testInvalidOperations(TestDomainSocket.java:266)
{code}

I think I'm going to set up a precommit job for Mac so that folks can manually 
trigger it to test Mac-specific patches.

bq. target the Mac OS X 10.10 SDK

The Mac mini in the build infrastructure is a 10.9 box.




> Running libhdfs tests in mac fails
> --
>
> Key: HDFS-3296
> URL: https://issues.apache.org/jira/browse/HDFS-3296
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: Amareshwari Sriramadasu
>Assignee: Chris Nauroth
> Attachments: HDFS-3296.001.patch, HDFS-3296.002.patch
>
>
> Running "ant -Dcompile.c++=true -Dlibhdfs=true test-c++-libhdfs" on Mac fails 
> with the following error:
> {noformat}
>  [exec] dyld: lazy symbol binding failed: Symbol not found: 
> _JNI_GetCreatedJavaVMs
>  [exec]   Referenced from: 
> /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib
>  [exec]   Expected in: flat namespace
>  [exec] 
>  [exec] dyld: Symbol not found: _JNI_GetCreatedJavaVMs
>  [exec]   Referenced from: 
> /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib
>  [exec]   Expected in: flat namespace
>  [exec] 
>  [exec] 
> /Users/amareshwari.sr/workspace/hadoop/src/c++/libhdfs/tests/test-libhdfs.sh: 
> line 122: 39485 Trace/BPT trap: 5   CLASSPATH=$HADOOP_CONF_DIR:$CLASSPATH 
> LD_PRELOAD="$LIB_JVM_DIR/libjvm.so:$LIBHDFS_INSTALL_DIR/libhdfs.so:" 
> $LIBHDFS_BUILD_DIR/$HDFS_TEST
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10428) Kms server UNAUTHENTICATED

2016-05-18 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDFS-10428.
---
Resolution: Not A Problem

> Kms server UNAUTHENTICATED
> --
>
> Key: HDFS-10428
> URL: https://issues.apache.org/jira/browse/HDFS-10428
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: lushuai
>
> UNAUTHENTICATED RemoteHost:${ip} Method:OPTIONS 
> URL:http://kms-server/kms/v1/?op=GETDELEGATIONTOKEN=yarn 
> ErrorMsg:'Authentication required'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10428) Kms server UNAUTHENTICATED

2016-05-18 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289582#comment-15289582
 ] 

Xiaoyu Yao commented on HDFS-10428:
---

[~lushuai], the log message in the description is benign and is part of the 
SPNEGO handshake. I will resolve this as Not a Problem; feel free to reopen if 
you think differently. 

> Kms server UNAUTHENTICATED
> --
>
> Key: HDFS-10428
> URL: https://issues.apache.org/jira/browse/HDFS-10428
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: lushuai
>
> UNAUTHENTICATED RemoteHost:${ip} Method:OPTIONS 
> URL:http://kms-server/kms/v1/?op=GETDELEGATIONTOKEN=yarn 
> ErrorMsg:'Authentication required'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-3296) Running libhdfs tests in mac fails

2016-05-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289618#comment-15289618
 ] 

Chris Nauroth commented on HDFS-3296:
-

Hi [~aw].  Thanks for setting up a nightly on OS X!  Even just a basic build to 
catch compilation errors is a huge help.

The patch here won't help with any compilation problems.  This patch was just a 
small step towards fixing the libhdfs tests on Mac by setting up 
{{DYLD_LIBRARY_PATH}} with the right shared library dependencies.  It isn't 
sufficient though, because we still have a compatibility problem around domain 
socket usage.  This manifests as test failures in {{TestDomainSocketWatcher}} 
and unfortunately some of the libhdfs tests just hang.

If the immediate goal is a basic build on OS X, then what I'm currently seeing 
on trunk is a compilation error in the container executor.  This was introduced 
by patch YARN-4594, and I commented on the situation here:

https://issues.apache.org/jira/browse/YARN-4594?focusedCommentId=15139679=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15139679

To summarize that discussion, if I can get the build environment to target the 
Mac OS X 10.10 SDK, then I suspect it would work.  I wasn't able to follow up 
on it though.  I'm curious if you have any thoughts on this.

> Running libhdfs tests in mac fails
> --
>
> Key: HDFS-3296
> URL: https://issues.apache.org/jira/browse/HDFS-3296
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: Amareshwari Sriramadasu
>Assignee: Chris Nauroth
> Attachments: HDFS-3296.001.patch, HDFS-3296.002.patch
>
>
> Running "ant -Dcompile.c++=true -Dlibhdfs=true test-c++-libhdfs" on Mac fails 
> with the following error:
> {noformat}
>  [exec] dyld: lazy symbol binding failed: Symbol not found: 
> _JNI_GetCreatedJavaVMs
>  [exec]   Referenced from: 
> /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib
>  [exec]   Expected in: flat namespace
>  [exec] 
>  [exec] dyld: Symbol not found: _JNI_GetCreatedJavaVMs
>  [exec]   Referenced from: 
> /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib
>  [exec]   Expected in: flat namespace
>  [exec] 
>  [exec] 
> /Users/amareshwari.sr/workspace/hadoop/src/c++/libhdfs/tests/test-libhdfs.sh: 
> line 122: 39485 Trace/BPT trap: 5   CLASSPATH=$HADOOP_CONF_DIR:$CLASSPATH 
> LD_PRELOAD="$LIB_JVM_DIR/libjvm.so:$LIBHDFS_INSTALL_DIR/libhdfs.so:" 
> $LIBHDFS_BUILD_DIR/$HDFS_TEST
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10383) Safely close resources in DFSTestUtil

2016-05-18 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10383:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.8.0
Target Version/s:   (was: )
  Status: Resolved  (was: Patch Available)

Thanks [~liuml07] for the branch-2.8 patch. 
I've committed this to branch-2 and branch-2.8.

> Safely close resources in DFSTestUtil
> -
>
> Key: HDFS-10383
> URL: https://issues.apache.org/jira/browse/HDFS-10383
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-10383-branch-2.000.patch, HDFS-10383.000.patch, 
> HDFS-10383.001.patch, HDFS-10383.002.patch, HDFS-10383.003.patch
>
>
> There are a few of methods in {{DFSTestUtil}} that do not close the resource 
> safely, or elegantly. We can use the try-with-resource statement to address 
> this problem.
> Specially, as {{DFSTestUtil}} is popularly used in test, we need to preserve 
> any exceptions thrown during the processing of the resource while still 
> guaranteeing it's closed finally. Take for example,the current implementation 
> of {{DFSTestUtil#createFile()}} closes the FSDataOutputStream in the 
> {{finally}} block, and when closing if the internal 
> {{DFSOutputStream#close()}} throws any exception, which it often does, the 
> exception thrown during the processing will be lost. See this [test 
> failure|https://builds.apache.org/job/PreCommit-HADOOP-Build/9320/testReport/org.apache.hadoop.hdfs/TestAsyncDFSRename/testAggressiveConcurrentAsyncRenameWithOverwrite/],
>  and we have to guess what was the root cause.
> Using try-with-resource, we can close the resources safely, and the 
> exceptions thrown both in processing and closing will be available (closing 
> exception will be suppressed). Besides the try-with-resource, if a stream is 
> not necessary, don't create/close it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-3296) Running libhdfs tests in mac fails

2016-05-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289561#comment-15289561
 ] 

Allen Wittenauer commented on HDFS-3296:


I'm in the process of setting up a nightly build for OS X on the ASF build 
infrastructure.  We definitely need this patch just for hadoop-common to build, 
so let's get it rebased and committed.  [~cnauroth], if you want, you or I can 
open another JIRA for that.

> Running libhdfs tests in mac fails
> --
>
> Key: HDFS-3296
> URL: https://issues.apache.org/jira/browse/HDFS-3296
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: Amareshwari Sriramadasu
>Assignee: Chris Nauroth
> Attachments: HDFS-3296.001.patch, HDFS-3296.002.patch
>
>
> Running "ant -Dcompile.c++=true -Dlibhdfs=true test-c++-libhdfs" on Mac fails 
> with the following error:
> {noformat}
>  [exec] dyld: lazy symbol binding failed: Symbol not found: 
> _JNI_GetCreatedJavaVMs
>  [exec]   Referenced from: 
> /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib
>  [exec]   Expected in: flat namespace
>  [exec] 
>  [exec] dyld: Symbol not found: _JNI_GetCreatedJavaVMs
>  [exec]   Referenced from: 
> /Users/amareshwari.sr/workspace/hadoop/build/c++/Mac_OS_X-x86_64-64/lib/libhdfs.0.dylib
>  [exec]   Expected in: flat namespace
>  [exec] 
>  [exec] 
> /Users/amareshwari.sr/workspace/hadoop/src/c++/libhdfs/tests/test-libhdfs.sh: 
> line 122: 39485 Trace/BPT trap: 5   CLASSPATH=$HADOOP_CONF_DIR:$CLASSPATH 
> LD_PRELOAD="$LIB_JVM_DIR/libjvm.so:$LIBHDFS_INSTALL_DIR/libhdfs.so:" 
> $LIBHDFS_BUILD_DIR/$HDFS_TEST
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10415) TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2

2016-05-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289563#comment-15289563
 ] 

Steve Loughran commented on HDFS-10415:
---

there's another option, you know. Do the stats init in the constructor rather 
than in initialize(). There is no information used in setting up 
{{DFSClient.storageStatistics}}; it's only ever written to once. Move it to the 
constructor and make it final, and maybe this problem will go away (maybe; 
mocks are a PITA)

> TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2
> --
>
> Key: HDFS-10415
> URL: https://issues.apache.org/jira/browse/HDFS-10415
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
> Environment: jenkins
>Reporter: Sangjin Lee
>Assignee: Mingliang Liu
> Attachments: HDFS-10415-branch-2.000.patch, 
> HDFS-10415-branch-2.001.patch, HDFS-10415.000.patch
>
>
> {noformat}
> Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 51.096 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem
> testDFSCloseOrdering(org.apache.hadoop.hdfs.TestDistributedFileSystem)  Time 
> elapsed: 0.045 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:790)
>   at 
> org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1417)
>   at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2084)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1187)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSCloseOrdering(TestDistributedFileSystem.java:217)
> {noformat}
> This is with Java 8 on Mac. It passes fine on trunk. I haven't tried other 
> combinations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10415) TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2

2016-05-18 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289540#comment-15289540
 ] 

Mingliang Liu commented on HDFS-10415:
--

Hi [~cmccabe],

Thanks for the comment. Your proposal is quite similar to my solution 2 (see 
the comment above).

The problem is that, though it is indeed the more natural fix, the 
{{initialize()}} method is never called after {{MyDistributedFileSystem}} is 
constructed, because the object is not created by the static factory method 
{{FileSystem#get()}}. As a result, just implementing {{initialize()}} is not 
enough; we also have to call it.
As to the implementation,
# the {{dfs}} object is mocked in the {{MyDistributedFileSystem}} constructor. 
The {{DistributedFileSystem#initialize()}} method will reset this value, which 
is generally expected. We need to re-mock {{dfs}} in {{initialize()}} after 
calling {{super.initialize()}} (see the sketch below).
# {{super.initialize()}} will take care of {{statistics}} and 
{{storageStatistics}}, so we don't need to create them explicitly after that, I 
believe?
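
A minimal sketch of point 1 (hypothetical test code inside 
{{MyDistributedFileSystem}}; assumes a static import of Mockito's {{mock()}}):
{code}
@Override
public void initialize(URI uri, Configuration conf) throws IOException {
  // super.initialize() resets dfs and sets up statistics/storageStatistics,
  // so re-mock dfs only after it returns.
  super.initialize(uri, conf);
  dfs = mock(DFSClient.class);
}
{code}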

As to [~ste...@apache.org]'s concern: if the stats have nothing to do with this 
unit test, we can consider avoiding them. I'm more in favor of this approach.

My concern is: why do {{trunk}} and {{branch-2}} have the following code 
difference? If the javadoc is true, it should hold for both branches. I must 
have missed something?
{code}
+// Symlink resolution doesn't work with a mock, since it doesn't
+// have a valid Configuration to resolve paths to the right FileSystem.
+// Just call the DFSClient directly to register the delete
+@Override
+public boolean delete(Path f, final boolean recursive) throws IOException {
+  return dfs.delete(f.toUri().getPath(), recursive);
+}
{code}

> TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2
> --
>
> Key: HDFS-10415
> URL: https://issues.apache.org/jira/browse/HDFS-10415
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
> Environment: jenkins
>Reporter: Sangjin Lee
>Assignee: Mingliang Liu
> Attachments: HDFS-10415-branch-2.000.patch, 
> HDFS-10415-branch-2.001.patch, HDFS-10415.000.patch
>
>
> {noformat}
> Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 51.096 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem
> testDFSCloseOrdering(org.apache.hadoop.hdfs.TestDistributedFileSystem)  Time 
> elapsed: 0.045 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:790)
>   at 
> org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1417)
>   at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2084)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1187)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSCloseOrdering(TestDistributedFileSystem.java:217)
> {noformat}
> This is with Java 8 on Mac. It passes fine on trunk. I haven't tried other 
> combinations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10382) In WebHDFS numeric usernames do not work with DataNode

2016-05-18 Thread ramtin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289536#comment-15289536
 ] 

ramtin commented on HDFS-10382:
---

Thanks [~aw] for your comment. I think you are right, but this patch just tries 
to fix the HDFS-4983 problem of not reading the domain pattern from the 
configuration, and that problem can happen even for non-numeric usernames.
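
For reference, the pattern made configurable by HDFS-4983 can be relaxed in a 
test like this (a sketch; the value shown is only an example that permits a 
leading digit):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

class WebHdfsUserPatternExample {
  static Configuration relaxedConf() {
    Configuration conf = new HdfsConfiguration();
    // Allow usernames that start with a digit, e.g. "0123".
    conf.set("dfs.webhdfs.user.provider.user.pattern",
        "^[A-Za-z0-9_][A-Za-z0-9._-]*[$]?$");
    return conf;
  }
}
{code}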


> In WebHDFS numeric usernames do not work with DataNode
> --
>
> Key: HDFS-10382
> URL: https://issues.apache.org/jira/browse/HDFS-10382
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HADOOP-10382.patch
>
>
> Operations like {code:java}curl -i 
> -L "http://<host>:<port>/webhdfs/v1/<path>?user.name=0123&op=OPEN"{code} that are 
> directed to the DataNode fail because the suggested domain pattern is not read 
> from the configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-18 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289537#comment-15289537
 ] 

Tsz Wo Nicholas Sze commented on HDFS-10390:


- Need to remove the new LOG.info message from Client.
- getAclStatus should not incrementWriteOps
- storageStatistics was recently added to DistributedFileSystem.  We should 
also add it to AsyncDistributedFileSystem.
- Please don't refactor the test; it makes the new test harder to review.  We may 
do it in a separate JIRA, if necessary.
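
A sketch of the {{incrementWriteOps}} point (hypothetical; assumes the async 
path mirrors how the synchronous {{DistributedFileSystem#getAclStatus}} counts 
operations):
{code}
// getAclStatus is a read, so bump the read counter, not the write counter.
statistics.incrementReadOps(1);
storageStatistics.incrementOpCounter(OpType.GET_ACL_STATUS);
{code}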

Thanks.


> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch, HDFS-10390-HDFS-9924.002.patch, 
> HDFS-10390-HDFS-9924.003.patch
>
>
> This is proposed to implement asynchronous setAcl/getAclStatus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9833) Erasure coding: recomputing block checksum on the fly by reconstructing the missed/corrupt block data

2016-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289479#comment-15289479
 ] 

Hadoop QA commented on HDFS-9833:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-hdfs-project: patch generated 17 new + 104 
unchanged - 0 fixed = 121 total (was 104) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 32s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s 
{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 52s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 21s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  org.apache.hadoop.hdfs.protocolPB.PBHelperClient.convert(byte[]) invokes 
inefficient new Integer(int) constructor; use Integer.valueOf(int) instead  At 
PBHelperClient.java:constructor; use Integer.valueOf(int) instead  At 
PBHelperClient.java:[line 868] |
| Failed junit tests | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.TestAsyncDFSRename |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804697/HDFS-9833-01.patch |
| JIRA Issue | HDFS-9833 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 763ddb0c750f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build 

[jira] [Commented] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2016-05-18 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289441#comment-15289441
 ] 

Arpit Agarwal commented on HDFS-9226:
-

Also thanks to [~vinayrpet] and [~iwasakims] for the earlier reviews and 
diagnosis.

> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.8.0
>
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch, HDFS-9226.005.patch, 
> HDFS-9926.006.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> 

[jira] [Commented] (HDFS-10428) Kms server UNAUTHENTICATED

2016-05-18 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289438#comment-15289438
 ] 

Rushabh S Shah commented on HDFS-10428:
---

[~lushuai]: can you please give more context?

> Kms server UNAUTHENTICATED
> --
>
> Key: HDFS-10428
> URL: https://issues.apache.org/jira/browse/HDFS-10428
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: lushuai
>
> UNAUTHENTICATED RemoteHost:${ip} Method:OPTIONS 
> URL:http://kms-server/kms/v1/?op=GETDELEGATIONTOKEN&renewer=yarn 
> ErrorMsg:'Authentication required'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10428) Kms server UNAUTHENTICATED

2016-05-18 Thread lushuai (JIRA)
lushuai created HDFS-10428:
--

 Summary: Kms server UNAUTHENTICATED
 Key: HDFS-10428
 URL: https://issues.apache.org/jira/browse/HDFS-10428
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.1
Reporter: lushuai


UNAUTHENTICATED RemoteHost:${ip} Method:OPTIONS 
URL:http://kms-server/kms/v1/?op=GETDELEGATIONTOKEN&renewer=yarn 
ErrorMsg:'Authentication required'




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10360) DataNode may format directory and lose blocks if current/VERSION is missing

2016-05-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289342#comment-15289342
 ] 

Hudson commented on HDFS-10360:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #9814 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9814/])
HDFS-10360. DataNode may format directory and lose blocks if current/VERSION 
is missing (lei: rev cf552aa87b4c47f0c73f51f44f3bc1d267c524cf)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataStorage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java


> DataNode may format directory and lose blocks if current/VERSION is missing
> ---
>
> Key: HDFS-10360
> URL: https://issues.apache.org/jira/browse/HDFS-10360
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: dataloss, datanode
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HDFS-10360.001.patch, HDFS-10360.002.patch, 
> HDFS-10360.003.patch, HDFS-10360.004.patch, HDFS-10360.004.patch, 
> HDFS-10360.005.patch, HDFS-10360.007.patch
>
>
> Under certain circumstances, if the current/VERSION of a storage directory is 
> missing, DataNode may format the storage directory even though _block files 
> are not missing_.
> This is very easy to reproduce. Simply launch an HDFS cluster and create some 
> files. Delete current/VERSION, and restart the data node.
> After the restart, the data node will format the directory and remove all 
> existing block files:
> {noformat}
> 2016-05-03 12:57:15,387 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Lock on /data/dfs/dn/in_use.lock acquired by nodename 
> 5...@weichiu-dn-2.vpc.cloudera.com
> 2016-05-03 12:57:15,389 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Storage directory /data/dfs/dn is not formatted for 
> BP-787466439-172.26.24.43-1462305406642
> 2016-05-03 12:57:15,389 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting ...
> 2016-05-03 12:57:15,464 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Analyzing storage directories for bpid BP-787466439-172.26.24.43-1462305406642
> 2016-05-03 12:57:15,464 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Locking is disabled for 
> /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642
> 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Block pool storage directory 
> /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642 is not formatted 
> for BP-787466439-172
> .26.24.43-1462305406642
> 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting ...
> 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-787466439-172.26.24.43-1462305406642 directory 
> /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642/current
> {noformat}
> The bug is: DataNode assumes that if none of {{current/VERSION}}, 
> {{previous/}}, {{previous.tmp/}}, {{removed.tmp/}}, {{finalized.tmp/}} and 
> {{lastcheckpoint.tmp/}} exists, the storage directory contains nothing 
> important to HDFS and decides to format it. 
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java#L526-L545
> However, block files may still exist, and in my opinion, we should do 
> everything possible to retain the block files.
> I have two suggestions:
> # check if the {{current/}} directory is empty. If not, throw an 
> InconsistentFSStateException in {{Storage#analyzeStorage}} instead of 
> assuming it is not formatted (see the sketch below). Or,
> # In {{Storage#clearDirectory}}, before it formats the storage directory, 
> rename or move {{current/}} directory. Also, log whatever is being 
> renamed/moved.
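
A minimal sketch of suggestion #1 (a hypothetical guard inside 
{{Storage#analyzeStorage}}; the real method would need to handle more states):
{code}
// If current/VERSION is gone but current/ still has content, refuse to
// treat the directory as unformatted rather than silently reformatting it.
File current = new File(root, STORAGE_DIR_CURRENT);
String[] contents = current.list();
if (contents != null && contents.length > 0) {
  throw new InconsistentFSStateException(root,
      "version file is missing but " + current + " is not empty");
}
{code}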



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10415) TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2

2016-05-18 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289312#comment-15289312
 ] 

Colin Patrick McCabe commented on HDFS-10415:
-

Thanks for looking at this.

So basically the problem is that we're attempting to do something in the 
constructor of our {{DistributedFileSystem}} subclass that requires that the FS 
already be initialized.  Why not just override the {{initialize}} method with 
something like:

{code}
@Override
public void initialize(URI uri, Configuration conf) throws IOException {
  super.initialize(uri, conf);
  statistics = new FileSystem.Statistics("myhdfs"); // can't mock finals
}
{code}

That seems like the most natural fix since it's not doing "weird stuff" that we 
don't do outside unit tests.

I don't feel strongly about this, though, any of the solutions proposed here 
seems like it would work.

> TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2
> --
>
> Key: HDFS-10415
> URL: https://issues.apache.org/jira/browse/HDFS-10415
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
> Environment: jenkins
>Reporter: Sangjin Lee
>Assignee: Mingliang Liu
> Attachments: HDFS-10415-branch-2.000.patch, 
> HDFS-10415-branch-2.001.patch, HDFS-10415.000.patch
>
>
> {noformat}
> Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 51.096 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem
> testDFSCloseOrdering(org.apache.hadoop.hdfs.TestDistributedFileSystem)  Time 
> elapsed: 0.045 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:790)
>   at 
> org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1417)
>   at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2084)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1187)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSCloseOrdering(TestDistributedFileSystem.java:217)
> {noformat}
> This is with Java 8 on Mac. It passes fine on trunk. I haven't tried other 
> combinations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9926) ozone : Add volume commands to CLI

2016-05-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289298#comment-15289298
 ] 

Hudson commented on HDFS-9926:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9813 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9813/])
HDFS-9926. MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils (arp: 
rev f4d8fde8224a7154965239932a18dd563fb60f23)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeadDatanode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplication.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestTriggerBlockReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStorageReport.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeReconfiguration.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMetricsLogger.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/BlockReportTestBase.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDnRespectsBlockReportSplitThreshold.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeleteRace.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestHeartbeatHandling.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestOverReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestIncrementalBlockReports.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeHotSwapVolumes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCorruption.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/InternalDataNodeTestUtils.java


> ozone : Add volume commands to CLI
> --
>
> Key: HDFS-9926
> URL: https://issues.apache.org/jira/browse/HDFS-9926
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-9926-HDFS-7240.001.patch, 
> HDFS-9926-HDFS-7240.002.patch, HDFS-9926-HDFS-7240.003.patch
>
>
> Adds a cli tool which supports volume commands



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10360) DataNode may format directory and lose blocks if current/VERSION is missing

2016-05-18 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289277#comment-15289277
 ] 

Wei-Chiu Chuang commented on HDFS-10360:


Thanks very much to Eddy for reviewing and committing this patch!

> DataNode may format directory and lose blocks if current/VERSION is missing
> ---
>
> Key: HDFS-10360
> URL: https://issues.apache.org/jira/browse/HDFS-10360
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: dataloss, datanode
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HDFS-10360.001.patch, HDFS-10360.002.patch, 
> HDFS-10360.003.patch, HDFS-10360.004.patch, HDFS-10360.004.patch, 
> HDFS-10360.005.patch, HDFS-10360.007.patch
>
>
> Under certain circumstances, if the current/VERSION of a storage directory is 
> missing, DataNode may format the storage directory even though _block files 
> are not missing_.
> This is very easy to reproduce. Simply launch an HDFS cluster and create some 
> files. Delete current/VERSION, and restart the data node.
> After the restart, the data node will format the directory and remove all 
> existing block files:
> {noformat}
> 2016-05-03 12:57:15,387 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Lock on /data/dfs/dn/in_use.lock acquired by nodename 
> 5...@weichiu-dn-2.vpc.cloudera.com
> 2016-05-03 12:57:15,389 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Storage directory /data/dfs/dn is not formatted for 
> BP-787466439-172.26.24.43-1462305406642
> 2016-05-03 12:57:15,389 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting ...
> 2016-05-03 12:57:15,464 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Analyzing storage directories for bpid BP-787466439-172.26.24.43-1462305406642
> 2016-05-03 12:57:15,464 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Locking is disabled for 
> /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642
> 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Block pool storage directory 
> /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642 is not formatted 
> for BP-787466439-172
> .26.24.43-1462305406642
> 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting ...
> 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-787466439-172.26.24.43-1462305406642 directory 
> /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642/current
> {noformat}
> The bug is: DataNode assumes that if none of {{current/VERSION}}, 
> {{previous/}}, {{previous.tmp/}}, {{removed.tmp/}}, {{finalized.tmp/}} and 
> {{lastcheckpoint.tmp/}} exists, the storage directory contains nothing 
> important to HDFS and decides to format it. 
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java#L526-L545
> However, block files may still exist, and in my opinion, we should do 
> everything possible to retain the block files.
> I have two suggestions:
> # check if the {{current/}} directory is empty. If not, throw an 
> InconsistentFSStateException in {{Storage#analyzeStorage}} instead of 
> assuming it is not formatted. Or,
> # In {{Storage#clearDirectory}}, before it formats the storage directory, 
> rename or move {{current/}} directory. Also, log whatever is being 
> renamed/moved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9833) Erasure coding: recomputing block checksum on the fly by reconstructing the missed/corrupt block data

2016-05-18 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-9833:
---
Target Version/s: 3.0.0-alpha1
  Status: Patch Available  (was: Open)

> Erasure coding: recomputing block checksum on the fly by reconstructing the 
> missed/corrupt block data
> -
>
> Key: HDFS-9833
> URL: https://issues.apache.org/jira/browse/HDFS-9833
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Rakesh R
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-9833-00-draft.patch, HDFS-9833-01.patch
>
>
> As discussed in HDFS-8430 and HDFS-9694, to compute a striped file checksum 
> even when some of the striped blocks are missing, we need to consider 
> recomputing the block checksum on the fly for the missed/corrupt blocks. To 
> recompute the block checksum, the block data needs to be reconstructed by 
> erasure decoding, and the main code needed for the block reconstruction could 
> be borrowed from HDFS-9719, the refactoring of the existing 
> {{ErasureCodingWorker}}. In the EC worker, reconstructed blocks need to be 
> written out to target datanodes, but in this case the remote write isn't 
> necessary, as the reconstructed block data is only used to recompute the 
> checksum.
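
A hypothetical outline of that idea ({{decodeErasedBlock}} is an invented 
stand-in for the decoder that HDFS-9719 factors out of 
{{ErasureCodingWorker}}):
{code}
import org.apache.hadoop.util.DataChecksum;

// Reconstruct the erased block in memory and checksum it locally;
// no write to a target datanode is needed.
class BlockChecksumSketch {
  static DataChecksum checksumReconstructed(byte[][] aliveBlocks,
      int erasedIndex, int bytesPerChecksum) {
    byte[] recovered = decodeErasedBlock(aliveBlocks, erasedIndex);
    DataChecksum checksum = DataChecksum.newDataChecksum(
        DataChecksum.Type.CRC32C, bytesPerChecksum);
    checksum.update(recovered, 0, recovered.length);
    return checksum;
  }

  static byte[] decodeErasedBlock(byte[][] aliveBlocks, int erasedIndex) {
    throw new UnsupportedOperationException("stand-in for erasure decoding");
  }
}
{code}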



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9833) Erasure coding: recomputing block checksum on the fly by reconstructing the missed/corrupt block data

2016-05-18 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-9833:
---
Attachment: HDFS-9833-01.patch

> Erasure coding: recomputing block checksum on the fly by reconstructing the 
> missed/corrupt block data
> -
>
> Key: HDFS-9833
> URL: https://issues.apache.org/jira/browse/HDFS-9833
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Rakesh R
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-9833-00-draft.patch, HDFS-9833-01.patch
>
>
> As discussed in HDFS-8430 and HDFS-9694, to compute a striped file checksum 
> even when some of the striped blocks are missing, we need to consider 
> recomputing the block checksum on the fly for the missed/corrupt blocks. To 
> recompute the block checksum, the block data needs to be reconstructed by 
> erasure decoding, and the main code needed for the block reconstruction could 
> be borrowed from HDFS-9719, the refactoring of the existing 
> {{ErasureCodingWorker}}. In the EC worker, reconstructed blocks need to be 
> written out to target datanodes, but in this case the remote write isn't 
> necessary, as the reconstructed block data is only used to recompute the 
> checksum.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8829) Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol sockets and allow configuring auto-tuning

2016-05-18 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289263#comment-15289263
 ] 

Xiao Chen commented on HDFS-8829:
-

Thanks Colin. I just find it weird to have a method that is non-public in the 
interface but public in subclasses. (Maybe we could make the subclass non-public 
then. But no big deal :))

I agree we can leave it as is, and relax it to public if there's a need in the 
future.

> Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol 
> sockets and allow configuring auto-tuning
> -
>
> Key: HDFS-8829
> URL: https://issues.apache.org/jira/browse/HDFS-8829
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.3.0, 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Fix For: 2.8.0
>
> Attachments: HDFS-8829.0001.patch, HDFS-8829.0002.patch, 
> HDFS-8829.0003.patch, HDFS-8829.0004.patch, HDFS-8829.0005.patch, 
> HDFS-8829.0006.patch
>
>
> {code:java}
>   private void initDataXceiver(Configuration conf) throws IOException {
> // find free port or use privileged port provided
> TcpPeerServer tcpPeerServer;
> if (secureResources != null) {
>   tcpPeerServer = new TcpPeerServer(secureResources);
> } else {
>   tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
>   DataNode.getStreamingAddr(conf));
> }
> 
> tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
> {code}
> The last line sets SO_RCVBUF explicitly, thus disabling tcp auto-tuning on 
> some system.
> Shall we make this behavior configurable?
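
A sketch of the configurable behavior (hypothetical key name; a non-positive 
value would leave TCP auto-tuning untouched):
{code}
// Inside initDataXceiver(): only set SO_RCVBUF when a positive size is
// configured; otherwise keep the OS auto-tuning. The key name is illustrative.
int rcvBufSize = conf.getInt("dfs.datanode.transfer.socket.recv.buffer.size",
    HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
if (rcvBufSize > 0) {
  tcpPeerServer.setReceiveBufferSize(rcvBufSize);
}
{code}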



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10423) Increase default value of httpfs maxHttpHeaderSize

2016-05-18 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289244#comment-15289244
 ] 

Xiaoyu Yao edited comment on HDFS-10423 at 5/18/16 4:30 PM:


[~npopa], thanks for reporting the issue. This can be done similarly to 
HADOOP-12764 for kms. There was a problem assigning the JIRA to you; 
below is the error message I got: "User 'Nicolae Popa' does not exist." 
There have been some JIRA permission issues lately. Feel free to post your patch 
and we will fix the permission issue separately.


was (Author: xyao):
[~npopa], thanks for reporting the issue. This can be done similarly to 
HADOOP-12764 for kms. I've assigned the JIRA to you. 

> Increase default value of httpfs maxHttpHeaderSize
> --
>
> Key: HDFS-10423
> URL: https://issues.apache.org/jira/browse/HDFS-10423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.4
>Reporter: Nicolae Popa
>Priority: Minor
>
> The Tomcat default value of maxHttpHeaderSize is 8k, which is too low for 
> certain Hadoop workloads in Kerberos-enabled environments. This JIRA will 
> change it to 65536 in server.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2016-05-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289251#comment-15289251
 ] 

Josh Elser commented on HDFS-9226:
--

bq. Test failures were not caused by the patch (verified all tests passed for 
me locally with the v06 patch applied). The checkstyle warnings were not 
introduced by your patch either.

Oh, ok.

bq. Thank you for taking care of this Josh Elser.

Any time. Thanks for the commit.

> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.8.0
>
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch, HDFS-9226.005.patch, 
> HDFS-9926.006.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 

[jira] [Updated] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2016-05-18 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9226:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk through branch-2.8.

Thank you for taking care of this [~elserj].

> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.8.0
>
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch, HDFS-9226.005.patch, 
> HDFS-9926.006.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

[jira] [Commented] (HDFS-10423) Increase default value of httpfs maxHttpHeaderSize

2016-05-18 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289244#comment-15289244
 ] 

Xiaoyu Yao commented on HDFS-10423:
---

[~npopa], thanks for reporting the issue. This can be done similarly to 
HADOOP-12764 for kms. I've assigned the JIRA to you. 

> Increase default value of httpfs maxHttpHeaderSize
> --
>
> Key: HDFS-10423
> URL: https://issues.apache.org/jira/browse/HDFS-10423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.4
>Reporter: Nicolae Popa
>Priority: Minor
>
> The Tomcat default value of maxHttpHeaderSize is 8k, which is too low for 
> certain Hadoop workloads in Kerberos-enabled environments. This JIRA will 
> change it to 65536 in server.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8829) Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol sockets and allow configuring auto-tuning

2016-05-18 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289239#comment-15289239
 ] 

Colin Patrick McCabe commented on HDFS-8829:


There was no need to make it public because it's only used by unit tests.  Is 
there a reason why it should be public?

> Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol 
> sockets and allow configuring auto-tuning
> -
>
> Key: HDFS-8829
> URL: https://issues.apache.org/jira/browse/HDFS-8829
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.3.0, 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Fix For: 2.8.0
>
> Attachments: HDFS-8829.0001.patch, HDFS-8829.0002.patch, 
> HDFS-8829.0003.patch, HDFS-8829.0004.patch, HDFS-8829.0005.patch, 
> HDFS-8829.0006.patch
>
>
> {code:java}
>   private void initDataXceiver(Configuration conf) throws IOException {
> // find free port or use privileged port provided
> TcpPeerServer tcpPeerServer;
> if (secureResources != null) {
>   tcpPeerServer = new TcpPeerServer(secureResources);
> } else {
>   tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
>   DataNode.getStreamingAddr(conf));
> }
> 
> tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
> {code}
> The last line sets SO_RCVBUF explicitly, thus disabling tcp auto-tuning on 
> some system.
> Shall we make this behavior configurable?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2016-05-18 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289233#comment-15289233
 ] 

Arpit Agarwal edited comment on HDFS-9226 at 5/18/16 4:20 PM:
--

Test failures were not caused by the patch (verified all tests passed for me 
locally with the v06 patch applied). The checkstyle warnings were not 
introduced by your patch either.

I will commit this shortly.


was (Author: arpitagarwal):
Test failures were not caused by the patch. The checkstyle warnings were not 
introduced by your patch either.

I will commit this shortly.

> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch, HDFS-9226.005.patch, 
> HDFS-9926.006.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at 

[jira] [Commented] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2016-05-18 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289233#comment-15289233
 ] 

Arpit Agarwal commented on HDFS-9226:
-

Test failures were not caused by the patch. The checkstyle warnings were not 
introduced by your patch either.

I will commit this shortly.

> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch, HDFS-9226.005.patch, 
> HDFS-9926.006.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>  

[jira] [Commented] (HDFS-9890) libhdfs++: Add test suite to simulate network issues

2016-05-18 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289219#comment-15289219
 ] 

James Clampffer commented on HDFS-9890:
---

Has anyone been able to reproduce this error?  I've reviewed the patch a couple 
of times now and can't find anything in it that looks like it could trigger this 
sort of error.  I think it's more likely that the patch is exposing a real bug 
that should be tracked in another JIRA.  

I'm going to spend a few more hours debugging on different machines with 
more/fewer cores and some different architectures, but if nothing shows up I'm 
inclined to +1 and deal with the underlying library error once someone can find 
a reproducer.  IMO, the value of landing this to help prevent regressions 
before HA and Kerberos are done outweighs the bugs that occasionally emerge when 
it's run.

> libhdfs++: Add test suite to simulate network issues
> 
>
> Key: HDFS-9890
> URL: https://issues.apache.org/jira/browse/HDFS-9890
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-9890.HDFS-8707.000.patch, 
> HDFS-9890.HDFS-8707.001.patch, HDFS-9890.HDFS-8707.002.patch, 
> HDFS-9890.HDFS-8707.003.patch, HDFS-9890.HDFS-8707.004.patch, 
> HDFS-9890.HDFS-8707.005.patch, HDFS-9890.HDFS-8707.006.patch, 
> HDFS-9890.HDFS-8707.007.patch, hs_err_pid26832.log, hs_err_pid4944.log
>
>
> I propose adding a test suite to simulate various network issues/failures in 
> order to get good test coverage on some of the retry paths that aren't easy 
> to hit in mock unit tests.
> At the moment the only things that hit the retry paths are the gmock unit 
> tests.  The gmock tests are only as good as their mock implementations, which 
> do a great job of simulating protocol correctness but not more complex 
> interactions.  They also can't really simulate the types of lock contention 
> and subtle memory stomps that show up while doing hundreds or thousands of 
> concurrent reads.  We should add a new minidfscluster test that focuses on 
> heavy read/seek load and then randomly converts error codes returned by 
> network functions into errors (a rough sketch of this load shape follows the 
> list below).
> List of things to simulate (while heavily loaded), roughly in order of how 
> badly I think they need to be tested at the moment:
> -Rpc connection disconnect
> -Rpc connection slowed down enough to cause a timeout and trigger retry
> -DN connection disconnect
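
For illustration, a rough Java sketch of the heavy read/seek load shape such a 
minidfscluster test would generate. The proposed suite itself targets libhdfs++ 
(C++) and injects errors into network calls, so this is only the load-generation 
half, and the file name, size, and thread counts are made-up assumptions:
{code}
// Assumed imports: org.apache.hadoop.conf.Configuration, org.apache.hadoop.fs.*,
// org.apache.hadoop.hdfs.*, java.io.IOException, java.util.Random, java.util.concurrent.*
Configuration conf = new Configuration();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
cluster.waitActive();
final FileSystem fs = cluster.getFileSystem();
final Path file = new Path("/stress");                   // hypothetical test file
final int len = 64 * 1024 * 1024;
DFSTestUtil.createFile(fs, file, len, (short) 3, 0xBEEFL);

ExecutorService pool = Executors.newFixedThreadPool(64); // many concurrent readers
for (int i = 0; i < 64; i++) {
  pool.submit(() -> {
    byte[] buf = new byte[4096];
    Random rand = new Random();
    try (FSDataInputStream in = fs.open(file)) {
      for (int n = 0; n < 10000; n++) {
        in.seek(rand.nextInt(len - buf.length));         // random seek...
        in.readFully(buf);                               // ...then read
      }
    } catch (IOException e) {
      // under fault injection, client retry paths should absorb most of these
    }
    return null;
  });
}
pool.shutdown();
{code}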



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-5395) write(ByteBuffer) method

2016-05-18 Thread Jorge Veiga Fachal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289206#comment-15289206
 ] 

Jorge Veiga Fachal commented on HDFS-5395:
--

Is there any news about this issue? I am trying to write off-heap ByteBuffers 
without copying them into a byte array every time. I don't know if there is any 
workaround that could do the job here.
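
For reference, a minimal sketch of the copy-based workaround in question, 
assuming a direct (off-heap) buffer; {{out}} stands for an already-open 
FSDataOutputStream:
{code}
ByteBuffer src = ByteBuffer.allocateDirect(8192);  // stand-in for a native buffer
// ... native/JNI code fills src ...
src.flip();
byte[] staging = new byte[src.remaining()];
src.get(staging);                        // the off-heap -> on-heap copy to avoid
out.write(staging, 0, staging.length);   // out: an open FSDataOutputStream
{code}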

> write(ByteBuffer) method
> 
>
> Key: HDFS-5395
> URL: https://issues.apache.org/jira/browse/HDFS-5395
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.2.0
>Reporter: john lilley
>Priority: Minor
>
> It would be great to have a write(ByteBuffer) API in FSDataOutputStream, so 
> that JNI callers could perform a write without making an extra copy.  The 
> complementary read(ByteBuffer) call exists in FSDataInputStream, so this 
> would seem to be an omission.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9226) MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils

2016-05-18 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289179#comment-15289179
 ] 

Josh Elser commented on HDFS-9226:
--

Hrm, let me take a look at these unit test failures and checkstyle. I can fix 
these.

> MiniDFSCluster leaks dependency Mockito via DataNodeTestUtils
> -
>
> Key: HDFS-9226
> URL: https://issues.apache.org/jira/browse/HDFS-9226
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HDFS-9226.001.patch, HDFS-9226.002.patch, 
> HDFS-9226.003.patch, HDFS-9226.004.patch, HDFS-9226.005.patch, 
> HDFS-9926.006.patch
>
>
> Noticed a test failure when attempting to run Accumulo unit tests against 
> 2.8.0-SNAPSHOT:
> {noformat}
> java.lang.NoClassDefFoundError: org/mockito/stubbing/Answer
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.mockito.stubbing.Answer
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.shouldWait(MiniDFSCluster.java:2421)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2323)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2367)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1529)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
>   at 
> org.apache.accumulo.start.test.AccumuloDFSBase.miniDfsClusterSetup(AccumuloDFSBase.java:67)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> 

[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Attachment: HDFS-10425.01.patch

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-10425.01.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of the IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Status: Patch Available  (was: Open)

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Attachments: HDFS-10425.01.patch
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of the IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10426) TestPendingInvalidateBlock failed in trunk

2016-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15289029#comment-15289029
 ] 

Hadoop QA commented on HDFS-10426:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 32s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 14s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.TestCrcCorruption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804641/HDFS-10426.001.patch |
| JIRA Issue | HDFS-10426 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 871e0a0fd7f7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8a9ecb7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15482/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15482/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15482/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15482/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> TestPendingInvalidateBlock failed in trunk
> --
>
> Key: HDFS-10426
> URL: https://issues.apache.org/jira/browse/HDFS-10426
> Project: Hadoop 

[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Affects Version/s: (was: 0.23.0)

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of the IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Priority: Trivial  (was: Major)

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of the IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Target Version/s:   (was: 2.8.0)

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of the IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Fix Version/s: (was: 0.23.0)

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of the IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10417) Improve error message from checkBlockLocalPathAccess

2016-05-18 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10417:
--
Status: Patch Available  (was: Reopened)

> Improve error message from checkBlockLocalPathAccess
> 
>
> Key: HDFS-10417
> URL: https://issues.apache.org/jira/browse/HDFS-10417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10417.000.patch, HDFS-10417.001.patch
>
>
> The exception message thrown by {{checkBlockLocalPathAccess}} is very specific 
> to the implementation details. It's really hard for users to understand unless 
> they read and understand the code. 
> The code is shown as follows:
> {code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
>   private void checkBlockLocalPathAccess() throws IOException {
> checkKerberosAuthMethod("getBlockLocalPathInfo()");
> String currentUser = 
> UserGroupInformation.getCurrentUser().getShortUserName();
> if (!usersWithLocalPathAccess.contains(currentUser)) {
>   throw new AccessControlException(
>   "Can't continue with getBlockLocalPathInfo() "
>   + "authorization. The user " + currentUser
>   + " is not allowed to call getBlockLocalPathInfo");
> }
>   }
> {code}
> (basically, they need to understand the code logic of getBlockLocalPathInfo)
> \\
> Note that {{usersWithLocalPathAccess}} is a *private final* field populated 
> purely from the configuration setting {{dfs.block.local-path-access.user}}:
> {code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
> private final List<String> usersWithLocalPathAccess;
> 
> this.usersWithLocalPathAccess = Arrays.asList(
>     conf.getTrimmedStrings(DFSConfigKeys.DFS_BLOCK_LOCAL_PATH_ACCESS_USER_KEY));
> {code}
> In other words, the check fails simply because the current user is not 
> specified in the configuration setting 
> {{dfs.block.local-path-access.user}}. The message should be much clearer so 
> that users can easily take action, as demonstrated in the attached patch. 
> Thanks!
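
For illustration only (the actual wording is in the attached patch, which is not 
reproduced here), a clearer message would name the configuration key that drives 
the check:
{code}
// Hypothetical rewording, not the patch itself.
if (!usersWithLocalPathAccess.contains(currentUser)) {
  throw new AccessControlException("getBlockLocalPathInfo() is not allowed "
      + "for user " + currentUser + ": the user is not listed in the "
      + DFSConfigKeys.DFS_BLOCK_LOCAL_PATH_ACCESS_USER_KEY + " setting");
}
{code}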



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10427) Write and Read SequenceFile Parallelly - java.io.IOException: Cannot obtain block length for LocatedBlock

2016-05-18 Thread Syed Akram (JIRA)
Syed Akram created HDFS-10427:
-

 Summary: Write and Read SequenceFile Parallelly - 
java.io.IOException: Cannot obtain block length for LocatedBlock
 Key: HDFS-10427
 URL: https://issues.apache.org/jira/browse/HDFS-10427
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode
Affects Versions: 2.7.2
Reporter: Syed Akram
Priority: Blocker


Trying to write a key/value and read an already-written key/value in a 
SequenceFile in parallel, but while doing that:

Writer - appendOption true

java.io.IOException: Cannot obtain block length for 
LocatedBlock{BP-1019538077-localhost-1459944245378:blk_1075356142_3219260; 
getBlockSize()=2409; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[dn1:50010,DS-21698924-4178-4c08-ba41-aa86770ef0d0,DISK],
 
DatanodeInfoWithStorage[dn3:50010,DS-8e3dc8c0-4e34-4d12-86a3-48b189b78f5d,DISK],
 
DatanodeInfoWithStorage[dn2:50010,DS-fb22c1c2-e059-4e0e-91e0-df838beb86f9,DISK]]}
at 
org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:428)
at 
org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:336)
at 
org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:272)
at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:264)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1526)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:303)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:299)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:299)
at 
org.apache.hadoop.io.SequenceFile$Reader.openFile(SequenceFile.java:1902)
at 
org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1822)


Reading while the write (SequenceFile.Writer) is open works fine.

But when we do both in parallel (start a write with append=true and then read an 
already existing key/value), we face this issue.
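
For context, a rough sketch of the reported scenario, assuming the Hadoop 2.7.x 
SequenceFile API (the path and key/value types are made up):
{code}
Configuration conf = new Configuration();
Path path = new Path("/data/events.seq");           // hypothetical file

// Writer side: open with appendIfExists(true), i.e. "appendOption true".
SequenceFile.Writer writer = SequenceFile.createWriter(conf,
    SequenceFile.Writer.file(path),
    SequenceFile.Writer.keyClass(Text.class),
    SequenceFile.Writer.valueClass(Text.class),
    SequenceFile.Writer.appendIfExists(true));
writer.append(new Text("k"), new Text("v"));        // last block under construction

// Reader side, running concurrently: DFSInputStream#readBlockLength fails with
// "Cannot obtain block length" while the last block is still being written.
SequenceFile.Reader reader =
    new SequenceFile.Reader(conf, SequenceFile.Reader.file(path));
{code}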



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10426) TestPendingInvalidateBlock failed in trunk

2016-05-18 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10426:
-
Status: Patch Available  (was: Open)

Attached a simple patch for this.

> TestPendingInvalidateBlock failed in trunk
> --
>
> Key: HDFS-10426
> URL: https://issues.apache.org/jira/browse/HDFS-10426
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10426.001.patch
>
>
> The test {{TestPendingInvalidateBlock}} failed sometimes. The stack info:
> {code}
> org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
> testPendingDeletion(org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock)
>   Time elapsed: 7.703 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeletion(TestPendingInvalidateBlock.java:92)
> {code}
> It looks like the {{invalidateBlock}} has been removed before we do the check:
> {code}
> // restart NN
> cluster.restartNameNode(true);
> dfs.delete(foo, true);
> Assert.assertEquals(0, cluster.getNamesystem().getBlocksTotal());
> Assert.assertEquals(REPLICATION, cluster.getNamesystem()
> .getPendingDeletionBlocks());
> Assert.assertEquals(REPLICATION,
> dfs.getPendingDeletionBlocksCount());
> {code}
> I looked into the related configurations and found that the property 
> {{dfs.namenode.replication.interval}} was set to just 1 second in this test. 
> If the delete operation is slow after the delay of 
> {{dfs.namenode.startup.delay.block.deletion.sec}}, it will cause this case. As 
> the stack info above shows, the failed test took 7.7s, more than the 5+1 seconds.
> One way to improve this (see the sketch below):
> * Increase the time of {{dfs.namenode.replication.interval}}
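
A minimal sketch of that kind of test-setup change, with illustrative values 
(the attached patch is authoritative):
{code}
// Illustrative only: give the block-deletion machinery more headroom in the test.
Configuration conf = new Configuration();
conf.setLong(DFSConfigKeys.DFS_NAMENODE_STARTUP_DELAY_BLOCK_DELETION_SEC_KEY, 5);
conf.setInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_INTERVAL_KEY, 5); // was 1 second
{code}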



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10426) TestPendingInvalidateBlock failed in trunk

2016-05-18 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10426:
-
Attachment: HDFS-10426.001.patch

> TestPendingInvalidateBlock failed in trunk
> --
>
> Key: HDFS-10426
> URL: https://issues.apache.org/jira/browse/HDFS-10426
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10426.001.patch
>
>
> The test {{TestPendingInvalidateBlock}} failed sometimes. The stack info:
> {code}
> org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
> testPendingDeletion(org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock)
>   Time elapsed: 7.703 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeletion(TestPendingInvalidateBlock.java:92)
> {code}
> It looks like the {{invalidateBlock}} has been removed before we do the check:
> {code}
> // restart NN
> cluster.restartNameNode(true);
> dfs.delete(foo, true);
> Assert.assertEquals(0, cluster.getNamesystem().getBlocksTotal());
> Assert.assertEquals(REPLICATION, cluster.getNamesystem()
> .getPendingDeletionBlocks());
> Assert.assertEquals(REPLICATION,
> dfs.getPendingDeletionBlocksCount());
> {code}
> I looked into the related configurations and found that the property 
> {{dfs.namenode.replication.interval}} was set to just 1 second in this test. 
> If the delete operation is slow after the delay of 
> {{dfs.namenode.startup.delay.block.deletion.sec}}, it will cause this case. As 
> the stack info above shows, the failed test took 7.7s, more than the 5+1 seconds.
> One way to improve this:
> * Increase the time of {{dfs.namenode.replication.interval}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10363) Ozone: Introduce new config keys for SCM service endpoints

2016-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15288871#comment-15288871
 ] 

Hadoop QA commented on HDFS-10363:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
57s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} HDFS-7240 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
28s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s 
{color} | {color:green} HDFS-7240 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 34s 
{color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: patch generated 0 
new + 179 unchanged - 1 fixed = 179 total (was 180) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 41s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 48s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 151m 7s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.ozone.web.client.TestBuckets |
|   | hadoop.ozone.web.TestOzoneVolumes |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.web.client.TestVolume |
|   | hadoop.ozone.web.TestOzoneWebAccess |
|   | hadoop.hdfs.TestHFlush |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
| 

[jira] [Updated] (HDFS-10426) TestPendingInvalidateBlock failed in trunk

2016-05-18 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10426:
-
Description: 
The test {{TestPendingInvalidateBlock}} failed sometimes. The stack info:
{code}
org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
testPendingDeletion(org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock)
  Time elapsed: 7.703 sec  <<< FAILURE!
java.lang.AssertionError: expected:<2> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeletion(TestPendingInvalidateBlock.java:92)
{code}
It looks like the {{invalidateBlock}} has been removed before we do the check:
{code}
// restart NN
cluster.restartNameNode(true);
dfs.delete(foo, true);
Assert.assertEquals(0, cluster.getNamesystem().getBlocksTotal());
Assert.assertEquals(REPLICATION, cluster.getNamesystem()
.getPendingDeletionBlocks());
Assert.assertEquals(REPLICATION,
dfs.getPendingDeletionBlocksCount());
{code}
I looked into the related configurations and found that the property 
{{dfs.namenode.replication.interval}} was set to just 1 second in this test. 
If the delete operation is slow after the delay of 
{{dfs.namenode.startup.delay.block.deletion.sec}}, it will cause this case. As 
the stack info above shows, the failed test took 7.7s, more than the 5+1 seconds.

One way to improve this:

* Increase the time of {{dfs.namenode.replication.interval}}

  was:
The test {{TestPendingInvalidateBlock}} failed sometimes. The stack info:
{code}
org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
testPendingDeletion(org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock)
  Time elapsed: 7.703 sec  <<< FAILURE!
java.lang.AssertionError: expected:<2> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeletion(TestPendingInvalidateBlock.java:92)
{code}
It looks like the {{invalidateBlock}} has been removed before we do the check:
{code}
// restart NN
cluster.restartNameNode(true);
dfs.delete(foo, true);
Assert.assertEquals(0, cluster.getNamesystem().getBlocksTotal());
Assert.assertEquals(REPLICATION, cluster.getNamesystem()
.getPendingDeletionBlocks());
Assert.assertEquals(REPLICATION,
dfs.getPendingDeletionBlocksCount());
{code}
I looked into the related configurations and found that the property 
{{dfs.namenode.replication.interval}} was set to just 1 second in this test. 
If the delete operation is slow after the delay of 
{{dfs.namenode.startup.delay.block.deletion.sec}}, it will cause this case. As 
the stack info above shows, the failed test took 7.7s, more than the 5+1 seconds.

Two methods would improve this:

* Increase the time of {{dfs.namenode.startup.delay.block.deletion.sec}}
* Increase the time of {{dfs.namenode.replication.interval}}


> TestPendingInvalidateBlock failed in trunk
> --
>
> Key: HDFS-10426
> URL: https://issues.apache.org/jira/browse/HDFS-10426
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>
> The test {{TestPendingInvalidateBlock}} failed sometimes. The stack info:
> {code}
> org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
> testPendingDeletion(org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock)
>   Time elapsed: 7.703 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeletion(TestPendingInvalidateBlock.java:92)
> {code}
> It looks like the {{invalidateBlock}} has been removed before we do the check:
> {code}
> // restart NN
> cluster.restartNameNode(true);
> dfs.delete(foo, true);
> Assert.assertEquals(0, cluster.getNamesystem().getBlocksTotal());
> 

[jira] [Created] (HDFS-10426) TestPendingInvalidateBlock failed in trunk

2016-05-18 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-10426:


 Summary: TestPendingInvalidateBlock failed in trunk
 Key: HDFS-10426
 URL: https://issues.apache.org/jira/browse/HDFS-10426
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Yiqun Lin
Assignee: Yiqun Lin


The test {{TestPendingInvalidateBlock}} failed sometimes. The stack info:
{code}
org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
testPendingDeletion(org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock)
  Time elapsed: 7.703 sec  <<< FAILURE!
java.lang.AssertionError: expected:<2> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeletion(TestPendingInvalidateBlock.java:92)
{code}
It looks like the {{invalidateBlock}} has been removed before we do the check:
{code}
// restart NN
cluster.restartNameNode(true);
dfs.delete(foo, true);
Assert.assertEquals(0, cluster.getNamesystem().getBlocksTotal());
Assert.assertEquals(REPLICATION, cluster.getNamesystem()
.getPendingDeletionBlocks());
Assert.assertEquals(REPLICATION,
dfs.getPendingDeletionBlocksCount());
{code}
I looked into the related configurations and found that the property 
{{dfs.namenode.replication.interval}} was set to just 1 second in this test. 
If the delete operation is slow after the delay of 
{{dfs.namenode.startup.delay.block.deletion.sec}}, it will cause this case. As 
the stack info above shows, the failed test took 7.7s, more than the 5+1 seconds.

Two methods would improve this:

* Increase the time of {{dfs.namenode.startup.delay.block.deletion.sec}}
* Increase the time of {{dfs.namenode.replication.interval}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Description: Since I was working with the NNStorage and TestSaveNamespace 
classes, it is a good time to take care of the IDE and checkstyle warnings.  (was: 
Since I )

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 0.23.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Fix For: 0.23.0
>
>
> Since I was working with the NNStorage and TestSaveNamespace classes, it is a 
> good time to take care of the IDE and checkstyle warnings.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-10425:

Description: Since I   (was: This JIRA tracks a TODO in TestSaveNamespace. 
Currently, if, while writing the VERSION files in the storage directories, one 
of the directories fails, the entire operation throws IOE. This is unnecessary 
-- instead, just that directory should be marked as failed.

This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it 
does not ever cause data loss, and would rarely occur in practice (the dir would 
have to fail between writing the fsimage file and writing VERSION))

> Clean up NNStorage and TestSaveNamespace
> 
>
> Key: HDFS-10425
> URL: https://issues.apache.org/jira/browse/HDFS-10425
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 0.23.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Fix For: 0.23.0
>
>
> Since I 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10425) Clean up NNStorage and TestSaveNamespace

2016-05-18 Thread Andras Bokor (JIRA)
Andras Bokor created HDFS-10425:
---

 Summary: Clean up NNStorage and TestSaveNamespace
 Key: HDFS-10425
 URL: https://issues.apache.org/jira/browse/HDFS-10425
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 0.23.0
Reporter: Andras Bokor
Assignee: Andras Bokor


This JIRA tracks a TODO in TestSaveNamespace. Currently, if, while writing the 
VERSION files in the storage directories, one of the directories fails, the 
entire operation throws IOE. This is unnecessary -- instead, just that 
directory should be marked as failed.

This is targeted to be fixed _after_ HDFS-1073 is merged to trunk, since it 
does not ever cause data loss, and would rarely occur in practice (the dir would 
have to fail between writing the fsimage file and writing VERSION)
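
A hedged sketch of the behavior the TODO asks for; the helper names below are 
assumptions, not the eventual patch:
{code}
// Catch the IOException per storage directory and mark only that directory as
// failed, instead of letting one bad directory abort the whole save.
List<StorageDirectory> failed = new ArrayList<>();
for (Iterator<StorageDirectory> it = storage.dirIterator(); it.hasNext();) {
  StorageDirectory sd = it.next();
  try {
    storage.writeProperties(sd);           // write this dir's VERSION file
  } catch (IOException ioe) {
    LOG.warn("Failed to write VERSION in " + sd.getRoot(), ioe);
    failed.add(sd);                        // remember only the failed directory
  }
}
storage.reportErrorsOnDirectories(failed); // assumed error-marking hook
{code}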



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10424) DatanodeLifelineProtocol not able to use under security cluster

2016-05-18 Thread gu-chi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15288853#comment-15288853
 ] 

gu-chi commented on HDFS-10424:
---

[~cnauroth] please help check, thx

> DatanodeLifelineProtocol not able to use under security cluster
> ---
>
> Key: HDFS-10424
> URL: https://issues.apache.org/jira/browse/HDFS-10424
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: gu-chi
>Priority: Blocker
>
> {quote}
> protocol org.apache.hadoop.hdfs.server.protocol.DatanodeLifelineProtocol is 
> unauthorized for user * (auth:KERBEROS) | Server.java:1979
> {quote}
> I am using a secure cluster that authenticates with Kerberos. As I checked the 
> code, with security auth enabled, because DatanodeLifelineProtocol is not 
> inside HDFSPolicyProvider, an AuthorizationException will be thrown at line 96 
> when ServiceAuthorizationManager authorizes the call.
> Please correct me if I am wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10424) DatanodeLifelineProtocol not able to use under security cluster

2016-05-18 Thread gu-chi (JIRA)
gu-chi created HDFS-10424:
-

 Summary: DatanodeLifelineProtocol not able to use under security 
cluster
 Key: HDFS-10424
 URL: https://issues.apache.org/jira/browse/HDFS-10424
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: gu-chi
Priority: Blocker


{quote}
protocol org.apache.hadoop.hdfs.server.protocol.DatanodeLifelineProtocol is 
unauthorized for user * (auth:KERBEROS) | Server.java:1979
{quote}

I am using a secure cluster that authenticates with Kerberos. As I checked the 
code, with security auth enabled, because DatanodeLifelineProtocol is not 
inside HDFSPolicyProvider, an AuthorizationException will be thrown at line 96 
when ServiceAuthorizationManager authorizes the call.

Please correct me if I am wrong.
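
If that is right, the fix presumably looks something like registering the 
protocol in HDFSPolicyProvider; the ACL key name below is an assumption, not an 
existing constant:
{code}
// Hypothetical sketch: register the lifeline protocol so that
// ServiceAuthorizationManager can find an ACL for it.
public class HDFSPolicyProvider extends PolicyProvider {
  private static final Service[] hdfsServices = new Service[] {
    // ... existing entries such as ClientProtocol, DatanodeProtocol ...
    new Service("security.datanode.lifeline.protocol.acl",   // assumed key name
        DatanodeLifelineProtocol.class),
  };

  @Override
  public Service[] getServices() {
    return hdfsServices;
  }
}
{code}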




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10236) Erasure Coding: Rename replication-based names in BlockManager to more generic [part-3]

2016-05-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15288840#comment-15288840
 ] 

Hadoop QA commented on HDFS-10236:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: patch generated 0 
new + 428 unchanged - 1 fixed = 428 total (was 429) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 72m 4s 
{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 3s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804628/HDFS-10236-01.patch |
| JIRA Issue | HDFS-10236 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 83886f18b79d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8a9ecb7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15481/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15481/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Erasure Coding: Rename replication-based names in BlockManager to more 
> generic [part-3]
> ---
>
> Key: HDFS-10236
> URL: https://issues.apache.org/jira/browse/HDFS-10236
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: 
