[jira] [Commented] (HDFS-9368) Implement reads with implicit offset state in libhdfs++

2015-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15002748#comment-15002748
 ] 

Hadoop QA commented on HDFS-9368:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
24s {color} | {color:green} HDFS-8707 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-hdfs-native-client in HDFS-8707 failed with JDK 
v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-hdfs-native-client in HDFS-8707 failed with JDK 
v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 11s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 11s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 11s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 13s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 13s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 11s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 12s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} Patch generated 425 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 8m 9s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-12 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12772028/HDFS-9368.HDFS-8707.001.patch
 |
| JIRA Issue | HDFS-9368 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux c235ed658471 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-fa12328/precommit/personality/hadoop.sh
 |
| git revision | HDFS-8707 / fbba870 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13491/artifact/patchprocess/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_60.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13491/artifact/patchprocess/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_79.txt
 |
| compile | 

[jira] [Updated] (HDFS-9410) TestDFSAdminWithHA#tearDown may throw NPE

2015-11-12 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9410:

Attachment: HDFS-9410.003.patch

> TestDFSAdminWithHA#tearDown may throw NPE
> -
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> If for any reason {{setUpHaCluster}} fails before storing {{System.out}} and 
> {{System.err}} as member variables {{originOut}} and {{originErr}}, then in 
> {{tearDown}} {{System.out}} and {{System.err}} are set to null. This could 
> cause all following tests to fail when calling {{flush}}.
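
For illustration, a minimal sketch of the guarded save/restore this implies (hypothetical test code, not taken from the attached patches):

{code:title=sketch}
import java.io.PrintStream;
import org.junit.After;
import org.junit.Before;

public class TestDFSAdminWithHASketch {
  private PrintStream originOut;
  private PrintStream originErr;

  @Before
  public void setUpHaCluster() throws Exception {
    // Capture the originals before any redirection, so a later failure
    // in cluster setup cannot leave them unset.
    originOut = System.out;
    originErr = System.err;
    // ... redirect System.out / System.err, start the mini cluster ...
  }

  @After
  public void tearDown() throws Exception {
    // Only restore when the capture actually happened; never set the
    // system streams to null.
    if (originOut != null) {
      System.setOut(originOut);
    }
    if (originErr != null) {
      System.setErr(originErr);
    }
  }
}
{code}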



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8968) Erasure coding: a comprehensive I/O throughput benchmark tool

2015-11-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15002756#comment-15002756
 ] 

Hudson commented on HDFS-8968:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8799 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8799/])
HDFS-8968. Erasure coding: a comprehensive I/O throughput benchmark (zhz: rev 
7b00c8e20ee62885097c5e63f110b9eece8ce6b3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ErasureCodeBenchmarkThroughput.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodeBenchmarkThroughput.java


> Erasure coding: a comprehensive I/O throughput benchmark tool
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Kai Zheng
>Assignee: Rui Li
> Fix For: 3.0.0
>
> Attachments: HDFS-8968-HDFS-7285.1.patch, 
> HDFS-8968-HDFS-7285.2.patch, HDFS-8968.3.patch, HDFS-8968.4.patch, 
> HDFS-8968.5.patch
>
>
> We need a new benchmark tool to measure the throughput of client writes and 
> reads, considering the following cases and factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use, and should avoid unnecessary local 
> environment impact, such as the local disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9410) Fix all HDFS unit tests to correctly reset to sysout and syserr

2015-11-12 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9410:

Description: 
Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
throws an NPE. 
The cause is that if for any reason {{setUpHaCluster}} fails before storing 
{{System.out}} and {{System.err}} as member variables {{originOut}} and 
{{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
to null. This could cause all following tests to fail when calling {{flush}}.

This jira tries to fix all similar occurrences of this issue

  was:If for any reason {{setUpHaCluster}} fails before storing {{System.out}} 
and {{System.err}} as member variables {{originOut}} and {{originErr}}, then in 
{{tearDown}} {{System.out}} and {{System.err}} are set to null. This could 
cause all following tests to fail when calling {{flush}}.


> Fix all HDFS unit tests to correctly reset to sysout and syserr
> ---
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9410) Fix all HDFS unit tests to correctly reset to sysout and syserr

2015-11-12 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9410:

Status: Open  (was: Patch Available)

> Fix all HDFS unit tests to correctly reset to sysout and syserr
> ---
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9410) Fix all HDFS unit tests to correctly reset to sysout and syserr

2015-11-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15002916#comment-15002916
 ] 

Xiao Chen commented on HDFS-9410:
-

Hi [~walter.k.su], here comes patch 003 which fixes all similar errors, 
including the 4 you mentioned above.
I found this because {{TestDFSAdminWithHA}} failed for me, but your suggestion 
of fixing them altogether makes sense. Safer tests are favorable after all. :)
I also updated the title and description of this JIRA.

> Fix all HDFS unit tests to correctly reset to sysout and syserr
> ---
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7984) webhdfs:// needs to support provided delegation tokens

2015-11-12 Thread HeeSoo Kim (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HeeSoo Kim updated HDFS-7984:
-
Attachment: HDFS-7984.007.patch

Added a hadoop.token.files property in core-site.xml. In addition, the patch 
checks whether the token files exist.
It also changes RENEWDELEGATIONTOKEN to use delegationParam only when the job 
has credential information. If the job does not have credentials, it still 
uses a SPNEGO connection to obtain the right credential.
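
For illustration, a hedged sketch of how provided token files might be loaded (the property name {{hadoop.token.files}} comes from the comment above; the class and loading logic below are my assumption, built on the existing {{Credentials}} API, and are not taken from the patch):

{code:title=sketch}
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;

public class TokenFileLoader {
  /** Load every token file listed in hadoop.token.files into the current user. */
  public static void loadProvidedTokens(Configuration conf) throws Exception {
    for (String path : conf.getTrimmedStrings("hadoop.token.files")) {
      File tokenFile = new File(path);
      // The comment above says the patch verifies the files exist.
      if (!tokenFile.exists()) {
        throw new IllegalArgumentException("Token file not found: " + path);
      }
      Credentials creds = Credentials.readTokenStorageFile(tokenFile, conf);
      UserGroupInformation.getCurrentUser().addCredentials(creds);
    }
  }
}
{code}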

> webhdfs:// needs to support provided delegation tokens
> --
>
> Key: HDFS-7984
> URL: https://issues.apache.org/jira/browse/HDFS-7984
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: HeeSoo Kim
>Priority: Blocker
> Attachments: HDFS-7984.001.patch, HDFS-7984.002.patch, 
> HDFS-7984.003.patch, HDFS-7984.004.patch, HDFS-7984.005.patch, 
> HDFS-7984.006.patch, HDFS-7984.007.patch, HDFS-7984.patch
>
>
> When using the webhdfs:// filesystem (especially from distcp), we need the 
> ability to inject a delegation token rather than webhdfs initialize its own.  
> This would allow for cross-authentication-zone file system accesses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9420) DiskBalancer : Add DataModels

2015-11-12 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-9420:
--

 Summary: DiskBalancer : Add DataModels
 Key: HDFS-9420
 URL: https://issues.apache.org/jira/browse/HDFS-9420
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: HDFS
Affects Versions: 2.8.0
Reporter: Anu Engineer
Assignee: Anu Engineer


Adds the Data Model classes needed for Disk Balancer. These classes allow us to 
persist the physical model of the cluster. This is needed by the other parts of 
Disk Balancer, such as the Planner and the Executor engine.
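
As a rough illustration of what persisting the physical model could look like (all names below are hypothetical, not taken from the attached patch), one data-model class per volume might be:

{code:title=sketch}
import com.fasterxml.jackson.databind.ObjectMapper;

/** Hypothetical data-model class describing one storage volume on a datanode. */
public class DiskBalancerVolumeSketch {
  private String path;    // mount point of the volume
  private long capacity;  // total bytes on the volume
  private long used;      // bytes currently used

  public String getPath() { return path; }
  public void setPath(String path) { this.path = path; }
  public long getCapacity() { return capacity; }
  public void setCapacity(long capacity) { this.capacity = capacity; }
  public long getUsed() { return used; }
  public void setUsed(long used) { this.used = used; }

  /** Serialize to JSON so a Planner could consume a saved cluster snapshot. */
  public String toJson() throws Exception {
    return new ObjectMapper().writeValueAsString(this);
  }
}
{code}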



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8968) Erasure coding: a comprehensive I/O throughput benchmark tool

2015-11-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15003004#comment-15003004
 ] 

Hudson commented on HDFS-8968:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #675 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/675/])
HDFS-8968. Erasure coding: a comprehensive I/O throughput benchmark (zhz: rev 
7b00c8e20ee62885097c5e63f110b9eece8ce6b3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ErasureCodeBenchmarkThroughput.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodeBenchmarkThroughput.java


> Erasure coding: a comprehensive I/O throughput benchmark tool
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Kai Zheng
>Assignee: Rui Li
> Fix For: 3.0.0
>
> Attachments: HDFS-8968-HDFS-7285.1.patch, 
> HDFS-8968-HDFS-7285.2.patch, HDFS-8968.3.patch, HDFS-8968.4.patch, 
> HDFS-8968.5.patch
>
>
> We need a new benchmark tool to measure the throughput of client writes and 
> reads, considering the following cases and factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use, and should avoid unnecessary local 
> environment impact, such as the local disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9419) Add Optional class to libhdfs++

2015-11-12 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9419:
-
Attachment: HDFS-9419.HDFS-8707.002.patch

Rename TR2->tr2 per [~wheat9]'s request

> Add Optional class to libhdfs++
> ---
>
> Key: HDFS-9419
> URL: https://issues.apache.org/jira/browse/HDFS-9419
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9419.HDFS-8707.002.patch, HDFS-9419.HDFS-8707.diff
>
>
> Copy from 
> https://raw.githubusercontent.com/akrzemi1/Optional/master/optional.hpp



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15002873#comment-15002873
 ] 

Hadoop QA commented on HDFS-7163:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
16s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
46s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs (total was 101, now 102). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 2s 
{color} | {color:red} The patch has 2114 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 56s 
{color} | {color:red} The patch has 95 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 52s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 26s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 47m 44s 
{color} | {color:red} Patch generated 72 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 205m 58s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | 

[jira] [Commented] (HDFS-9358) TestNodeCount#testNodeCount timed out

2015-11-12 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15002872#comment-15002872
 ] 

Wei-Chiu Chuang commented on HDFS-9358:
---

[~iwasakims] Thanks for the patch.
I looked at the patch, and what it does is as follows:
After the NN detects the DN is down, wait until the excess replica is 
invalidated before restarting the stopped DN.

After the DN is restarted, make sure the excess replica is detected.

So the process is deterministic and will always go like this (barring a timeout):
{noformat}
(live, excess): (3, 1) -> (3, 0) -> (2, 1)
{noformat}

I don't have committership, but this looks good to me. I ran the patched test 
and it did not fail in 100 runs.
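
A sketch of the wait-then-restart pattern described above (the helper names are hypothetical and the fragment assumes the {{GenericTestUtils.waitFor}} utility; the actual patch may differ):

{code:title=sketch}
import com.google.common.base.Supplier;
import org.apache.hadoop.test.GenericTestUtils;

// 1. After stopping the DN, wait until the NN invalidates the excess
//    replica: (live, excess) goes from (3, 1) to (3, 0).
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    return countExcessReplicas() == 0;  // hypothetical helper
  }
}, 100, 60000);

// 2. Restart the stopped DN and verify the excess replica is detected
//    again: (3, 0) -> (2, 1).
restartStoppedDataNode();               // hypothetical helper
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    return countLiveReplicas() == 2 && countExcessReplicas() == 1;
  }
}, 100, 60000);
{code}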

> TestNodeCount#testNodeCount timed out
> -
>
> Key: HDFS-9358
> URL: https://issues.apache.org/jira/browse/HDFS-9358
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Masatake Iwasaki
> Attachments: HDFS-9358.001.patch
>
>
> I have seen this test failure occurred a few times in trunk:
> Error Message
> Timeout: excess replica count not equal to 2 for block blk_1073741825_1001 
> after 2 msec.  Last counts: live = 2, excess = 0, corrupt = 0
> Stacktrace
> java.util.concurrent.TimeoutException: Timeout: excess replica count not 
> equal to 2 for block blk_1073741825_1001 after 2 msec.  Last counts: live 
> = 2, excess = 0, corrupt = 0
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:152)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:146)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.__CLR4_0_39bdgm666uf(TestNodeCount.java:130)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.testNodeCount(TestNodeCount.java:54)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9117) Config file reader / options classes for libhdfs++

2015-11-12 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15002991#comment-15002991
 ] 

Haohui Mai commented on HDFS-9117:
--

Thanks for splitting it up.

{code}
+bool Configuration::AddResourceStream(std::istream & stream)
+{
+  stream.seekg(0,std::ios::end);
+  std::streampos length = stream.tellg();
+  stream.seekg(0,std::ios::beg);
...
{code}

It's not streaming per se but reads the whole stream. In the first cut maybe we 
can remove it from the API and keep the version that only takes a string?

{code}
...
+  public:
+// Constructs a configuration with no search path and no resources loaded
+Configuration();
+
+// clears out search path and all loaded values
+void Clear();
+
+// Loads resources from a file or a stream
+bool AddResourceStream(std::istream & stream);
+bool AddResourceString(const std::string & stream);
{code}

One messy problem we have on the Java side is implementing thread safety for 
the configuration class. I suggest making Configuration an immutable object, 
tweaking the APIs as follows to eliminate the problem of thread safety by 
design:

{code}
/**
 * A Configuration object is an immutable object that holds the configuration 
values specified from the XML.
 * ...
 **/
class Configuration {
public:
  static Configuration *Load(const std::string &);
  // This can be implemented in a separate jira.
  static Configuration *LoadFromFile(...);
  // Immutable object. No add / clear methods.
};
{code}

{code}
+std::string GetWithDefault(const std::string & key, 
const std::string & defaultValue);
+optional<std::string> Get(const std::string & key);
+int64_t GetIntWithDefault(const std::string & key, 
int64_t defaultValue);
+optional<int64_t> GetInt(const std::string & key)
...
{code}

Instead of having calls for every type, I suggest adopting the APIs of boost's 
property tree, which exposes a single {{get}} method with a template argument:

{code}
template<class T>
optional<T> get(const string& key) const;
{code}

That gives a minimal exposure from the API perspective.

{code}

+// Transparent data holder for property values
+struct ConfigData {
+  std::string value;
+  bool final;
+  ConfigData() : final(false) {};
+  ConfigData(const std::string &value) : value(value), final(false) {}
+  void operator = (const std::string & newValue) {value = newValue; final 
= false;}
+};
+std::map<std::string, ConfigData> raw_values;

{code}

It's preferable to move the details of implementing {{final}} into the {{.cc}} 
file.

{code}
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/configuration_test.h
{code}

Do you want to merge the {{.h}} file with the {{.cc}} file?

There are several styling issues w.r.t. HDFS-9328, but it's okay to fix them 
in the next iteration. 


> Config file reader / options classes for libhdfs++
> --
>
> Key: HDFS-9117
> URL: https://issues.apache.org/jira/browse/HDFS-9117
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9117.HDFS-8707.001.patch, 
> HDFS-9117.HDFS-8707.002.patch, HDFS-9117.HDFS-8707.003.patch, 
> HDFS-9117.HDFS-8707.004.patch, HDFS-9117.HDFS-8707.005.patch, 
> HDFS-9117.HDFS-8707.006.patch, HDFS-9117.HDFS-8707.008.patch, 
> HDFS-9117.HDFS-8707.009.patch, HDFS-9117.HDFS-8707.010.patch, 
> HDFS-9117.HDFS-8707.011.patch, HDFS-9117.HDFS-8707.012.patch, 
> HDFS-9117.HDFS-8707.013.patch, HDFS-9117.HDFS-8707.014.patch, 
> HDFS-9117.HDFS-9288.007.patch
>
>
> For environmental compatability with HDFS installations, libhdfs++ should be 
> able to read the configurations from Hadoop XML files and behave in line with 
> the Java implementation.
> Most notably, machine names and ports should be readable from Hadoop XML 
> configuration files.
> Similarly, an internal Options architecture for libhdfs++ should be developed 
> to efficiently transport the configuration information within the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9144) Refactor libhdfs into stateful/ephemeral objects

2015-11-12 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15003043#comment-15003043
 ] 

Haohui Mai commented on HDFS-9144:
--

This is a pretty large patch. Can you explain what you're trying to achieve 
here? Some more questions:

* What is the purpose of having multiple inheritance across {{FileHandle}}, 
{{FileHandleImpl}}, {{CancelHandle}}, and {{CancelHandleImpl}}?
* Why use multiple inheritance at all?
* What is the principle of life-cycle management here? Why change pointers to 
shared_ptr in {{RemoteBlockReader}}? The stream is deliberately left as a plain 
pointer because continuations do not own it; the {{State}} object owns it. What 
is the purpose of copying the callback handlers?
* What is cancelable? Is a request cancelable, or a stream? Why does 
{{AsyncStream}} inherit from {{Cancelable}}?

The patch contains many unnecessary changes and can be separated into several 
patches. There are refactors in the NN, the DN, and various changes here and 
there. They need to be separated, reviewed, and applied independently.



> Refactor libhdfs into stateful/ephemeral objects
> 
>
> Key: HDFS-9144
> URL: https://issues.apache.org/jira/browse/HDFS-9144
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9144.HDFS-8707.001.patch, 
> HDFS-9144.HDFS-8707.002.patch
>
>
> In discussion for other efforts, we decided that we should separate several 
> concerns:
> * A posix-like FileSystem/FileHandle object (stream-based, positional reads)
> * An ephemeral ReadOperation object that holds the state for 
> reads-in-progress, which consumes
> * An immutable FileInfo object which holds the block map and file size (and 
> other metadata about the file that we assume will not change over the life of 
> the file)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9410) Fix all HDFS unit tests to correctly reset to sysout and syserr

2015-11-12 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9410:

Summary: Fix all HDFS unit tests to correctly reset to sysout and syserr  
(was: TestDFSAdminWithHA#tearDown may throw NPE)

> Fix all HDFS unit tests to correctly reset to sysout and syserr
> ---
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> If for any reason {{setUpHaCluster}} fails before storing {{System.out}} and 
> {{System.err}} as member variables {{originOut}} and {{originErr}}, then in 
> {{tearDown}} {{System.out}} and {{System.err}} are set to null. This could 
> cause all following tests to fail when calling {{flush}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8968) Erasure coding: a comprehensive I/O throughput benchmark tool

2015-11-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15002981#comment-15002981
 ] 

Hudson commented on HDFS-8968:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1399 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1399/])
HDFS-8968. Erasure coding: a comprehensive I/O throughput benchmark (zhz: rev 
7b00c8e20ee62885097c5e63f110b9eece8ce6b3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ErasureCodeBenchmarkThroughput.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodeBenchmarkThroughput.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Erasure coding: a comprehensive I/O throughput benchmark tool
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Kai Zheng
>Assignee: Rui Li
> Fix For: 3.0.0
>
> Attachments: HDFS-8968-HDFS-7285.1.patch, 
> HDFS-8968-HDFS-7285.2.patch, HDFS-8968.3.patch, HDFS-8968.4.patch, 
> HDFS-8968.5.patch
>
>
> We need a new benchmark tool to measure the throughput of client writes and 
> reads, considering the following cases and factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use, and should avoid unnecessary local 
> environment impact, such as the local disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9410) Fix all HDFS unit tests to correctly reset to sysout and syserr

2015-11-12 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9410:

Status: Patch Available  (was: Open)

> Fix all HDFS unit tests to correctly reset to sysout and syserr
> ---
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9419) Add Optional class to libhdfs++

2015-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15003071#comment-15003071
 ] 

Hadoop QA commented on HDFS-9419:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
44s {color} | {color:green} HDFS-8707 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-hdfs-native-client in HDFS-8707 failed with JDK 
v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-hdfs-native-client in HDFS-8707 failed with JDK 
v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 11s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 11s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 11s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 12s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 12s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 55 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s 
{color} | {color:red} The patch has 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 12s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 13s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s 
{color} | {color:red} Patch generated 426 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 7s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-12 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12772056/HDFS-9419.HDFS-8707.002.patch
 |
| JIRA Issue | HDFS-9419 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  cc  |
| 

[jira] [Commented] (HDFS-9387) Parse namenodeUri parameter only once in NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()

2015-11-12 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15003288#comment-15003288
 ] 

Xiaoyu Yao commented on HDFS-9387:
--

bq. Is it possible to run the -namenode arguments for replication test? Hm... 
if it's not, the -namenode and -op all conflict with each other.

[~liuml07], that's my thought too. I suggest we either fix it or block it as 
unsupported via HDFS-9421. cc: [~clamb] for additional comments.

> Parse namenodeUri parameter only once in 
> NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()
> 
>
> Key: HDFS-9387
> URL: https://issues.apache.org/jira/browse/HDFS-9387
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9387.000.patch
>
>
> In {{NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()}}, the 
> {{namenodeUri}} is always parsed from the {{-namenode}} argument. This works just 
> fine if the {{-op}} parameter is not {{all}}, as the single benchmark will 
> need to parse the {{namenodeUri}} from args anyway.
> When the {{-op}} is {{all}}, namely when all sub-benchmarks will run, multiple 
> sub-benchmarks will call the {{verifyOpArgument()}} method. In this case, the 
> first sub-benchmark reads the {{namenode}} argument and removes it from args. 
> The other sub-benchmarks will thereafter read a {{null}} value since the 
> argument is removed. This contradicts the intention of providing {{namenode}} 
> for all sub-benchmarks.
> {code:title=current code}
>   try {
> namenodeUri = StringUtils.popOptionWithArgument("-namenode", args);
>   } catch (IllegalArgumentException iae) {
> printUsage();
>   }
> {code}
> The fix is to parse the {{namenodeUri}}, which is shared by all 
> sub-benchmarks, from {{-namenode}} argument only once. This follows the 
> convention of parsing other global arguments in 
> {{OperationStatsBase#verifyOpArgument()}}.
> {code:title=simple fix}
>   if (args.indexOf("-namenode") >= 0) {
> try {
>   namenodeUri = StringUtils.popOptionWithArgument("-namenode", args);
> } catch (IllegalArgumentException iae) {
>   printUsage();
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9387) Parse namenodeUri parameter only once in NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()

2015-11-12 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15003294#comment-15003294
 ] 

Mingliang Liu commented on HDFS-9387:
-

Yes, I agree with you. Fixing it would provide obvious value, but I see no easy 
way. I'd block it in [HDFS-9421] if there is no more input from others.
{code}
- if (runAll || ReplicationStats.OP_REPLICATION_NAME.equals(type)) {
+ if ((runAll && namenodeUri == null) || 
ReplicationStats.OP_REPLICATION_NAME.equals(type)) {
{code}

> Parse namenodeUri parameter only once in 
> NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()
> 
>
> Key: HDFS-9387
> URL: https://issues.apache.org/jira/browse/HDFS-9387
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9387.000.patch
>
>
> In {{NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()}}, the 
> {{namenodeUri}} is always parsed from the {{-namenode}} argument. This works just 
> fine if the {{-op}} parameter is not {{all}}, as the single benchmark will 
> need to parse the {{namenodeUri}} from args anyway.
> When the {{-op}} is {{all}}, namely when all sub-benchmarks will run, multiple 
> sub-benchmarks will call the {{verifyOpArgument()}} method. In this case, the 
> first sub-benchmark reads the {{namenode}} argument and removes it from args. 
> The other sub-benchmarks will thereafter read a {{null}} value since the 
> argument is removed. This contradicts the intention of providing {{namenode}} 
> for all sub-benchmarks.
> {code:title=current code}
>   try {
> namenodeUri = StringUtils.popOptionWithArgument("-namenode", args);
>   } catch (IllegalArgumentException iae) {
> printUsage();
>   }
> {code}
> The fix is to parse the {{namenodeUri}}, which is shared by all 
> sub-benchmarks, from {{-namenode}} argument only once. This follows the 
> convention of parsing other global arguments in 
> {{OperationStatsBase#verifyOpArgument()}}.
> {code:title=simple fix}
>   if (args.indexOf("-namenode") >= 0) {
> try {
>   namenodeUri = StringUtils.popOptionWithArgument("-namenode", args);
> } catch (IllegalArgumentException iae) {
>   printUsage();
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9408) extend the build system to produce static and dynamic libhdfspp libs

2015-11-12 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15003087#comment-15003087
 ] 

Haohui Mai commented on HDFS-9408:
--

{code}
+IF(UNIX)
+  # linking a shared library from static ones requires --whole-archive
+  SET(LIBHDFSPP_SUBLIBS -Wl,--whole-archive ${LIBHDFSPP_SUBLIBS} 
-Wl,--no-whole-archive)
+ENDIF(UNIX)
+
{code}

Tested on MacOS X. It breaks the Mac OS build :-(
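
(Presumably the break is because {{IF(UNIX)}} is also true on OS X while ld64 
does not understand {{--whole-archive}}; the rough equivalents there are 
{{-all_load}} / {{-force_load}}, so the flag would need a Darwin-specific 
branch. This is my reading, not confirmed from the build logs.)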

> extend the build system to produce static and dynamic libhdfspp libs
> 
>
> Key: HDFS-9408
> URL: https://issues.apache.org/jira/browse/HDFS-9408
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Stephen
>Assignee: Stephen
> Fix For: HDFS-8707
>
> Attachments: HDFS-9408.HDFS-8707.001.patch, 
> HDFS-9408.HDFS-8707.002.patch, HDFS-9408.HDFS-8707.003.patch, 
> HDFS-9408.HDFS-8707.004.patch, HDFS-9408.HDFS-8707.005.patch
>
>
> Generate static and dynamic libhdfspp libraries for use by other applications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9420) DiskBalancer : Add DataModels

2015-11-12 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-9420:
---
Attachment: HDFS-9420.001.patch

> DiskBalancer : Add DataModels
> -
>
> Key: HDFS-9420
> URL: https://issues.apache.org/jira/browse/HDFS-9420
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-9420.001.patch
>
>
> Adds the Data Model classes needed for Disk Balancer. These classes allow us to 
> persist the physical model of the cluster. This is needed by the other parts 
> of Disk Balancer, such as the Planner and the Executor engine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-9117) Config file reader / options classes for libhdfs++

2015-11-12 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15002991#comment-15002991
 ] 

Haohui Mai edited comment on HDFS-9117 at 11/12/15 10:35 PM:
-

Thanks for splitting it up.

{code}
+bool Configuration::AddResourceStream(std::istream & stream)
+{
+  stream.seekg(0,std::ios::end);
+  std::streampos length = stream.tellg();
+  stream.seekg(0,std::ios::beg);
...
{code}

It's not streaming per se but reads the whole stream. In the first cut maybe we 
can remove it from the API and keep the version that only takes a string?

{code}
...
+  public:
+// Constructs a configuration with no search path and no resources loaded
+Configuration();
+
+// clears out search path and all loaded values
+void Clear();
+
+// Loads resources from a file or a stream
+bool AddResourceStream(std::istream & stream);
+bool AddResourceString(const std::string & stream);
{code}

One messy problem we have on the Java side is implementing thread safety for 
the configuration class. I suggest making Configuration an immutable object, 
tweaking the APIs as follows to eliminate the problem of thread safety by 
design:

{code}
/**
 * A Configuration object is an immutable object that holds the configuration 
values specified from the XML.
 * ...
 **/
class Configuration {
public:
  static Configuration *Load(const std::string &);
  // This can be implemented in a separate jira.
  static Configuration *LoadFromFile(...);
  // Immutable object. No add / clear methods.
};
{code}

{code}
+std::string GetWithDefault(const std::string & key, 
const std::string & defaultValue);
+optional<std::string> Get(const std::string & key);
+int64_t GetIntWithDefault (const std::string & key, 
int64_t defaultValue);
+optional<int64_t> GetInt(const std::string & key)
...
{code}

Instead of having calls for every type, I suggest adopting the APIs of boost's 
property tree, which exposes a single {{get}} method with a template argument:

{code}
template<class T>
optional<T> get(const string& key) const;
{code}

That gives a minimal exposure from the API perspective.

{code}

+// Transparent data holder for property values
+struct ConfigData {
+  std::string value;
+  bool final;
+  ConfigData() : final(false) {};
+  ConfigData(const std::string &value) : value(value), final(false) {}
+  void operator = (const std::string & newValue) {value = newValue; final 
= false;}
+};
+std::map<std::string, ConfigData> raw_values;

{code}

It's preferable to move the details of implementing {{final}} into the {{.cc}} 
file.

{code}
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/configuration_test.h
{code}

Do you want to merge the {{.h}} file with the {{.cc}} file?

There are several styling issues w.r.t. HDFS-9328, but it's okay to fix them 
in the next iteration. 



was (Author: wheat9):
Thanks for splitting it up.

{code}
+bool Configuration::AddResourceStream(std::istream & stream)
+{
+  stream.seekg(0,std::ios::end);
+  std::streampos length = stream.tellg();
+  stream.seekg(0,std::ios::beg);
...
{code}

It's not streaming per se but reads the whole stream. In the first cut maybe we 
can remove it from the API and keep the version that only takes a string?

{code}
...
+  public:
+// Constructs a configuration with no search path and no resources loaded
+Configuration();
+
+// clears out search path and all loaded values
+void Clear();
+
+// Loads resources from a file or a stream
+bool AddResourceStream(std::istream & stream);
+bool AddResourceString(const std::string & stream);
{code}

One messy problem we have on the Java side is implementing thread safety for 
the configuration class. I suggest making Configuration an immutable object, 
tweaking the APIs as follows to eliminate the problem of thread safety by 
design:

{code}
/**
 * A Configuration object is an immutable object that holds the configuration 
values specified from the XML.
 * ...
 **/
class Configuration {
public:
  static Configuration *Load(const std::string &);
  // This can be implemented in a separate jira.
  static Configuration *LoadFromFile(...);
  // Immutable object. No add / clear methods.
};
{code}

{code}
+std::string GetWithDefault(const std::string & key, 
const std::string & defaultValue);
+optional<std::string> Get(const std::string & key);
+int64_t GetIntWithDefault(const std::string & key, 
int64_t defaultValue);
+optional<int64_t> GetInt(const std::string & key)
...
{code}

Instead of having calls for every type, I suggest adopting the APIs of boost's 
property tree, which exposes a single {{get}} method with a template argument:

{code}
template<class T>
optional<T> get(const string& 

[jira] [Commented] (HDFS-9387) Parse namenodeUri parameter only once in NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()

2015-11-12 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15003235#comment-15003235
 ] 

Xiaoyu Yao commented on HDFS-9387:
--

Hit a different NPE issue during my validation of this one and filed HDFS-9421. 
[~liuml07], do you want to fix it?

> Parse namenodeUri parameter only once in 
> NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()
> 
>
> Key: HDFS-9387
> URL: https://issues.apache.org/jira/browse/HDFS-9387
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9387.000.patch
>
>
> In {{NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()}}, the 
> {{namenodeUri}} is always parsed from the {{-namenode}} argument. This works just 
> fine if the {{-op}} parameter is not {{all}}, as the single benchmark will 
> need to parse the {{namenodeUri}} from args anyway.
> When the {{-op}} is {{all}}, namely when all sub-benchmarks will run, multiple 
> sub-benchmarks will call the {{verifyOpArgument()}} method. In this case, the 
> first sub-benchmark reads the {{namenode}} argument and removes it from args. 
> The other sub-benchmarks will thereafter read a {{null}} value since the 
> argument is removed. This contradicts the intention of providing {{namenode}} 
> for all sub-benchmarks.
> {code:title=current code}
>   try {
> namenodeUri = StringUtils.popOptionWithArgument("-namenode", args);
>   } catch (IllegalArgumentException iae) {
> printUsage();
>   }
> {code}
> The fix is to parse the {{namenodeUri}}, which is shared by all 
> sub-benchmarks, from {{-namenode}} argument only once. This follows the 
> convention of parsing other global arguments in 
> {{OperationStatsBase#verifyOpArgument()}}.
> {code:title=simple fix}
>   if (args.indexOf("-namenode") >= 0) {
> try {
>   namenodeUri = StringUtils.popOptionWithArgument("-namenode", args);
> } catch (IllegalArgumentException iae) {
>   printUsage();
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9421) NNThroughputBenchmark replication test NPE with -namenode option

2015-11-12 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9421:
-
Description: 
Hit the following NPE while reviewing the fix for HDFS-9387 with manual tests, 
as NNThroughputBenchmark currently does not have JUnit tests. 
 
{code}
HW11217:centos6.4 xyao$ hadoop 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -op replication 
-namenode hdfs://HW11217.local:9000
15/11/12 14:52:03 INFO namenode.NNThroughputBenchmark: Starting benchmark: 
replication
15/11/12 14:52:03 ERROR namenode.NNThroughputBenchmark: 
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$ReplicationStats.generateInputs(NNThroughputBenchmark.java:1312)
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:280)
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1509)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1534)
...
{code}


However, the root cause is different from HDFS-9387.
From ReplicationStats#generateInputs, *nameNode* is uninitialized before use, 
which causes the NPE.

{code}
  final FSNamesystem namesystem = nameNode.getNamesystem();
{code}

From NNThroughputBenchmark#run, nameNode is only initialized when the -namenode 
option is not specified. The fix is to initialize it properly in the else 
block when the -namenode option is specified, OR we should block this if it is 
not supported.

{code}
 if (namenodeUri == null) {
nameNode = NameNode.createNameNode(argv, config);
NamenodeProtocols nnProtos = nameNode.getRpcServer();
nameNodeProto = nnProtos;
clientProto = nnProtos;
dataNodeProto = nnProtos;
refreshUserMappingsProto = nnProtos;
bpid = nameNode.getNamesystem().getBlockPoolId();
  } else {
FileSystem.setDefaultUri(getConf(), namenodeUri);
DistributedFileSystem dfs = (DistributedFileSystem)
FileSystem.get(getConf());
final URI nnUri = new URI(namenodeUri);
nameNodeProto = DFSTestUtil.getNamenodeProtocolProxy(config, nnUri,
UserGroupInformation.getCurrentUser());
 
{code}
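
A minimal sketch of the "block it" option (hypothetical fragment, not from a 
posted patch): fail fast in ReplicationStats#generateInputs when the benchmark 
is pointed at a remote namenode.

{code:title=sketch}
// Hypothetical guard at the top of ReplicationStats#generateInputs: the
// replication test needs the in-process NameNode, which is only created
// when -namenode was NOT supplied.
if (nameNode == null) {
  throw new IOException(
      "-op replication is not supported with the -namenode option");
}
final FSNamesystem namesystem = nameNode.getNamesystem();
{code}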



  was:
Hit the following NPE while reviewing the fix for HDFS-9387 with manual tests, 
as NNThroughputBenchmark currently does not have JUnit tests. 
 
{code}
HW11217:centos6.4 xyao$ hadoop 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -op replication 
-namenode hdfs://HW11217.local:9000
15/11/12 14:52:03 INFO namenode.NNThroughputBenchmark: Starting benchmark: 
replication
15/11/12 14:52:03 ERROR namenode.NNThroughputBenchmark: 
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$ReplicationStats.generateInputs(NNThroughputBenchmark.java:1312)
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:280)
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1509)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1534)
...
{code}


However, the root cause is different from HDFS-9387.  
From ReplicationStats#generateInputs, *nameNode* is uninitialized before use,
which causes the NPE.

{code}
  final FSNamesystem namesystem = nameNode.getNamesystem();
{code}

From NNThroughputBenchmark#run, nameNode is only initialized when -namenode
option is not specified. The fix is to initialize it properly in the else
block when -namenode option is specified.

{code}
 if (namenodeUri == null) {
nameNode = NameNode.createNameNode(argv, config);
NamenodeProtocols nnProtos = nameNode.getRpcServer();
nameNodeProto = nnProtos;
clientProto = nnProtos;
dataNodeProto = nnProtos;
refreshUserMappingsProto = nnProtos;
bpid = nameNode.getNamesystem().getBlockPoolId();
  } else {
FileSystem.setDefaultUri(getConf(), namenodeUri);
DistributedFileSystem dfs = (DistributedFileSystem)
FileSystem.get(getConf());
final URI nnUri = new URI(namenodeUri);
nameNodeProto = DFSTestUtil.getNamenodeProtocolProxy(config, nnUri,
UserGroupInformation.getCurrentUser());
 
{code}




> NNThroughputBenchmark replication test NPE with -namenode option
> 
>
> Key: HDFS-9421
> URL: https://issues.apache.org/jira/browse/HDFS-9421

[jira] [Commented] (HDFS-9410) Fix all HDFS unit tests to correctly reset to sysout and syserr

2015-11-12 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003343#comment-15003343
 ] 

Walter Su commented on HDFS-9410:
-

+1 pending jenkins.

> Fix all HDFS unit tests to correctly reset to sysout and syserr
> ---
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue
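
A minimal sketch of the defensive pattern implied here (assumed shape, not
the actual patch): capture the streams before anything that can fail, and
only restore what was actually captured.

{code}
import java.io.PrintStream;
import org.junit.After;
import org.junit.Before;

public class TestFoo {  // hypothetical test class for illustration
  private PrintStream originOut;
  private PrintStream originErr;

  @Before
  public void setUp() throws Exception {
    // Capture first, so a later setup failure cannot leave these unset.
    originOut = System.out;
    originErr = System.err;
    // ... cluster setup that may throw ...
  }

  @After
  public void tearDown() {
    // Never call System.setOut(null); restore only what was captured.
    if (originOut != null) {
      System.setOut(originOut);
    }
    if (originErr != null) {
      System.setErr(originErr);
    }
  }
}
{code}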



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9410) Fix all HDFS unit tests to correctly reset to sysout and syserr

2015-11-12 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9410:

Attachment: HDFS-9410.003.patch

> Fix all HDFS unit tests to correctly reset to sysout and syserr
> ---
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9419) Add Optional class to libhdfs++

2015-11-12 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003074#comment-15003074
 ] 

Haohui Mai commented on HDFS-9419:
--

+1 after fixing the whitespace and tabs.

> Add Optional class to libhdfs++
> ---
>
> Key: HDFS-9419
> URL: https://issues.apache.org/jira/browse/HDFS-9419
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9419.HDFS-8707.002.patch, HDFS-9419.HDFS-8707.diff
>
>
> Copy from 
> https://raw.githubusercontent.com/akrzemi1/Optional/master/optional.hpp



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8968) Erasure coding: a comprehensive I/O throughput benchmark tool

2015-11-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003125#comment-15003125
 ] 

Hudson commented on HDFS-8968:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2603 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2603/])
HDFS-8968. Erasure coding: a comprehensive I/O throughput benchmark (zhz: rev 
7b00c8e20ee62885097c5e63f110b9eece8ce6b3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ErasureCodeBenchmarkThroughput.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodeBenchmarkThroughput.java


> Erasure coding: a comprehensive I/O throughput benchmark tool
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Kai Zheng
>Assignee: Rui Li
> Fix For: 3.0.0
>
> Attachments: HDFS-8968-HDFS-7285.1.patch, 
> HDFS-8968-HDFS-7285.2.patch, HDFS-8968.3.patch, HDFS-8968.4.patch, 
> HDFS-8968.5.patch
>
>
> We need a new benchmark tool to measure the throughput of client writing and 
> reading considering cases or factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use and better to avoid unnecessary local 
> environment impact, like local disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8968) Erasure coding: a comprehensive I/O throughput benchmark tool

2015-11-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003149#comment-15003149
 ] 

Hudson commented on HDFS-8968:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #601 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/601/])
HDFS-8968. Erasure coding: a comprehensive I/O throughput benchmark (zhz: rev 
7b00c8e20ee62885097c5e63f110b9eece8ce6b3)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ErasureCodeBenchmarkThroughput.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodeBenchmarkThroughput.java


> Erasure coding: a comprehensive I/O throughput benchmark tool
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Kai Zheng
>Assignee: Rui Li
> Fix For: 3.0.0
>
> Attachments: HDFS-8968-HDFS-7285.1.patch, 
> HDFS-8968-HDFS-7285.2.patch, HDFS-8968.3.patch, HDFS-8968.4.patch, 
> HDFS-8968.5.patch
>
>
> We need a new benchmark tool to measure the throughput of client writing and 
> reading considering cases or factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use and better to avoid unnecessary local 
> environment impact, like local disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9239) DataNode Lifeline Protocol: an alternative protocol for reporting DataNode liveness

2015-11-12 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003219#comment-15003219
 ] 

Ming Ma commented on HDFS-9239:
---

Sorry for jumping into the discussion late. While we haven't seen any recent
issues caused by DNs incorrectly marked as dead, maybe this feature could
mitigate the replication storm problem, where incorrectly marked DNs cause
even more replication?

* It seems the introduction of a new RPC server is to work around the
existing RPC functionality, which only supports QoS based on user names.
Imagine if the RPC server could provide differentiated service based on
method names; then we could just add {{sendLifeline}} to the existing
{{DatanodeProtocol}} and have the same RPC server process the method call at
the highest priority (see the sketch after this list). Adding method-based
RPC QoS could also help other use cases, for example, prioritizing the
existing heartbeat over IBRs.
* Regarding the DN contention scenario that blocks it from sending
{{sendLifeline}} to the NN, we could skip all info such as storage reports.
But if the DN is already in such a state, maybe not sending {{sendLifeline}}
is what we want anyway.
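
A rough sketch of the method-name idea (purely illustrative; a
{{getMethodName}} accessor on {{Schedulable}} and the helper below do not
exist today and are assumptions):

{code}
// Illustrative only: a scheduler hook that bumps lifeline calls to the
// highest priority and falls back to user-based QoS for everything else.
public int getPriorityLevel(Schedulable call) {
  if ("sendLifeline".equals(call.getMethodName())) {  // assumed accessor
    return 0;  // treat 0 as the highest priority level in this sketch
  }
  return getPriorityLevelByUser(call.getUserGroupInformation());  // assumed
}
{code}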

> DataNode Lifeline Protocol: an alternative protocol for reporting DataNode 
> liveness
> ---
>
> Key: HDFS-9239
> URL: https://issues.apache.org/jira/browse/HDFS-9239
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: DataNode-Lifeline-Protocol.pdf, HDFS-9239.001.patch
>
>
> This issue proposes introduction of a new feature: the DataNode Lifeline 
> Protocol.  This is an RPC protocol that is responsible for reporting liveness 
> and basic health information about a DataNode to a NameNode.  Compared to the 
> existing heartbeat messages, it is lightweight and not prone to resource 
> contention problems that can harm accurate tracking of DataNode liveness 
> currently.  The attached design document contains more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9413) getContentSummary() on standby should throw StandbyException

2015-11-12 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003365#comment-15003365
 ] 

Ming Ma commented on HDFS-9413:
---

Thanks [~brahmareddy]. You bring up an interesting point: why is
getContentSummary allowed on the standby NN when stale reads are disabled?
{{TestQuotasWithHA}} actually calls {{HAUtil.setAllowStandbyReads(conf,
true);}} first.

* Is {{waitForLoadingFSImage}} needed given {{NameNodeRpcServer}} has checked 
via {{checkNNStartup}}?
* Maybe you can just modify {{TestQuotasWithHA}} instead for the test case.

Otherwise, it looks good.
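
For reference, a sketch of the usual read-path guard (assumed placement for
getContentSummary, following the common {{FSNamesystem}} idiom; not the
attached patch):

{code}
// The HA state check throws StandbyException on a standby NN unless standby
// reads are explicitly allowed; it is rechecked after taking the read lock.
checkOperation(OperationCategory.READ);
readLock();
try {
  checkOperation(OperationCategory.READ);
  // ... compute and return the content summary ...
} finally {
  readUnlock();
}
{code}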

> getContentSummary() on standby should throw StandbyException
> 
>
> Key: HDFS-9413
> URL: https://issues.apache.org/jira/browse/HDFS-9413
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-9413.patch
>
>
> Currently when we call getContentSummary() on standby it will not throw 
> StandbyException.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9419) Add Optional class to libhdfs++

2015-11-12 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003086#comment-15003086
 ] 

Haohui Mai commented on HDFS-9419:
--

Actually, since the whitespace changes are trivial, I'll just fix them at
commit time. Committing shortly.

> Add Optional class to libhdfs++
> ---
>
> Key: HDFS-9419
> URL: https://issues.apache.org/jira/browse/HDFS-9419
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9419.HDFS-8707.002.patch, HDFS-9419.HDFS-8707.diff
>
>
> Copy from 
> https://raw.githubusercontent.com/akrzemi1/Optional/master/optional.hpp



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9419) Import the optional library into libhdfs++

2015-11-12 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9419:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-8707
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~bobhansen] for the 
contribution.

> Import the optional library into libhdfs++
> --
>
> Key: HDFS-9419
> URL: https://issues.apache.org/jira/browse/HDFS-9419
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Fix For: HDFS-8707
>
> Attachments: HDFS-9419.HDFS-8707.002.patch, HDFS-9419.HDFS-8707.diff
>
>
> Copy from 
> https://raw.githubusercontent.com/akrzemi1/Optional/master/optional.hpp



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8968) Erasure coding: a comprehensive I/O throughput benchmark tool

2015-11-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003102#comment-15003102
 ] 

Hudson commented on HDFS-8968:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #663 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/663/])
HDFS-8968. Erasure coding: a comprehensive I/O throughput benchmark (zhz: rev 
7b00c8e20ee62885097c5e63f110b9eece8ce6b3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodeBenchmarkThroughput.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ErasureCodeBenchmarkThroughput.java


> Erasure coding: a comprehensive I/O throughput benchmark tool
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Kai Zheng
>Assignee: Rui Li
> Fix For: 3.0.0
>
> Attachments: HDFS-8968-HDFS-7285.1.patch, 
> HDFS-8968-HDFS-7285.2.patch, HDFS-8968.3.patch, HDFS-8968.4.patch, 
> HDFS-8968.5.patch
>
>
> We need a new benchmark tool to measure the throughput of client writing and 
> reading considering cases or factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use and better to avoid unnecessary local 
> environment impact, like local disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9420) DiskBalancer : Add DataModels

2015-11-12 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003339#comment-15003339
 ] 

Anu Engineer commented on HDFS-9420:


I have looked at the unit test failures and they are not related to this
patch. As for the asflicense failures, it looks like the check is scanning
files under build and test-generated output. I will file a follow-up JIRA
with Yetus.


> DiskBalancer : Add DataModels
> -
>
> Key: HDFS-9420
> URL: https://issues.apache.org/jira/browse/HDFS-9420
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-9420.001.patch
>
>
> Adds Data Model classes needed for Disk Balancer. These classes allow us to
> persist the physical model of the cluster. This is needed by the other
> parts of Disk Balancer, like the Planner and Executor engines.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-9421) NNThroughputBenchmark replication test NPE with -namenode option

2015-11-12 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HDFS-9421:
---

Assignee: Mingliang Liu

> NNThroughputBenchmark replication test NPE with -namenode option
> 
>
> Key: HDFS-9421
> URL: https://issues.apache.org/jira/browse/HDFS-9421
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: benchmarks
>Reporter: Xiaoyu Yao
>Assignee: Mingliang Liu
>
> Hit the following NPE when reviewing the fix for HDFS-9387 with manual
> tests, as NNThroughputBenchmark currently does not have JUnit tests.
>  
> {code}
> HW11217:centos6.4 xyao$ hadoop 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -op replication 
> -namenode hdfs://HW11217.local:9000
> 15/11/12 14:52:03 INFO namenode.NNThroughputBenchmark: Starting benchmark: 
> replication
> 15/11/12 14:52:03 ERROR namenode.NNThroughputBenchmark: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$ReplicationStats.generateInputs(NNThroughputBenchmark.java:1312)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:280)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1509)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1534)
> ...
> {code}
> However, the root cause is different from HDFS-9387.  
> From ReplicationStats#generateInputs, *nameNode* is uninitialized before use, 
> which causes the NPE.
> {code}
>   final FSNamesystem namesystem = nameNode.getNamesystem();
> {code}
> From NNThroughputBenchmark#run, nameNode is only initialized when -namenode 
> option is not specified. The fix is to initialize it properly in the else 
> block when -namenode option is specified OR we should block this if it is not 
> supported.
> {code}
>  if (namenodeUri == null) {
> nameNode = NameNode.createNameNode(argv, config);
> NamenodeProtocols nnProtos = nameNode.getRpcServer();
> nameNodeProto = nnProtos;
> clientProto = nnProtos;
> dataNodeProto = nnProtos;
> refreshUserMappingsProto = nnProtos;
> bpid = nameNode.getNamesystem().getBlockPoolId();
>   } else {
> FileSystem.setDefaultUri(getConf(), namenodeUri);
> DistributedFileSystem dfs = (DistributedFileSystem)
> FileSystem.get(getConf());
> final URI nnUri = new URI(namenodeUri);
> nameNodeProto = DFSTestUtil.getNamenodeProtocolProxy(config, nnUri,
> UserGroupInformation.getCurrentUser());
>  
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9420) DiskBalancer : Add DataModels

2015-11-12 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003349#comment-15003349
 ] 

Anu Engineer commented on HDFS-9420:


Yetus JIRA https://issues.apache.org/jira/browse/YETUS-181

> DiskBalancer : Add DataModels
> -
>
> Key: HDFS-9420
> URL: https://issues.apache.org/jira/browse/HDFS-9420
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Attachments: HDFS-9420.001.patch
>
>
> Adds Data Model classes needed for Disk Balancer. These classes allow us to
> persist the physical model of the cluster. This is needed by the other
> parts of Disk Balancer, like the Planner and Executor engines.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9410) Fix all HDFS unit tests to correctly reset to sysout and syserr

2015-11-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003368#comment-15003368
 ] 

Xiao Chen commented on HDFS-9410:
-

Seems [jenkins ran into some docker
problems|https://builds.apache.org/job/PreCommit-HDFS-Build/13492/console].
Reattaching to trigger a new run...

> Fix all HDFS unit tests to correctly reset to sysout and syserr
> ---
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9410) Fix all HDFS unit tests to correctly reset to sysout and syserr

2015-11-12 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9410:

Status: Open  (was: Patch Available)

> Fix all HDFS unit tests to correctly reset to sysout and syserr
> ---
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9410) Fix all HDFS unit tests to correctly reset to sysout and syserr

2015-11-12 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9410:

Status: Patch Available  (was: Open)

> Fix all HDFS unit tests to correctly reset to sysout and syserr
> ---
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9410) Fix all HDFS unit tests to correctly reset to sysout and syserr

2015-11-12 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-9410:

Attachment: (was: HDFS-9410.003.patch)

> Fix all HDFS unit tests to correctly reset to sysout and syserr
> ---
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9421) NNThroughputBenchmark replication test NPE with -namenode option

2015-11-12 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-9421:


 Summary: NNThroughputBenchmark replication test NPE with -namenode 
option
 Key: HDFS-9421
 URL: https://issues.apache.org/jira/browse/HDFS-9421
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiaoyu Yao


Hit the following NPE when reviewing the fix for HDFS-9387 with manual tests,
as NNThroughputBenchmark currently does not have JUnit tests.
 
{code}
HW11217:centos6.4 xyao$ hadoop 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -op replication 
-namenode hdfs://HW11217.local:9000
15/11/12 14:52:03 INFO namenode.NNThroughputBenchmark: Starting benchmark: 
replication
15/11/12 14:52:03 ERROR namenode.NNThroughputBenchmark: 
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$ReplicationStats.generateInputs(NNThroughputBenchmark.java:1312)
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:280)
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1509)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1534)
...
{code}


However, the root cause is different from HDFS-9387.  
From ReplicationStats#generateInputs, *nameNode* is uninitialized before use,
which causes the NPE.

{code}
  final FSNamesystem namesystem = nameNode.getNamesystem();
{code}

From NNThroughputBenchmark#run, nameNode is only initialized when -namenode
option is not specified. The fix is to initialize it properly in the else
block when -namenode option is specified.

{code}
 if (namenodeUri == null) {
nameNode = NameNode.createNameNode(argv, config);
NamenodeProtocols nnProtos = nameNode.getRpcServer();
nameNodeProto = nnProtos;
clientProto = nnProtos;
dataNodeProto = nnProtos;
refreshUserMappingsProto = nnProtos;
bpid = nameNode.getNamesystem().getBlockPoolId();
  } else {
FileSystem.setDefaultUri(getConf(), namenodeUri);
DistributedFileSystem dfs = (DistributedFileSystem)
FileSystem.get(getConf());
final URI nnUri = new URI(namenodeUri);
nameNodeProto = DFSTestUtil.getNamenodeProtocolProxy(config, nnUri,
UserGroupInformation.getCurrentUser());
 
{code}





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8968) New benchmark throughput tool for striping erasure coding

2015-11-12 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HDFS-8968:
-
Attachment: HDFS-8968.5.patch

Updated the patch to avoid generating random buffers each time before writing
to a file, since buffer generation can become a hot spot and interfere with
the results.
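
A minimal sketch of that change (illustrative; class and method names are
assumed):

{code}
import java.io.IOException;
import java.io.OutputStream;
import java.util.Random;

class BenchWriter {  // hypothetical holder class for illustration
  // Fill one buffer up front so random-data generation stays out of the
  // timed write path.
  private static final byte[] DATA = new byte[1024 * 1024];
  static {
    new Random(0).nextBytes(DATA);  // deterministic, generated once
  }

  static void writeFile(OutputStream out, long bytes) throws IOException {
    for (long written = 0; written < bytes; written += DATA.length) {
      out.write(DATA, 0, (int) Math.min(DATA.length, bytes - written));
    }
  }
}
{code}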

> New benchmark throughput tool for striping erasure coding
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Rui Li
> Attachments: HDFS-8968-HDFS-7285.1.patch, 
> HDFS-8968-HDFS-7285.2.patch, HDFS-8968.3.patch, HDFS-8968.4.patch, 
> HDFS-8968.5.patch
>
>
> We need a new benchmark tool to measure the throughput of client writing and 
> reading considering cases or factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use and better to avoid unnecessary local 
> environment impact, like local disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7663) Erasure Coding: Append on striped file

2015-11-12 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7663:

Affects Version/s: 3.0.0
 Target Version/s: 3.0.0
   Status: Patch Available  (was: In Progress)

> Erasure Coding: Append on striped file
> --
>
> Key: HDFS-7663
> URL: https://issues.apache.org/jira/browse/HDFS-7663
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Walter Su
> Attachments: HDFS-7663.00.txt, HDFS-7663.01.patch
>
>
> Append should be easy if we have variable length block support from 
> HDFS-3689, i.e., the new data will be appended to a new block. We need to 
> revisit whether and how to support appending data to the original last block.
> 1. Append to a closed striped file, with NEW_BLOCK flag enabled (this)
> 2. Append to an under-construction striped file, with NEW_BLOCK flag enabled
> (HDFS-9173)
> 3. Append to a striped file, by appending to the last block group (follow-on)
> This jira attempts to implement #1, and also tracks #2 and #3.
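
For case #1, client-side usage would presumably look like the following (a
sketch assuming the EnumSet-based append API from variable-length block
support; not part of the attached patch):

{code}
// The NEW_BLOCK flag forces appended data into a fresh block (group)
// instead of the original last block.
DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
try (FSDataOutputStream out = dfs.append(path,
    EnumSet.of(CreateFlag.APPEND, CreateFlag.NEW_BLOCK), 4096, null)) {
  out.write(data);
}
{code}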



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9422) Unnecessary FsDatasetImpl locking in DirectoryScanner cause periodic datanode pauses

2015-11-12 Thread He Liangliang (JIRA)
He Liangliang created HDFS-9422:
---

 Summary: Unnecessary FsDatasetImpl locking in DirectoryScanner 
cause periodic datanode pauses
 Key: HDFS-9422
 URL: https://issues.apache.org/jira/browse/HDFS-9422
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: He Liangliang
Assignee: He Liangliang


In DirectoryScanner, the scan call holds the global dataset lock for quite a
long time (typically > 5 secs for 500k blocks). This stalls clients that need
to acquire the lock. For applications like HBase, this affects latency. In
fact, this lock is unnecessary.
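
A sketch of the direction this suggests (illustrative; the names are
simplified from the actual DirectoryScanner code):

{code}
// Build the on-disk report without holding the dataset lock, then take the
// lock only for the short in-memory diff against the block map.
Map<String, ScanInfo[]> diskReport = getDiskReport();  // no dataset lock held
synchronized (dataset) {
  reconcile(diskReport);  // short critical section
}
{code}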



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7984) webhdfs:// needs to support provided delegation tokens

2015-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003608#comment-15003608
 ] 

Hadoop QA commented on HDFS-7984:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 44s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 42s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 7s 
{color} | {color:red} Patch generated 1 new checkstyle issues in root (total 
was 467, now 464). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 58s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 46s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 20s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 0s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s 
{color} | {color:red} Patch 

[jira] [Commented] (HDFS-9410) Some tests should always reset sysout and syserr

2015-11-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003619#comment-15003619
 ] 

Hudson commented on HDFS-9410:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8801 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8801/])
HDFS-9410. Some tests should always reset sysout and syserr. Contributed 
(waltersu4549: rev cccf88480b0df71f15fb36e9693d492c9e16c685)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdminWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetadataVersionOutput.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestTools.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotRename.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestClusterId.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTraceAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestJMXGet.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFsShellPermission.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java


> Some tests should always reset sysout and syserr
> 
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9410) Some tests should always reset sysout and syserr

2015-11-12 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9410:

  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2. Thanks [~xiaochen] for contribution.

> Some tests should always reset sysout and syserr
> 
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9338) in webhdfs Null pointer exception will be thrown when xattrname is not given

2015-11-12 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9338:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

Let's fix the same issues together in HDFS-9337.
Hi, [~jagadesh.kiran], do you mind merging this into HDFS-9337?

> in webhdfs Null pointer exception will be thrown when xattrname is not given
> 
>
> Key: HDFS-9338
> URL: https://issues.apache.org/jira/browse/HDFS-9338
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9338-00.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/?op=REMOVEXATTR&xattr.name="
> or
>  curl -i -X PUT "http://10.19.92.127:50070/webhdfs/v1/kiran/?op=REMOVEXATTR"
> {code}
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":"XAttr
>  name cannot be null."}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9410) Some tests should always reset sysout and syserr

2015-11-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003661#comment-15003661
 ] 

Hudson commented on HDFS-9410:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #664 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/664/])
HDFS-9410. Some tests should always reset sysout and syserr. Contributed 
(waltersu4549: rev cccf88480b0df71f15fb36e9693d492c9e16c685)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestClusterId.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestTools.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestJMXGet.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdminWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotRename.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTraceAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetadataVersionOutput.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFsShellPermission.java


> Some tests should always reset sysout and syserr
> 
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9410) Some tests should always reset sysout and syserr

2015-11-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003673#comment-15003673
 ] 

Hudson commented on HDFS-9410:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #676 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/676/])
HDFS-9410. Some tests should always reset sysout and syserr. Contributed 
(waltersu4549: rev cccf88480b0df71f15fb36e9693d492c9e16c685)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetadataVersionOutput.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestClusterId.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFsShellPermission.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTraceAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestTools.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotRename.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestJMXGet.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdminWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java


> Some tests should always reset sysout and syserr
> 
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9410) Some tests should always reset sysout and syserr

2015-11-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003630#comment-15003630
 ] 

Xiao Chen commented on HDFS-9410:
-

Thanks Walter for the review and commit.

> Some tests should always reset sysout and syserr
> 
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9410) Some tests should always reset sysout and syserr

2015-11-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003639#comment-15003639
 ] 

Hudson commented on HDFS-9410:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1400 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1400/])
HDFS-9410. Some tests should always reset sysout and syserr. Contributed 
(waltersu4549: rev cccf88480b0df71f15fb36e9693d492c9e16c685)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestJMXGet.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdminWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFsShellPermission.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestClusterId.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTraceAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetadataVersionOutput.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotRename.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestTools.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java


> Some tests should always reset sysout and syserr
> 
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7663) Erasure Coding: Append on striped file

2015-11-12 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7663:

Attachment: HDFS-7663.01.patch

> Erasure Coding: Append on striped file
> --
>
> Key: HDFS-7663
> URL: https://issues.apache.org/jira/browse/HDFS-7663
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Walter Su
> Attachments: HDFS-7663.00.txt, HDFS-7663.01.patch
>
>
> Append should be easy if we have variable length block support from 
> HDFS-3689, i.e., the new data will be appended to a new block. We need to 
> revisit whether and how to support appending data to the original last block.
> 1. Append to a closed striped file, with NEW_BLOCK flag enabled (this)
> 2. Append to an under-construction striped file, with NEW_BLOCK flag enabled
> (HDFS-9173)
> 3. Append to a striped file, by appending to the last block group (follow-on)
> This jira attempts to implement #1, and also tracks #2 and #3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9410) Some tests should always reset sysout and syserr

2015-11-12 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9410:

Summary: Some tests should always reset sysout and syserr  (was: Fix all 
HDFS unit tests to correctly reset to sysout and syserr)

> Some tests should always reset sysout and syserr
> 
>
> Key: HDFS-9410
> URL: https://issues.apache.org/jira/browse/HDFS-9410
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: HDFS
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-9410.001.patch, HDFS-9410.002.patch, 
> HDFS-9410.003.patch
>
>
> Originally found in {{TestDFSAdminWithHA#tearDown}} where System.out.flush() 
> throws an NPE. 
> The cause is that if for any reason {{setUpHaCluster}} fails before storing 
> {{System.out}} and {{System.err}} as member variables {{originOut}} and 
> {{originErr}}, then in {{tearDown}} {{System.out}} and {{System.err}} are set 
> to null. This could cause all following tests to fail when calling {{flush}}.
> This jira tries to fix all similar occurrences of this issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9333) Some tests using MiniDFSCluster errored complaining port in use

2015-11-12 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-9333:
---
Attachment: HDFS-9333.002.patch

I attached 002, which uses {{ServerSocketUtil#getPort}} to get a random port
number. Though there is a small chance that the given port gets taken before
we bind to it, this approach is simpler and avoids adding a fix to the real
class.
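
Usage would presumably look like the following (a sketch assuming
{{ServerSocketUtil#getPort(port, retries)}} and the standard config key):

{code}
// Probe for a free port near the preferred one, accepting the small window
// between probing and binding mentioned above.
int port = ServerSocketUtil.getPort(10021, 10);  // preferred port, 10 retries
conf.set(DFSConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY, "localhost:" + port);
{code}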

> Some tests using MiniDFSCluster errored complaining port in use
> ---
>
> Key: HDFS-9333
> URL: https://issues.apache.org/jira/browse/HDFS-9333
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Kai Zheng
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HDFS-9333.001.patch, HDFS-9333.002.patch
>
>
> Ref. the following:
> {noformat}
> Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 30.483 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
> testRead(org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped)
>   Time elapsed: 11.021 sec  <<< ERROR!
> java.net.BindException: Port in use: localhost:49333
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at 
> org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:884)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:826)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:821)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:675)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:883)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:862)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1555)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2015)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1996)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS.doTestRead(TestBlockTokenWithDFS.java:539)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped.testRead(TestBlockTokenWithDFSStriped.java:62)
> {noformat}
> Another one:
> {noformat}
> Tests run: 5, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 9.859 sec <<< 
> FAILURE! - in org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
> testFailoverAndBackOnNNShutdown(org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController)
>   Time elapsed: 0.41 sec  <<< ERROR!
> java.net.BindException: Problem binding to [localhost:10021] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:469)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:695)
>   at org.apache.hadoop.ipc.Server.(Server.java:2464)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:399)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:742)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:680)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:883)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:862)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1555)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1245)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1014)
>   at 
> 

[jira] [Updated] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2015-11-12 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-9337:

Summary: Should check required params in WebHDFS to avoid NPE  (was: In 
webhdfs Nullpoint exception will be thrown in renamesnapshot when 
oldsnapshotname is not given)

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT=SNAPSHOTNAME;
> {code}
> A NullPointerException will be thrown:
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9337) Should check required params in WebHDFS to avoid NPE

2015-11-12 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003637#comment-15003637
 ] 

Walter Su commented on HDFS-9337:
-

Closed HDFS-9338 as a duplicate.
Updated the JIRA description and broadened the scope so the fix covers more 
general issues of the same type.

> Should check required params in WebHDFS to avoid NPE
> 
>
> Key: HDFS-9337
> URL: https://issues.apache.org/jira/browse/HDFS-9337
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jagadesh Kiran N
>Assignee: Jagadesh Kiran N
> Attachments: HDFS-9337_00.patch, HDFS-9337_01.patch, 
> HDFS-9337_02.patch
>
>
> {code}
>  curl -i -X PUT 
> "http://10.19.92.127:50070/webhdfs/v1/kiran/sreenu?op=RENAMESNAPSHOT=SNAPSHOTNAME;
> {code}
> A NullPointerException will be thrown:
> {code}
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7663) Erasure Coding: Append on striped file

2015-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003652#comment-15003652
 ] 

Hadoop QA commented on HDFS-7663:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 31s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s 
{color} | {color:red} Patch generated 3 new checkstyle issues in 
hadoop-hdfs-project (total was 31, now 34). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 13s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 6s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 206m 23s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | 

[jira] [Updated] (HDFS-9358) TestNodeCount#testNodeCount timed out

2015-11-12 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-9358:
---
Status: Patch Available  (was: Open)

> TestNodeCount#testNodeCount timed out
> -
>
> Key: HDFS-9358
> URL: https://issues.apache.org/jira/browse/HDFS-9358
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Masatake Iwasaki
> Attachments: HDFS-9358.001.patch
>
>
> I have seen this test failure occur a few times in trunk:
> Error Message
> Timeout: excess replica count not equal to 2 for block blk_1073741825_1001 
> after 2 msec.  Last counts: live = 2, excess = 0, corrupt = 0
> Stacktrace
> java.util.concurrent.TimeoutException: Timeout: excess replica count not 
> equal to 2 for block blk_1073741825_1001 after 2 msec.  Last counts: live 
> = 2, excess = 0, corrupt = 0
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:152)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:146)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.__CLR4_0_39bdgm666uf(TestNodeCount.java:130)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.testNodeCount(TestNodeCount.java:54)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9144) Refactor libhdfs into stateful/ephemeral objects

2015-11-12 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9144:
-
Status: Patch Available  (was: Open)

> Refactor libhdfs into stateful/ephemeral objects
> 
>
> Key: HDFS-9144
> URL: https://issues.apache.org/jira/browse/HDFS-9144
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9144.HDFS-8707.001.patch, 
> HDFS-9144.HDFS-8707.002.patch
>
>
> In discussion of other efforts, we decided that we should separate several 
> concerns:
> * A POSIX-like FileSystem/FileHandle object (stream-based, positional reads)
> * An ephemeral ReadOperation object that holds the state for 
> reads-in-progress, which consumes
> * An immutable FileInfo object which holds the block map and file size (and 
> other metadata about the file that we assume will not change over the life of 
> the file)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9358) TestNodeCount#testNodeCount timed out

2015-11-12 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-9358:
---
Attachment: HDFS-9358.001.patch

Thanks for reporting this, [~jojochuang].

testNodeCount expects the number of excess replicas to be increased to 2 via 
excessReplicateMap. (live, excess) can change in that case as
{noformat}
  (live, excess): (3, 1) -> (2, 2)
{noformat}

If invalidation of the existing excess replica is executed before 
excessReplicateMap is updated, the number of excess replicas never reaches 2:
{noformat}
  (live, excess): (3, 1) -> (3, 0) -> (2, 1)
{noformat}

The attached 001 patch fixes the test to wait for invalidation of the 1st 
excess replica and then check that the 2nd excess replica is detected.


> TestNodeCount#testNodeCount timed out
> -
>
> Key: HDFS-9358
> URL: https://issues.apache.org/jira/browse/HDFS-9358
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Masatake Iwasaki
> Attachments: HDFS-9358.001.patch
>
>
> I have seen this test failure occur a few times in trunk:
> Error Message
> Timeout: excess replica count not equal to 2 for block blk_1073741825_1001 
> after 2 msec.  Last counts: live = 2, excess = 0, corrupt = 0
> Stacktrace
> java.util.concurrent.TimeoutException: Timeout: excess replica count not 
> equal to 2 for block blk_1073741825_1001 after 2 msec.  Last counts: live 
> = 2, excess = 0, corrupt = 0
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:152)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:146)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.__CLR4_0_39bdgm666uf(TestNodeCount.java:130)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.testNodeCount(TestNodeCount.java:54)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9408) extend the build system to produce static and dynamic libhdfspp libs

2015-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002147#comment-15002147
 ] 

Hadoop QA commented on HDFS-9408:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
20s {color} | {color:green} HDFS-8707 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-hdfs-native-client in HDFS-8707 failed with JDK 
v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-hdfs-native-client in HDFS-8707 failed with JDK 
v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 13s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 13s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 13s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 13s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 12s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 13s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s 
{color} | {color:red} Patch generated 425 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 8m 14s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-12 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12771972/HDFS-9408.HDFS-8707.005.patch
 |
| JIRA Issue | HDFS-9408 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 10942c983ffd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-fa12328/precommit/personality/hadoop.sh
 |
| git revision | HDFS-8707 / 3ce4230 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13486/artifact/patchprocess/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_60.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13486/artifact/patchprocess/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_79.txt
 |
| compile | 

[jira] [Commented] (HDFS-9144) Refactor libhdfs into stateful/ephemeral objects

2015-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002148#comment-15002148
 ] 

Hadoop QA commented on HDFS-9144:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
32s {color} | {color:green} HDFS-8707 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 26s 
{color} | {color:red} root in HDFS-8707 failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 18s 
{color} | {color:red} root in HDFS-8707 failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 0s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
0s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 0s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 0s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
0s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 33s 
{color} | {color:red} root in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 1m 33s {color} | 
{color:red} root in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 33s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 29s 
{color} | {color:red} root in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 1m 29s {color} | 
{color:red} root in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 29s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
0s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 12 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 0s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 26s 
{color} | {color:red} Patch generated 426 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 57s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-12 |
| JIRA Issue | HDFS-9144 |
| GITHUB PR | https://github.com/apache/hadoop/pull/43 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux be7da899a6d3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build@2/patchprocess/apache-yetus-fa12328/precommit/personality/hadoop.sh
 |
| git revision | HDFS-8707 / 3ce4230 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13485/artifact/patchprocess/branch-compile-root-jdk1.8.0_60.txt
 |
| compile | 

[jira] [Updated] (HDFS-9117) Config file reader / options classes for libhdfs++

2015-11-12 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9117:
-
Attachment: HDFS-9117.HDFS-8707.014.patch

Moved to using std::experimental::optional, as submitted in HDFS-9419.

> Config file reader / options classes for libhdfs++
> --
>
> Key: HDFS-9117
> URL: https://issues.apache.org/jira/browse/HDFS-9117
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: HDFS-8707
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9117.HDFS-8707.001.patch, 
> HDFS-9117.HDFS-8707.002.patch, HDFS-9117.HDFS-8707.003.patch, 
> HDFS-9117.HDFS-8707.004.patch, HDFS-9117.HDFS-8707.005.patch, 
> HDFS-9117.HDFS-8707.006.patch, HDFS-9117.HDFS-8707.008.patch, 
> HDFS-9117.HDFS-8707.009.patch, HDFS-9117.HDFS-8707.010.patch, 
> HDFS-9117.HDFS-8707.011.patch, HDFS-9117.HDFS-8707.012.patch, 
> HDFS-9117.HDFS-8707.013.patch, HDFS-9117.HDFS-8707.014.patch, 
> HDFS-9117.HDFS-9288.007.patch
>
>
> For environmental compatibility with HDFS installations, libhdfs++ should be 
> able to read the configurations from Hadoop XML files and behave in line with 
> the Java implementation.
> Most notably, machine names and ports should be readable from Hadoop XML 
> configuration files.
> Similarly, an internal Options architecture for libhdfs++ should be developed 
> to efficiently transport the configuration information within the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9117) Config file reader / options classes for libhdfs++

2015-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002058#comment-15002058
 ] 

Hadoop QA commented on HDFS-9117:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
41s {color} | {color:green} HDFS-8707 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 17s 
{color} | {color:red} hadoop-hdfs-native-client in HDFS-8707 failed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 16s 
{color} | {color:red} hadoop-hdfs-native-client in HDFS-8707 failed with JDK 
v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 14s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 14s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-hdfs-native-client in the patch failed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 14s {color} | 
{color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 14s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 16s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 14s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 25s 
{color} | {color:red} Patch generated 425 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 9m 29s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.0 Server=1.7.0 
Image:test-patch-base-hadoop-date2015-11-12 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12771965/HDFS-9117.HDFS-8707.014.patch
 |
| JIRA Issue | HDFS-9117 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 13836cd82857 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/apache-yetus-fa12328/precommit/personality/hadoop.sh
 |
| git revision | HDFS-8707 / 3ce4230 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13483/artifact/patchprocess/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_66.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/13483/artifact/patchprocess/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_79.txt
 |
| compile | 

[jira] [Updated] (HDFS-9408) extend the build system to produce static and dynamic libhdfspp libs

2015-11-12 Thread Stephen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen updated HDFS-9408:
--
Attachment: HDFS-9408.HDFS-8707.005.patch

Thanks for the suggestions. Updated patch is attached.

{quote}It makes sense to play around with the order of the HDFSPP_SUBLIBS to 
avoid the linker flags hacks.{quote}
I added a comment for this as well. It isn't about the order of the libs; 
--whole-archive is required to link the static libraries into a shared one.

> extend the build system to produce static and dynamic libhdfspp libs
> 
>
> Key: HDFS-9408
> URL: https://issues.apache.org/jira/browse/HDFS-9408
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Stephen
>Assignee: Stephen
> Fix For: HDFS-8707
>
> Attachments: HDFS-9408.HDFS-8707.001.patch, 
> HDFS-9408.HDFS-8707.002.patch, HDFS-9408.HDFS-8707.003.patch, 
> HDFS-9408.HDFS-8707.004.patch, HDFS-9408.HDFS-8707.005.patch
>
>
> Generate static and dynamic libhdfspp libraries for use by other applications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8968) New benchmark throughput tool for striping erasure coding

2015-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002152#comment-15002152
 ] 

Hadoop QA commented on HDFS-8968:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 23s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 8s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s 
{color} | {color:red} Patch generated 58 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 148m 1s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.server.namenode.TestRecoverStripedBlocks |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.TestRenameWhileOpen |
| JDK v1.7.0_79 Failed junit tests | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 

[jira] [Updated] (HDFS-9368) Implement reads with implicit offset state in libhdfs++

2015-11-12 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9368:
--
Attachment: HDFS-9368.HDFS-8707.001.patch

New patch:
-Added bounds checks; it turns out InputStreamImpl already had everything needed.
-Added hdfsTell to the C API in addition to hdfsRead and hdfsSeek.
-Added a whence argument to FileHandle::seek to make seek act like POSIX.

> Implement reads with implicit offset state in libhdfs++
> ---
>
> Key: HDFS-9368
> URL: https://issues.apache.org/jira/browse/HDFS-9368
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-9368.HDFS-8707.000.patch, 
> HDFS-9368.HDFS-8707.001.patch
>
>
> Currently only positional reads are supported.  Implement a stateful read 
> that keeps track of the offset into the file.  Also expose it in the C 
> bindings as hdfsRead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8968) New benchmark throughput tool for striping erasure coding

2015-11-12 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002690#comment-15002690
 ] 

Zhe Zhang commented on HDFS-8968:
-

Great work Rui! The latest patch LGTM (thanks for adding the UT!). +1 and I 
will commit it to trunk shortly.

I see the following items that we can do as follow-ons:
*Tests to add*
# Pread mode can add support for reading from random file offsets
# We can test {{seek}} in non-positional read (a rough sketch follows after 
this list)
# We can test different buffer sizes in reading
# Similarly, small writes are also an important scenario to test
# Another important test case is a slow writer (e.g., Flume). We can add 
delays between write requests

*Code optimizations*
# The last paragraph ("For example xxx") of the Javadoc in 
{{ErasureCodeBenchmarkThroughput}} can be converted to a real example of 
commands to run
# In the Javadoc we should also specify the requirement on clusters. IIUC we 
need at least 9 DNs
# Maybe also add the logic to clear the OS page cache after each run? I think 
at least clearing the client page cache is doable in 
{{ErasureCodeBenchmarkThroughput}}. Clearing the DN page cache is harder.
# I imagine {{ErasureCodeBenchmarkThroughput}} will be used together with 
other scripts to automatically repeat tests multiple times, interpret results, 
etc. So {{TestErasureCodeBenchmarkThroughput}} can verify that it generates 
the expected outputs.

> New benchmark throughput tool for striping erasure coding
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Rui Li
> Attachments: HDFS-8968-HDFS-7285.1.patch, 
> HDFS-8968-HDFS-7285.2.patch, HDFS-8968.3.patch, HDFS-8968.4.patch, 
> HDFS-8968.5.patch
>
>
> We need a new benchmark tool to measure the throughput of client writing and 
> reading, considering the following cases and factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use and should avoid unnecessary local 
> environment impact, such as local disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8968) New benchmark throughput tool for striping erasure coding

2015-11-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8968:

Affects Version/s: 3.0.0
 Target Version/s: 3.0.0
  Component/s: test
   erasure-coding

> New benchmark throughput tool for striping erasure coding
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Kai Zheng
>Assignee: Rui Li
> Attachments: HDFS-8968-HDFS-7285.1.patch, 
> HDFS-8968-HDFS-7285.2.patch, HDFS-8968.3.patch, HDFS-8968.4.patch, 
> HDFS-8968.5.patch
>
>
> We need a new benchmark tool to measure the throughput of client writing and 
> reading, considering the following cases and factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use and should avoid unnecessary local 
> environment impact, such as local disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8968) Erasure coding: a comprehensive I/O throughput benchmark tool

2015-11-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8968:

Summary: Erasure coding: a comprehensive I/O throughput benchmark tool  
(was: New benchmark throughput tool for striping erasure coding)

> Erasure coding: a comprehensive I/O throughput benchmark tool
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Kai Zheng
>Assignee: Rui Li
> Attachments: HDFS-8968-HDFS-7285.1.patch, 
> HDFS-8968-HDFS-7285.2.patch, HDFS-8968.3.patch, HDFS-8968.4.patch, 
> HDFS-8968.5.patch
>
>
> We need a new benchmark tool to measure the throughput of client writing and 
> reading, considering the following cases and factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use and should avoid unnecessary local 
> environment impact, such as local disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8968) Erasure coding: a comprehensive I/O throughput benchmark tool

2015-11-12 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8968:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks Rui for the work and Kai/Rakesh for the reviews!

> Erasure coding: a comprehensive I/O throughput benchmark tool
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Kai Zheng
>Assignee: Rui Li
> Fix For: 3.0.0
>
> Attachments: HDFS-8968-HDFS-7285.1.patch, 
> HDFS-8968-HDFS-7285.2.patch, HDFS-8968.3.patch, HDFS-8968.4.patch, 
> HDFS-8968.5.patch
>
>
> We need a new benchmark tool to measure the throughput of client writing and 
> reading, considering the following cases and factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use and should avoid unnecessary local 
> environment impact, such as local disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7663) Erasure Coding: Append on striped file

2015-11-12 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7663:

Description: 
Append should be easy if we have variable length block support from HDFS-3689, 
i.e., the new data will be appended to a new block. We need to revisit whether 
and how to support appending data to the original last block.

1. Append to a closed striped file, with NEW_BLOCK flag enabled (this)
2. Append to an under-construction striped file, with NEW_BLOCK flag enabled 
(HDFS-9173)
3. Append to a striped file, by appending to the last block group (follow-on)

This jira attempts to implement #1, and also tracks #2 and #3.

  was:Append should be easy if we have variable length block support from 
HDFS-3689, i.e., the new data will be appended to a new block. We need to 
revisit whether and how to support appending data to the original last block.


> Erasure Coding: Append on striped file
> --
>
> Key: HDFS-7663
> URL: https://issues.apache.org/jira/browse/HDFS-7663
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Walter Su
> Attachments: HDFS-7663.00.txt
>
>
> Append should be easy if we have variable length block support from 
> HDFS-3689, i.e., the new data will be appended to a new block. We need to 
> revisit whether and how to support appending data to the original last block.
> 1. Append to a closed striped file, with NEW_BLOCK flag enabled (this)
> 2. Append to an under-construction striped file, with NEW_BLOCK flag enabled 
> (HDFS-9173)
> 3. Append to a striped file, by appending to the last block group (follow-on)
> This jira attempts to implement #1, and also tracks #2 and #3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9410) Fix all HDFS unit tests to correctly reset to sysout and syserr

2015-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003547#comment-15003547
 ] 

Hadoop QA commented on HDFS-9410:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 28s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 29s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 28s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 188m 26s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.TestSetrepIncreasing |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
|   | 

[jira] [Commented] (HDFS-9239) DataNode Lifeline Protocol: an alternative protocol for reporting DataNode liveness

2015-11-12 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002293#comment-15002293
 ] 

Kihwal Lee commented on HDFS-9239:
--

bq. NN heartbeat processing with a lockless + tryLock implementation would make 
it ideally suited for the existing client and/or service servers.
The NN should still enforce a max number of skips and guarantee that commands 
are sent in bounded time. Replication and block recovery are done through an 
asynchronous protocol, but oftentimes clients expect them to be done "soon". 

> DataNode Lifeline Protocol: an alternative protocol for reporting DataNode 
> liveness
> ---
>
> Key: HDFS-9239
> URL: https://issues.apache.org/jira/browse/HDFS-9239
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: DataNode-Lifeline-Protocol.pdf, HDFS-9239.001.patch
>
>
> This issue proposes introduction of a new feature: the DataNode Lifeline 
> Protocol.  This is an RPC protocol that is responsible for reporting liveness 
> and basic health information about a DataNode to a NameNode.  Compared to the 
> existing heartbeat messages, it is lightweight and not prone to resource 
> contention problems that can harm accurate tracking of DataNode liveness 
> currently.  The attached design document contains more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9419) Add Optional class to libhdfs++

2015-11-12 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002298#comment-15002298
 ] 

James Clampffer commented on HDFS-9419:
---

Not much to it.  Looks good to me.

+1

> Add Optional class to libhdfs++
> ---
>
> Key: HDFS-9419
> URL: https://issues.apache.org/jira/browse/HDFS-9419
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9419.HDFS-8707.diff
>
>
> Copy from 
> https://raw.githubusercontent.com/akrzemi1/Optional/master/optional.hpp



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9328) Formalize coding standards for libhdfs++ and put them in a README.txt

2015-11-12 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9328:
--
Attachment: HDFS-9328.HDFS-8707.004.patch

New patch:
-Added the ASF license as [~ste...@apache.org] suggested.  It's in a markup 
comment, to be more like the example [~wheat9] posted.
-Got rid of the ASF exclusion from the pom now that the header is back.
-Changed a lot of the formatting to look more like [~wheat9] wanted.

> Formalize coding standards for libhdfs++ and put them in a README.txt
> -
>
> Key: HDFS-9328
> URL: https://issues.apache.org/jira/browse/HDFS-9328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Blocker
> Attachments: HDFS-9328.HDFS-8707.000.patch, 
> HDFS-9328.HDFS-8707.001.patch, HDFS-9328.HDFS-8707.002.patch, 
> HDFS-9328.HDFS-8707.003.patch, HDFS-9328.HDFS-8707.004.patch
>
>
> We have 2-3 people working on this project full time and hopefully more 
> people will start contributing.  In order to scale efficiently we need a 
> single, easy-to-find place where developers can check to make sure they are 
> following the coding standards of this project, to both save their time and 
> save the time of people doing code reviews.
> The most practical place for this seems to be a README file in libhdfspp/. 
> The foundation of the standards is Google's C++ guide found here: 
> https://google-styleguide.googlecode.com/svn/trunk/cppguide.html
> Any exceptions to Google's standards or additional restrictions need to be 
> explicitly enumerated so there is one single point of reference for all 
> libhdfs++ code standards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9414) Moving reconfiguration protocol out of ClientDatanodeProtocol for reusability

2015-11-12 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9414:

Attachment: HDFS-9414.001.patch

Posted patch v001. Kindly review.

> Moving reconfiguration protocol out of ClientDatanodeProtocol for reusability
> -
>
> Key: HDFS-9414
> URL: https://issues.apache.org/jira/browse/HDFS-9414
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9414.001.patch
>
>
> Since the reconfiguration protocol is reused by both the DataNode and the 
> NameNode, this work proposes to move it out of ClientDatanodeProtocol into a 
> dedicated ReconfigurationProtocol. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9419) Add Optional class to libhdfs++

2015-11-12 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002546#comment-15002546
 ] 

Haohui Mai commented on HDFS-9419:
--

s/TR2/tr2 so that it looks consistent under both Windows and UNIX. +1 once 
addressed.

The RAT warnings can be addressed separately in HDFS-9417.



> Add Optional class to libhdfs++
> ---
>
> Key: HDFS-9419
> URL: https://issues.apache.org/jira/browse/HDFS-9419
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9419.HDFS-8707.diff
>
>
> Copy from 
> https://raw.githubusercontent.com/akrzemi1/Optional/master/optional.hpp



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9328) Formalize coding standards for libhdfs++

2015-11-12 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9328:
-
Summary: Formalize coding standards for libhdfs++  (was: Formalize coding 
standards for libhdfs++ and put them in a README.txt)

> Formalize coding standards for libhdfs++
> 
>
> Key: HDFS-9328
> URL: https://issues.apache.org/jira/browse/HDFS-9328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Blocker
> Attachments: HDFS-9328.HDFS-8707.000.patch, 
> HDFS-9328.HDFS-8707.001.patch, HDFS-9328.HDFS-8707.002.patch, 
> HDFS-9328.HDFS-8707.003.patch, HDFS-9328.HDFS-8707.004.patch
>
>
> We have 2-3 people working on this project full time and hopefully more 
> people will start contributing.  In order to scale efficiently we need a 
> single, easy-to-find place where developers can check to make sure they are 
> following the coding standards of this project, to both save their time and 
> save the time of people doing code reviews.
> The most practical place for this seems to be a README file in libhdfspp/. 
> The foundation of the standards is Google's C++ guide found here: 
> https://google-styleguide.googlecode.com/svn/trunk/cppguide.html
> Any exceptions to Google's standards or additional restrictions need to be 
> explicitly enumerated so there is one single point of reference for all 
> libhdfs++ code standards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9414) Moving reconfiguration protocol out of ClientDatanodeProtocol for reusability

2015-11-12 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9414:

Status: Patch Available  (was: Open)

> Moving reconfiguration protocol out of ClientDatanodeProtocol for reusability
> -
>
> Key: HDFS-9414
> URL: https://issues.apache.org/jira/browse/HDFS-9414
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9414.001.patch
>
>
> Since the reconfiguration protocol is reused by both the DataNode and the 
> NameNode, this work proposes to move it out of ClientDatanodeProtocol into a 
> dedicated ReconfigurationProtocol. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9328) Formalize coding standards for libhdfs++ and put them in a README.txt

2015-11-12 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002551#comment-15002551
 ] 

Haohui Mai commented on HDFS-9328:
--

+1. I'll commit it shortly.

> Formalize coding standards for libhdfs++ and put them in a README.txt
> -
>
> Key: HDFS-9328
> URL: https://issues.apache.org/jira/browse/HDFS-9328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Blocker
> Attachments: HDFS-9328.HDFS-8707.000.patch, 
> HDFS-9328.HDFS-8707.001.patch, HDFS-9328.HDFS-8707.002.patch, 
> HDFS-9328.HDFS-8707.003.patch, HDFS-9328.HDFS-8707.004.patch
>
>
> We have 2-3 people working on this project full time and hopefully more 
> people will start contributing. In order to scale efficiently we need a 
> single, easy-to-find place where developers can check to make sure they are 
> following the coding standards of this project, to both save their time and 
> save the time of people doing code reviews.
> The most practical place to do this seems to be a README file in libhdfspp/. 
> The foundation of the standards is Google's C++ style guide, found here: 
> https://google-styleguide.googlecode.com/svn/trunk/cppguide.html
> Any exceptions to Google's standards, or additional restrictions, need to be 
> explicitly enumerated so there is a single point of reference for all 
> libhdfs++ code standards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9328) Formalize coding standards for libhdfs++

2015-11-12 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9328:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-8707
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~James Clampffer] for 
the contribution.

> Formalize coding standards for libhdfs++
> 
>
> Key: HDFS-9328
> URL: https://issues.apache.org/jira/browse/HDFS-9328
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Blocker
> Fix For: HDFS-8707
>
> Attachments: HDFS-9328.HDFS-8707.000.patch, 
> HDFS-9328.HDFS-8707.001.patch, HDFS-9328.HDFS-8707.002.patch, 
> HDFS-9328.HDFS-8707.003.patch, HDFS-9328.HDFS-8707.004.patch
>
>
> We have 2-3 people working on this project full time and hopefully more 
> people will start contributing. In order to scale efficiently we need a 
> single, easy-to-find place where developers can check to make sure they are 
> following the coding standards of this project, to both save their time and 
> save the time of people doing code reviews.
> The most practical place to do this seems to be a README file in libhdfspp/. 
> The foundation of the standards is Google's C++ style guide, found here: 
> https://google-styleguide.googlecode.com/svn/trunk/cppguide.html
> Any exceptions to Google's standards, or additional restrictions, need to be 
> explicitly enumerated so there is a single point of reference for all 
> libhdfs++ code standards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-11-12 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002422#comment-15002422
 ] 

Haohui Mai commented on HDFS-7163:
--

Sure. Will look into it next week.

> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.7.003.patch, HDFS-7163.001.patch, HDFS-7163.002.patch, 
> HDFS-7163.003.patch, WebHDFS Read Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but reads are not. Therefore, if a 
> connection goes down while data is being read, the read will fail and will 
> have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.
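
A minimal sketch of the intended behavior under the configured policy. The 
loop below is illustrative only, not the attached patch; it simply shows a 
positional read being re-issued while a RetryPolicy permits it.

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.io.retry.RetryPolicy;
import org.apache.hadoop.io.retry.RetryPolicy.RetryAction;

class ReadRetrySketch {
  /** Re-issue a positional read until the policy says stop (sketch only). */
  static int readWithRetry(FSDataInputStream in, long pos, byte[] buf,
      RetryPolicy policy) throws Exception {
    for (int attempt = 0; ; attempt++) {
      try {
        return in.read(pos, buf, 0, buf.length);  // positional read
      } catch (IOException e) {
        RetryAction a = policy.shouldRetry(e, attempt, 0, true);
        if (a.action != RetryAction.RetryDecision.RETRY) {
          throw e;  // policy says give up (failover ignored in this sketch)
        }
        Thread.sleep(a.delayMillis);  // back off before the next attempt
      }
    }
  }
}
{code}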



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9358) TestNodeCount#testNodeCount timed out

2015-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002355#comment-15002355
 ] 

Hadoop QA commented on HDFS-9358:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 24s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 36s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 26s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 189m 55s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.hdfs.TestAclsEndToEnd |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | 

[jira] [Updated] (HDFS-7163) WebHdfsFileSystem should retry reads according to the configured retry policy.

2015-11-12 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-7163:
-
Attachment: HDFS-7163-branch-2.7.003.patch
HDFS-7163-branch-2.003.patch

As documented above, the unit test errors are not occurring for me in my local 
build environment.

Attaching branch-2 and branch-2.7 patches. Although I named them according to 
the naming convention documented 
[here|http://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch], the 
build will still try to apply them to trunk, so the corresponding HadoopQA 
message will indicate a build failure.

[~wheat9], [~daryn], can you please take a look at this patch? Thank you.

> WebHdfsFileSystem should retry reads according to the configured retry policy.
> --
>
> Key: HDFS-7163
> URL: https://issues.apache.org/jira/browse/HDFS-7163
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.5.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: HDFS-7163-branch-2.003.patch, 
> HDFS-7163-branch-2.7.003.patch, HDFS-7163.001.patch, HDFS-7163.002.patch, 
> HDFS-7163.003.patch, WebHDFS Read Retry.pdf
>
>
> In the current implementation of WebHdfsFileSystem, opens are retried 
> according to the configured retry policy, but reads are not. Therefore, if a 
> connection goes down while data is being read, the read will fail and will 
> have to be retried by the client code.
> Also, after a connection has been established, the next read (or seek/read) 
> will fail and the read will have to be restarted by the client code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8412) Fix the test failures in HTTPFS: In some tests setReplication called after fs close.

2015-11-12 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002396#comment-15002396
 ] 

Yongjun Zhang commented on HDFS-8412:
-

Hi [~umamaheswararao],

Thanks for your earlier work here. It seems this was only committed to trunk 
and not to branch-2; may I know whether there was any concern that kept it 
out? If not, I can cherry-pick it to branch-2. Thanks.



> Fix the test failures in HTTPFS: In some tests setReplication called after fs 
> close.
> 
>
> Key: HDFS-8412
> URL: https://issues.apache.org/jira/browse/HDFS-8412
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Fix For: 3.0.0
>
> Attachments: HDFS-8412-0.patch
>
>
> Currently 2 HTTPFS test cases are failing due to the filesystem open check 
> in fs operations.
> This is the JIRA to fix these failures.
> The failure occurs because the 
> test case closes the fs first and then performs an operation. Such a test 
> could pass earlier because the dfsClient was just contacting the NN 
> directly. But that closed client will not be useful for any other ops like 
> read/write. So, the usage should be corrected here, IMO.
> {code}
>  fs.close();
> fs.setReplication(path, (short) 2);
> {code}
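
For contrast, a minimal sketch of the corrected ordering the description 
argues for (illustrative only): perform the operation while the FileSystem is 
still open, then close it.

{code}
// Corrected ordering (sketch): operate first, close last.
fs.setReplication(path, (short) 2);
fs.close();
{code}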



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9408) extend the build system to produce static and dynamic libhdfspp libs

2015-11-12 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002302#comment-15002302
 ] 

James Clampffer commented on HDFS-9408:
---

Looks good to me.

+1

> extend the build system to produce static and dynamic libhdfspp libs
> 
>
> Key: HDFS-9408
> URL: https://issues.apache.org/jira/browse/HDFS-9408
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Stephen
>Assignee: Stephen
> Fix For: HDFS-8707
>
> Attachments: HDFS-9408.HDFS-8707.001.patch, 
> HDFS-9408.HDFS-8707.002.patch, HDFS-9408.HDFS-8707.003.patch, 
> HDFS-9408.HDFS-8707.004.patch, HDFS-9408.HDFS-8707.005.patch
>
>
> Generate static and dynamic libhdfspp libraries for use by other applications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8770) ReplicationMonitor thread received Runtime exception: NullPointerException when BlockManager.chooseExcessReplicates

2015-11-12 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15002599#comment-15002599
 ] 

Xiao Chen commented on HDFS-8770:
-

Hi [~aderen],
Thanks for reporting the issue and providing a patch. The fix makes sense to me.
Could you add a unit test to reproduce the scenario that you're trying to fix?

> ReplicationMonitor thread received Runtime exception: NullPointerException 
> when BlockManager.chooseExcessReplicates
> ---
>
> Key: HDFS-8770
> URL: https://issues.apache.org/jira/browse/HDFS-8770
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0, 2.7.0
>Reporter: ade
>Assignee: ade
>Priority: Critical
> Attachments: HDFS-8770_v1.patch
>
>
> The NameNode shut down when the ReplicationMonitor thread received a runtime exception:
> {quote}
> 2015-07-08 16:43:55,167 ERROR 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.adjustSetsWithChosenReplica(BlockPlacementPolicy.java:189)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseExcessReplicates(BlockManager.java:2911)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processOverReplicatedBlock(BlockManager.java:2849)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processMisReplicatedBlock(BlockManager.java:2780)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.rescanPostponedMisreplicatedBlocks(BlockManager.java:1931)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3628)
> at java.lang.Thread.run(Thread.java:744)
> {quote}
> We use hadoop-2.6.0 configured with heterogeneous storages and set the 
> One_SSD storage policy on some paths. When a block has excess replicas, e.g. 
> 2 SSD replicas on different racks (the exactlyOne set) and 2 DISK replicas 
> on the same rack (the moreThanOne set), 
> BlockPlacementPolicyDefault.chooseReplicaToDelete returns null because only 
> the moreThanOne set is searched for an SSD replica.
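
To make the failure mode concrete, here is a standalone, purely illustrative 
sketch of a null-safe fallback (search the exactlyOne set when the moreThanOne 
set has no replica of an excess storage type). Every name in it is an 
assumption for illustration, not the committed fix.

{code}
import java.util.Collection;
import java.util.List;

class ChooserSketch {
  enum StorageType { DISK, SSD }

  static class Replica {
    final String node;
    final StorageType type;
    Replica(String node, StorageType type) { this.node = node; this.type = type; }
  }

  /** Search moreThanOne first; fall back to exactlyOne instead of returning
   *  null when only the other set contains a replica of an excess type. */
  static Replica choose(Collection<Replica> moreThanOne,
      Collection<Replica> exactlyOne, List<StorageType> excessTypes) {
    Replica r = pick(moreThanOne, excessTypes);
    return (r != null) ? r : pick(exactlyOne, excessTypes);
  }

  private static Replica pick(Collection<Replica> candidates,
      List<StorageType> excessTypes) {
    for (Replica r : candidates) {
      if (excessTypes.contains(r.type)) {
        return r;  // first replica whose storage type is in excess
      }
    }
    return null;
  }
}
{code}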



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9419) Import the optional library into libhdfs++

2015-11-12 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-9419:
-
Summary: Import the optional library into libhdfs++  (was: Add Optional 
class to libhdfs++)

> Import the optional library into libhdfs++
> --
>
> Key: HDFS-9419
> URL: https://issues.apache.org/jira/browse/HDFS-9419
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Fix For: HDFS-8707
>
> Attachments: HDFS-9419.HDFS-8707.002.patch, HDFS-9419.HDFS-8707.diff
>
>
> Copy from 
> https://raw.githubusercontent.com/akrzemi1/Optional/master/optional.hpp



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9421) NNThroughputBenchmark replication test NPE with -namenode option

2015-11-12 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-9421:
-
Component/s: benchmarks

> NNThroughputBenchmark replication test NPE with -namenode option
> 
>
> Key: HDFS-9421
> URL: https://issues.apache.org/jira/browse/HDFS-9421
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: benchmarks
>Reporter: Xiaoyu Yao
>
> Hit the following NPE when reviewing the fix for HDFS-9387 with manual 
> tests, as NNThroughputBenchmark currently does not have JUnit tests. 
>  
> {code}
> HW11217:centos6.4 xyao$ hadoop 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -op replication 
> -namenode hdfs://HW11217.local:9000
> 15/11/12 14:52:03 INFO namenode.NNThroughputBenchmark: Starting benchmark: 
> replication
> 15/11/12 14:52:03 ERROR namenode.NNThroughputBenchmark: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$ReplicationStats.generateInputs(NNThroughputBenchmark.java:1312)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$OperationStatsBase.benchmark(NNThroughputBenchmark.java:280)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.run(NNThroughputBenchmark.java:1509)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.main(NNThroughputBenchmark.java:1534)
> ...
> {code}
> However, the root cause is different from HDFS-9387.
> In ReplicationStats#generateInputs, *nameNode* is used before it is 
> initialized, which causes the NPE.
> {code}
>   final FSNamesystem namesystem = nameNode.getNamesystem();
> {code}
> In NNThroughputBenchmark#run, nameNode is only initialized when the 
> -namenode option is not specified. The fix is to initialize it properly in 
> the else block when the -namenode option is specified.
> {code}
>  if (namenodeUri == null) {
> nameNode = NameNode.createNameNode(argv, config);
> NamenodeProtocols nnProtos = nameNode.getRpcServer();
> nameNodeProto = nnProtos;
> clientProto = nnProtos;
> dataNodeProto = nnProtos;
> refreshUserMappingsProto = nnProtos;
> bpid = nameNode.getNamesystem().getBlockPoolId();
>   } else {
> FileSystem.setDefaultUri(getConf(), namenodeUri);
> DistributedFileSystem dfs = (DistributedFileSystem)
> FileSystem.get(getConf());
> final URI nnUri = new URI(namenodeUri);
> nameNodeProto = DFSTestUtil.getNamenodeProtocolProxy(config, nnUri,
> UserGroupInformation.getCurrentUser());
>  
> {code}
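
One illustrative way to surface the problem earlier, shown as a hypothetical 
guard rather than the actual fix for this JIRA:

{code}
// Hypothetical guard at the top of ReplicationStats#generateInputs:
// fail with a clear message instead of an NPE when nameNode is null.
if (nameNode == null) {
  throw new IOException("-op replication requires an in-process NameNode; "
      + "it cannot run against an external NameNode given via -namenode");
}
final FSNamesystem namesystem = nameNode.getNamesystem();
{code}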



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9387) Parse namenodeUri parameter only once in NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()

2015-11-12 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003280#comment-15003280
 ] 

Mingliang Liu commented on HDFS-9387:
-

Thanks for your comment. It's a good catch. I realized that I did not include 
the {{replication}} operation in {{-op all}} when I tested it locally. My 
concern was that when {{-namenode}} is specified, the NameNode runs as a 
standalone process, which makes it hard, if at all possible, to get access to 
its internal state such as {{FSNamesystem}} and {{BlockManager}}.

Is it possible to run the {{replication}} test with the {{-namenode}} 
argument? Hm... if it's not, then {{-namenode}} and {{-op all}} conflict with 
each other.

> Parse namenodeUri parameter only once in 
> NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()
> 
>
> Key: HDFS-9387
> URL: https://issues.apache.org/jira/browse/HDFS-9387
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9387.000.patch
>
>
> In {{NNThroughputBenchmark$OperationStatsBase#verifyOpArgument()}}, the 
> {{namenodeUri}} is always parsed from the {{-namenode}} argument. This works 
> just fine if the {{-op}} parameter is not {{all}}, as the single benchmark 
> will need to parse the {{namenodeUri}} from args anyway.
> When {{-op}} is {{all}}, i.e. all sub-benchmarks run, multiple 
> sub-benchmarks call the {{verifyOpArgument()}} method. In this case, the 
> first sub-benchmark reads the {{namenode}} argument and removes it from args. 
> The other sub-benchmarks thereafter read a {{null}} value since the argument 
> has been removed. This contradicts the intention of providing {{namenode}} 
> for all sub-benchmarks.
> {code:title=current code}
>   try {
> namenodeUri = StringUtils.popOptionWithArgument("-namenode", args);
>   } catch (IllegalArgumentException iae) {
> printUsage();
>   }
> {code}
> The fix is to parse the {{namenodeUri}}, which is shared by all 
> sub-benchmarks, from the {{-namenode}} argument only once. This follows the 
> convention of parsing other global arguments in 
> {{OperationStatsBase#verifyOpArgument()}}.
> {code:title=simple fix}
>   if (args.indexOf("-namenode") >= 0) {
> try {
>   namenodeUri = StringUtils.popOptionWithArgument("-namenode", args);
> } catch (IllegalArgumentException iae) {
>   printUsage();
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8968) Erasure coding: a comprehensive I/O throughput benchmark tool

2015-11-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003312#comment-15003312
 ] 

Hudson commented on HDFS-8968:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2539 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2539/])
HDFS-8968. Erasure coding: a comprehensive I/O throughput benchmark (zhz: rev 
7b00c8e20ee62885097c5e63f110b9eece8ce6b3)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodeBenchmarkThroughput.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ErasureCodeBenchmarkThroughput.java


> Erasure coding: a comprehensive I/O throughput benchmark tool
> -
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Affects Versions: 3.0.0
>Reporter: Kai Zheng
>Assignee: Rui Li
> Fix For: 3.0.0
>
> Attachments: HDFS-8968-HDFS-7285.1.patch, 
> HDFS-8968-HDFS-7285.2.patch, HDFS-8968.3.patch, HDFS-8968.4.patch, 
> HDFS-8968.5.patch
>
>
> We need a new benchmark tool to measure client write and read throughput, 
> considering the following cases and factors:
> * 3-replica or striping;
> * write or read, stateful read or positional read;
> * which erasure coder;
> * striping cell size;
> * concurrent readers/writers using processes or threads.
> The tool should be easy to use and should avoid unnecessary local 
> environment impact, such as local disk I/O.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9420) DiskBalancer : Add DataModels

2015-11-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003325#comment-15003325
 ] 

Hadoop QA commented on HDFS-9420:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 7s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 24s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 7s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 144m 20s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 |
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   |