[jira] [Commented] (HADOOP-12962) KMS key names are incorrectly encoded when creating key

2016-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211421#comment-15211421
 ] 

Hadoop QA commented on HADOOP-12962:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 40s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-common-project/hadoop-kms: patch generated 1 new + 
6 unchanged - 0 fixed = 7 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 59s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 3s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 18s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795337/HADOOP-12962.01.patch 
|
| JIRA Issue | HADOOP-12962 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7509811ed95b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2e1d0ff |
| Default Java 

[jira] [Commented] (HADOOP-12378) Fix findbugs warnings in hadoop-tools module

2016-03-24 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211416#comment-15211416
 ] 

Akira AJISAKA commented on HADOOP-12378:


Hi [~ozawa], would you review the latest patch?

> Fix findbugs warnings in hadoop-tools module
> 
>
> Key: HADOOP-12378
> URL: https://issues.apache.org/jira/browse/HADOOP-12378
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HADOOP-12378.001.patch, HADOOP-12378.002.patch, 
> findbugsAnt.html, findbugsDatajoin.html
>
>
> There are 2 warnings in the hadoop-datajoin module and 4 warnings in the 
> hadoop-ant module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10965) Incorrect error message by fs -copyFromLocal

2016-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211415#comment-15211415
 ] 

Hadoop QA commented on HADOOP-10965:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 29s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 10s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 8s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 27s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795323/HADOOP-10965.002.patch
 |
| JIRA Issue | HADOOP-10965 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 88e6b95a1f09 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12958) PhantomReference for filesystem statistics can trigger OOM

2016-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211410#comment-15211410
 ] 

Hadoop QA commented on HADOOP-12958:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 24s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 21s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 57s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 30s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 20s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795324/HADOOP-12958.01.patch 
|
| JIRA Issue | HADOOP-12958 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6385ac1576f0 3.13.0-36-lowlatency 

[jira] [Updated] (HADOOP-11138) Stream yarn daemon and container logs through log4j

2016-03-24 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HADOOP-11138:
-
Assignee: Ying Zhang

> Stream yarn daemon and container logs through log4j
> ---
>
> Key: HADOOP-11138
> URL: https://issues.apache.org/jira/browse/HADOOP-11138
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.6.0
>Reporter: shreyas subramanya
>Assignee: Ying Zhang
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11138-1.patch, HADOOP-11138-trunk.patch, 
> HADOOP-11138.patch
>
>
> Resource manager, node manager, history server, application master and other 
> container syslogs can be streamed by configuring a log4j SocketAppender in 
> the root loggers.
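As a rough sketch of the configuration described above (the appender name, host, and port are placeholders, not taken from the attached patches), a log4j 1.x SocketAppender can be attached to the root logger alongside the usual file appender:

```properties
# Hypothetical log4j.properties fragment: stream daemon/container logs to a
# central collector via a SocketAppender on the root logger. RemoteHost and
# Port are illustrative values.
log4j.rootLogger=INFO,RFA,SOCKET
log4j.appender.SOCKET=org.apache.log4j.net.SocketAppender
log4j.appender.SOCKET.RemoteHost=log-collector.example.com
log4j.appender.SOCKET.Port=4712
log4j.appender.SOCKET.ReconnectionDelay=10000
```

The SocketAppender serializes logging events to the remote host, so the receiving side would need a matching log4j socket server to deserialize them.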



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12378) Fix findbugs warnings in hadoop-tools module

2016-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211385#comment-15211385
 ] 

Hadoop QA commented on HADOOP-12378:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 6m 38s 
{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 23s 
{color} | {color:red} hadoop-tools/hadoop-datajoin in trunk has 2 extant 
Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-tools/hadoop-ant in trunk has 4 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-tools: patch generated 3 new + 36 unchanged - 3 
fixed = 39 total (was 39) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} hadoop-tools/hadoop-datajoin generated 0 new + 0 
unchanged - 2 fixed = 0 total (was 2) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
31s {color} | {color:green} hadoop-tools/hadoop-ant generated 0 new + 0 
unchanged - 4 fixed = 0 total (was 4) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 19s 
{color} | {color:green} hadoop-datajoin in the patch passed with JDK v1.8.0_74. 

[jira] [Commented] (HADOOP-11393) Revert HADOOP_PREFIX, go back to HADOOP_HOME

2016-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211372#comment-15211372
 ] 

Hadoop QA commented on HADOOP-11393:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 3s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 8s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 
50s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site hadoop-tools/hadoop-pipes 
hadoop-mapreduce-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-tools/hadoop-datajoin in trunk has 2 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 11m 
29s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 54s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 7s 
{color} | {color:red} root: patch generated 4 new + 171 unchanged - 8 fixed = 
175 total (was 179) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
10s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site hadoop-tools/hadoop-pipes 
hadoop-mapreduce-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | 

[jira] [Commented] (HADOOP-12948) Maven profile startKdc is broken

2016-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211340#comment-15211340
 ] 

Hadoop QA commented on HADOOP-12948:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HADOOP-12948 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795296/HADOOP-12948.002.patch
 |
| JIRA Issue | HADOOP-12948 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8919/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Maven profile startKdc is broken
> 
>
> Key: HADOOP-12948
> URL: https://issues.apache.org/jira/browse/HADOOP-12948
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Mac OS
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12948.001.patch, HADOOP-12948.002.patch
>
>
> {noformat}
> mvn install -Dtest=TestUGIWithSecurityOn -DstartKdc=true
> main:
>  [exec] xargs: illegal option -- -
>  [exec] usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] 
> [-J replstr]
>  [exec]  [-L number] [-n number [-x]] [-P maxprocs] [-s size]
>  [exec]  [utility [argument ...]]
>  [exec] Result: 1
>   [get] Getting: 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>   [get] To: 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
>   [get] Error getting 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>  to 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 8.448 s
> [INFO] Finished at: 2016-03-21T10:00:56-07:00
> [INFO] Final Memory: 31M/439M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (kdc) on project 
> hadoop-common: An Ant BuildException has occured: 
> java.net.UnknownHostException: newverhost.com
> [ERROR] around Ant part ... dest="/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads"
>  skipexisting="true" verbose="true" 
> src="http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz"/>...
>  @ 7:244 in 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {noformat}
> I'm using a Mac, so part of the problem might be my operating system (even 
> though the pom.xml states that it supports Mac), but the major problem is that 
> it attempted to download apacheds from newverhost.com, which no longer seems 
> to exist.
> These tests were implemented in HADOOP-8078 and require -DstartKdc=true in 
> order to run.
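The `xargs: illegal option -- -` line in the log above indicates a GNU-style long option being passed to BSD xargs, which accepts only the short options listed in its usage message. As a hedged sketch of the portability issue (the exact flag used by the build script is not shown in the log), the POSIX short-option form behaves the same on both GNU and BSD implementations:

```shell
#!/bin/sh
# GNU-only long options such as `xargs --max-args=1` fail on macOS/BSD xargs
# with "illegal option -- -". The equivalent POSIX short-option form is
# portable across both implementations:
printf 'one\ntwo\n' | xargs -n 1 echo
```

Sticking to the short options defined by POSIX (`-n`, `-I`, `-s`, etc.) avoids this class of failure on Mac build hosts.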



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts

2016-03-24 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211322#comment-15211322
 ] 

Sunil G commented on HADOOP-12687:
--

Yes [~jiajia], these failures are still present. INFRA-11150 has been raised to 
change the YARN precommit build machine's hostname so that this issue can be 
fixed permanently. Until then, unfortunately, we will continue to see this 
error in YARN pre-commit builds. Committers are aware of this and will take it 
into account. 

> SecureUtil#getByName should also try to resolve direct hostname, incase 
> multiple loopback addresses are present in /etc/hosts
> -
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get 
> timeout which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} returns the first entry present in 
> {{/etc/hosts}}. Hence it is possible for the machine hostname to be second in 
> the list, causing an {{UnknownHostException}}.
> Suggesting a direct resolve for such hostname scenarios.
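For illustration, a minimal JDK-only sketch of the resolution difference described above ({{SecurityUtil#getByName}} itself involves more logic than this):

```java
import java.net.InetAddress;

public class LoopbackResolution {
    public static void main(String[] args) throws Exception {
        // getByName(null) is specified to return a loopback address; with
        // several loopback entries in /etc/hosts it may resolve to an entry
        // that does not correspond to the machine's actual hostname.
        InetAddress viaNull = InetAddress.getByName(null);

        // Resolving the hostname directly avoids depending on whichever
        // loopback entry happens to come first in /etc/hosts.
        InetAddress direct = InetAddress.getByName(
                InetAddress.getLocalHost().getHostName());

        System.out.println("getByName(null) -> " + viaNull.getHostAddress());
        System.out.println("direct resolve  -> " + direct.getHostAddress());
    }
}
```

The printed addresses depend on the machine's hosts configuration, which is exactly the nondeterminism the issue describes.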



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-12961) hadoop-datajoin has findbugs problems

2016-03-24 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HADOOP-12961.

Resolution: Duplicate

> hadoop-datajoin has findbugs problems
> -
>
> Key: HADOOP-12961
> URL: https://issues.apache.org/jira/browse/HADOOP-12961
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: 
> branch-findbugs-hadoop-tools_hadoop-datajoin-warnings.html
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12961) hadoop-datajoin has findbugs problems

2016-03-24 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211318#comment-15211318
 ] 

Akira AJISAKA commented on HADOOP-12961:


Thanks Allen for filing this issue, but it is a duplicate of HADOOP-12378. I'll 
close it.

> hadoop-datajoin has findbugs problems
> -
>
> Key: HADOOP-12961
> URL: https://issues.apache.org/jira/browse/HADOOP-12961
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: 
> branch-findbugs-hadoop-tools_hadoop-datajoin-warnings.html
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12958) PhantomReference for filesystem statistics can trigger OOM

2016-03-24 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211309#comment-15211309
 ] 

Sangjin Lee commented on HADOOP-12958:
--

[~jlowe], it would be great if you could apply the patch with the problem app 
and see if it fixes the issue. Thanks!

> PhantomReference for filesystem statistics can trigger OOM
> --
>
> Key: HADOOP-12958
> URL: https://issues.apache.org/jira/browse/HADOOP-12958
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.6.4
>Reporter: Jason Lowe
>Assignee: Sangjin Lee
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12958.01.patch
>
>
> I saw an OOM that appears to have been caused by the phantom references 
> introduced for file system statistics management.  I'll post details in a 
> followup comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12902) JavaDocs for SignerSecretProvider are out-of-date in AuthenticationFilter

2016-03-24 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211297#comment-15211297
 ] 

Akira AJISAKA commented on HADOOP-12902:


Thanks [~gliptak] for creating the patch.
{code}
  * [#PREFIX#.]signature.secret: when signer.secret.provider is set to
- * "string" or not specified, this is the value for the secret used to sign the
+ * "file" or not specified, this is the value for the secret used to sign the
{code}
Reading the source code, it seems that {{\[#PREFIX#.\]signature.secret}} is no 
longer valid. When "file" is specified, {{\[#PREFIX#.\]signature.secret.file}} 
is the file path from which the secret is loaded. Would you update the patch to 
document {{\[#PREFIX#.\]signature.secret.file}}?

> JavaDocs for SignerSecretProvider are out-of-date in AuthenticationFilter
> -
>
> Key: HADOOP-12902
> URL: https://issues.apache.org/jira/browse/HADOOP-12902
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Robert Kanter
>Assignee: Gabor Liptak
>  Labels: newbie
> Attachments: HADOOP-12902.1.patch
>
>
> The Javadocs in {{AuthenticationFilter}} say:
> {noformat}
>  * Out of the box it provides 3 signer secret provider implementations:
>  * "string", "random", and "zookeeper"
> {noformat}
> However, the "string" implementation is no longer available because 
> HADOOP-11748 moved it to be a test-only artifact.  This also doesn't mention 
> anything about the file-backed secret provider ({{FileSignerSecretProvider}}).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts

2016-03-24 Thread Jiajia Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211255#comment-15211255
 ] 

Jiajia Li commented on HADOOP-12687:


[~sunilg], from 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8915/testReport/, there 
are failures in TestClientRMTokens and TestAMAuthorization. Could you help take 
a look?

> SecureUtil#getByName should also try to resolve direct hostname, incase 
> multiple loopback addresses are present in /etc/hosts
> -
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get 
> timeout which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} returns the first entry present in 
> {{/etc/hosts}}. Hence it is possible for the machine hostname to be second in 
> the list, causing an {{UnknownHostException}}.
> Suggesting a direct resolve for such hostname scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12957) Limit the number of outstanding async calls

2016-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211247#comment-15211247
 ] 

Hadoop QA commented on HADOOP-12957:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 24s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 14s 
{color} | {color:red} root-jdk1.8.0_74 with JDK v1.8.0_74 generated 1 new + 737 
unchanged - 1 fixed = 738 total (was 738) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 21s 
{color} | {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 1 new + 733 
unchanged - 1 fixed = 734 total (was 734) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 7s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 3 
new + 85 unchanged - 1 fixed = 88 total (was 86) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 57s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 8s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 43s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| 

[jira] [Updated] (HADOOP-12962) KMS key names are correctly encoded when creating key

2016-03-24 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12962:
---
Attachment: HADOOP-12962.01.patch

Patch 1 to fix the encoding.
Previously, when creating the URI, the key name string was concatenated 
directly, leaving special characters unescaped. This patch uses the Jersey 
[UriBuilder|https://jersey.java.net/apidocs/2.0/jersey/javax/ws/rs/core/UriBuilder.html]
 to build the URI instead.
Also trivially refactored {{getKeyURI}} so that the two places currently 
building the URI share the method.

> KMS key names are correctly encoded when creating key
> -
>
> Key: HADOOP-12962
> URL: https://issues.apache.org/jira/browse/HADOOP-12962
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-12962.01.patch
>
>
> Creating a key whose name contains special character(s) fails on the client 
> side, while the key is in fact created successfully on the underlying key 
> provider.
> E.g.
> {noformat}
> $hadoop key create "key name"
> key name has not been created. java.io.IOException: HTTP status [500], 
> exception [java.net.URISyntaxException], message [Illegal character in path 
> at index 11: /v1/key/key name] 
> java.io.IOException: HTTP status [500], exception 
> [java.net.URISyntaxException], message [Illegal character in path at index 
> 11: /v1/key/key name] 
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:159)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:548)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:506)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:672)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:680)
>   at 
> org.apache.hadoop.crypto.key.KeyShell$CreateCommand.execute(KeyShell.java:483)
>   at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:515)
> {noformat}
> but
> {noformat}
> $ hadoop key list
> Listing keys for KeyProvider: 
> KMSClientProvider[https://hostname:16000/kms/v1/]
> key name
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12962) KMS key names are correctly encoded when creating key

2016-03-24 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12962:
---
Status: Patch Available  (was: Open)

> KMS key names are correctly encoded when creating key
> -
>
> Key: HADOOP-12962
> URL: https://issues.apache.org/jira/browse/HADOOP-12962
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-12962.01.patch
>
>
> Creating a key whose name contains special character(s) fails on the client 
> side, while the key is in fact created successfully on the underlying key 
> provider.
> E.g.
> {noformat}
> $hadoop key create "key name"
> key name has not been created. java.io.IOException: HTTP status [500], 
> exception [java.net.URISyntaxException], message [Illegal character in path 
> at index 11: /v1/key/key name] 
> java.io.IOException: HTTP status [500], exception 
> [java.net.URISyntaxException], message [Illegal character in path at index 
> 11: /v1/key/key name] 
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:159)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:548)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:506)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:672)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:680)
>   at 
> org.apache.hadoop.crypto.key.KeyShell$CreateCommand.execute(KeyShell.java:483)
>   at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:515)
> {noformat}
> but
> {noformat}
> $ hadoop key list
> Listing keys for KeyProvider: 
> KMSClientProvider[https://hostname:16000/kms/v1/]
> key name
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12962) KMS key names are correctly encoded when creating key

2016-03-24 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-12962:
--

 Summary: KMS key names are correctly encoded when creating key
 Key: HADOOP-12962
 URL: https://issues.apache.org/jira/browse/HADOOP-12962
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiao Chen
Assignee: Xiao Chen


Creating a key whose name contains special character(s) fails on the client 
side, while the key is in fact created successfully on the underlying key 
provider.

E.g.
{noformat}
$hadoop key create "key name"
key name has not been created. java.io.IOException: HTTP status [500], 
exception [java.net.URISyntaxException], message [Illegal character in path at 
index 11: /v1/key/key name] 
java.io.IOException: HTTP status [500], exception 
[java.net.URISyntaxException], message [Illegal character in path at index 11: 
/v1/key/key name] 
at 
org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:159)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:548)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:506)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:672)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:680)
at 
org.apache.hadoop.crypto.key.KeyShell$CreateCommand.execute(KeyShell.java:483)
at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:515)

{noformat}

but
{noformat}
$ hadoop key list
Listing keys for KeyProvider: KMSClientProvider[https://hostname:16000/kms/v1/]
key name
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9567) Provide auto-renewal for keytab based logins

2016-03-24 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HADOOP-9567:
--
Attachment: HADOOP-9567.branch-2.7.001.patch

This patch changes UserGroupInformation to launch a background thread to 
relogin when logged in from a keytab.  The patch is against branch-2.7, as that 
is where I have been testing.  I can follow up with a patch against trunk.

This also makes the background thread more resilient to login failures, and 
adds the ability to explicitly stop the background thread for both keytab and 
credential cache based logins.

The frequency of update checks is configured via a new property 
"hadoop.user.ticket.renewal.interval". Setting this to <= 0 prevents the 
background thread from launching.
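A sketch of the described behavior (the helper name {{startRenewal}} is made up for illustration and is not the actual UGI API; the real patch reads the interval from Configuration):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RenewalSketch {
    // Launch a daemon thread that periodically re-runs the relogin
    // action; an interval <= 0 means no thread is launched.
    static ScheduledExecutorService startRenewal(long intervalMs,
                                                 Runnable relogin) {
        if (intervalMs <= 0) {
            return null;  // renewal disabled by configuration
        }
        ScheduledExecutorService exec =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "TGT Renewal Thread");
                t.setDaemon(true);  // never blocks JVM shutdown
                return t;
            });
        // Fixed delay tolerates slow relogins without runs piling up.
        exec.scheduleWithFixedDelay(relogin, intervalMs, intervalMs,
                                    TimeUnit.MILLISECONDS);
        return exec;
    }
}
```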

> Provide auto-renewal for keytab based logins
> 
>
> Key: HADOOP-9567
> URL: https://issues.apache.org/jira/browse/HADOOP-9567
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Gary Helmling
>Priority: Minor
> Attachments: HADOOP-9567.branch-2.7.001.patch
>
>
> We do a renewal for cached tickets (obtained via kinit before using a Hadoop 
> application) but we explicitly seem to avoid doing a renewal for keytab based 
> logins (done from within the client code) when we could do that as well via a 
> similar thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9567) Provide auto-renewal for keytab based logins

2016-03-24 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211150#comment-15211150
 ] 

Gary Helmling commented on HADOOP-9567:
---

bq. This seems like a separate issue to me (it wasn't my intention when filing 
this). Perhaps such a check should only be added to the DN login/re-login code, 
because at the UGI level it would affect all users (even those who do not want 
the jitter)?

If you'd like to close out this issue, I'm happy to open my own.  I didn't mean 
to hijack it.

But auto-renew in the background vs. auto-renew in the background with some 
jitter do not seem so different to me.  I also have a patch that I can post.

> Provide auto-renewal for keytab based logins
> 
>
> Key: HADOOP-9567
> URL: https://issues.apache.org/jira/browse/HADOOP-9567
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Gary Helmling
>Priority: Minor
>
> We do a renewal for cached tickets (obtained via kinit before using a Hadoop 
> application) but we explicitly seem to avoid doing a renewal for keytab based 
> logins (done from within the client code) when we could do that as well via a 
> similar thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9567) Provide auto-renewal for keytab based logins

2016-03-24 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211143#comment-15211143
 ] 

Harsh J commented on HADOOP-9567:
-

Thanks [~subrotosanyal], that is indeed what handles it automatically for us 
today. I was thinking more from a perspective where the app does not use 
Hadoop or HBase IPCs: some form of scheduled background daemon thread that 
does periodic renewals for each held LoginContext. There could be locking 
aspects to consider around this, though, if someone uses getCurrentUser vs. 
getLoginUser.

I agree that in Hadoop's context we have it already covered, so we could also 
close this out. Any direct users of UGI should ensure they call the 
checkTGTAndReloginFromKeytab functionality themselves.

bq. but when you have all datanodes (or other client processes) started at the 
same time, this can lead to a thundering herd effect, where all processes pile 
on the KDC at the same time.

This seems like a separate issue to me (it wasn't my intention when filing 
this). Perhaps such a check should only be added to the DN login/re-login code, 
because at the UGI level it would affect all users (even those who do not want 
the jitter)?

> Provide auto-renewal for keytab based logins
> 
>
> Key: HADOOP-9567
> URL: https://issues.apache.org/jira/browse/HADOOP-9567
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Gary Helmling
>Priority: Minor
>
> We do a renewal for cached tickets (obtained via kinit before using a Hadoop 
> application) but we explicitly seem to avoid doing a renewal for keytab based 
> logins (done from within the client code) when we could do that as well via a 
> similar thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12958) PhantomReference for filesystem statistics can trigger OOM

2016-03-24 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-12958:
-
Attachment: HADOOP-12958.01.patch

Posted patch v.1.

After looking at this some more, it seems that {{WeakReference}} is a better 
choice than {{PhantomReference}} after all. We are able to do clean-up based on 
the reference queue with either type, but the hold on the referent seems 
tighter (ironically) with the phantom reference in this case. Weak references 
will be cleared out by the garbage collector, unlike the phantom references. We 
never access the actual referent in the clean-up logic, so it is basically a 
matter of substituting the reference type.
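The clean-up pattern described above can be sketched in isolation; a minimal, standalone example of weak-reference clean-up via a reference queue (not the actual {{FileSystem}} statistics code):

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class WeakRefCleanup {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object referent = new Object();
        WeakReference<Object> ref = new WeakReference<>(referent, queue);

        referent = null;  // drop the only strong reference

        // The collector clears a weak reference before enqueueing it, so
        // the clean-up logic never needs to touch the referent -- the
        // property the patch relies on when substituting reference types.
        Reference<?> dequeued = null;
        for (int i = 0; i < 10 && dequeued == null; i++) {
            System.gc();
            dequeued = queue.remove(200);  // wait up to 200 ms
        }
        System.out.println(dequeued == ref);
    }
}
```

Whether the reference is enqueued promptly depends on GC timing, but the clean-up never dereferences the dead object.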

I distilled this use case and the {{FileSystem}} code into a small test to 
reproduce the issue, and the following is the verbose output of that test with 
the *phantom* reference:
{noformat}
$ java -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails 
-XX:+PrintReferenceGC -Xms1600m -Xmx1600m Test
allocating 1 G
creating a phantom reference to this thread
sleeping for 10 seconds
done sleeping
allocating another 1 G
10.528: [GC (Allocation Failure) 10.550: [SoftReference, 0 refs, 0.145 
secs]10.550: [WeakReference, 9 refs, 0.054 secs]10.550: [FinalReference, 16 
refs, 0.157 secs]10.550: [PhantomReference, 1 refs, 0 refs, 0.060 
secs]10.550: [JNI Weak Reference, 0.061 secs][PSYoungGen: 
24576K->416K(477696K)] 1073152K->1049000K(1570304K), 0.0222061 secs] [Times: 
user=0.17 sys=0.00, real=0.02 secs] 
10.550: [Full GC (Ergonomics) 10.550: [SoftReference, 0 refs, 0.250 
secs]10.550: [WeakReference, 2 refs, 0.048 secs]10.550: [FinalReference, 4 
refs, 0.044 secs]10.550: [PhantomReference, 0 refs, 0 refs, 0.049 
secs]10.550: [JNI Weak Reference, 0.031 secs][PSYoungGen: 
416K->0K(477696K)] [ParOldGen: 1048584K->1048857K(1092608K)] 
1049000K->1048857K(1570304K), [Metaspace: 2614K->2614K(1056768K)], 0.0216855 
secs] [Times: user=0.02 sys=0.01, real=0.02 secs] 
10.572: [GC (Allocation Failure) 10.584: [SoftReference, 0 refs, 0.340 
secs]10.584: [WeakReference, 0 refs, 0.049 secs]10.584: [FinalReference, 0 
refs, 0.043 secs]10.584: [PhantomReference, 0 refs, 0 refs, 0.055 
secs]10.584: [JNI Weak Reference, 0.037 secs][PSYoungGen: 0K->0K(477696K)] 
1048857K->1048857K(1570304K), 0.0125265 secs] [Times: user=0.07 sys=0.00, 
real=0.02 secs] 
10.584: [Full GC (Allocation Failure) 10.585: [SoftReference, 37 refs, 
0.157 secs]10.585: [WeakReference, 2 refs, 0.050 secs]10.585: 
[FinalReference, 4 refs, 0.047 secs]10.585: [PhantomReference, 0 refs, 0 
refs, 0.053 secs]10.585: [JNI Weak Reference, 0.032 secs][PSYoungGen: 
0K->0K(477696K)] [ParOldGen: 1048857K->1048845K(1092608K)] 
1048857K->1048845K(1570304K), [Metaspace: 2614K->2614K(1056768K)], 0.0114447 
secs] [Times: user=0.02 sys=0.01, real=0.01 secs] 
phantom reference dequeued
Exception in thread "main" 
java.lang.OutOfMemoryError: Java heap space
at Test$BigBufferContainer.<init>(Test.java:26)
at Test.main(Test.java:18)
Heap
 PSYoungGen  total 477696K, used 20480K [0x00079eb0, 
0x0007c000, 0x0007c000)
  eden space 409600K, 5% used 
[0x00079eb0,0x00079ff001a8,0x0007b7b0)
  from space 68096K, 0% used 
[0x0007bbd8,0x0007bbd8,0x0007c000)
  to   space 68096K, 0% used 
[0x0007b7b0,0x0007b7b0,0x0007bbd8)
 ParOldGen   total 1092608K, used 1048845K [0x00075c00, 
0x00079eb0, 0x00079eb0)
  object space 1092608K, 95% used 
[0x00075c00,0x00079c043718,0x00079eb0)
 Metaspace   used 2646K, capacity 4496K, committed 4864K, reserved 1056768K
  class spaceused 291K, capacity 388K, committed 512K, reserved 1048576K
{noformat}

With the *weak* reference:
{noformat}
$ java -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails 
-XX:+PrintReferenceGC -Xms1600m -Xmx1600m Test
allocating 1 G
creating a weak reference to this thread
sleeping for 10 seconds
done sleeping
allocating another 1 G
10.673: [GC (Allocation Failure) 10.695: [SoftReference, 0 refs, 0.244 
secs]10.695: [WeakReference, 10 refs, 0.064 secs]10.695: [FinalReference, 
16 refs, 0.300 secs]10.695: [PhantomReference, 0 refs, 0 refs, 0.051 
secs]10.695: [JNI Weak Reference, 0.073 secs][PSYoungGen: 
24576K->416K(477696K)] 1073152K->1049000K(1570304K), 0.0229661 secs] [Times: 
user=0.17 sys=0.00, real=0.02 secs] 
10.696: [Full GC (Ergonomics) 10.696: [SoftReference, 0 refs, 0.143 
secs]10.696: [WeakReference, 2 refs, 0.047 secs]10.696: [FinalReference, 0 
refs, 0.042 secs]10.696: [PhantomReference, 0 refs, 0 refs, 0.049 
secs]10.696: [JNI Weak Reference, 0.032 secs][PSYoungGen: 
416K->0K(477696K)] [ParOldGen: 1048584K->280K(1092608K)] 
1049000K->280K(1570304K), [Metaspace: 2614K->2614K(1056768K)], 0.0209466 secs] 

[jira] [Updated] (HADOOP-12958) PhantomReference for filesystem statistics can trigger OOM

2016-03-24 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-12958:
-
Status: Patch Available  (was: Open)

> PhantomReference for filesystem statistics can trigger OOM
> --
>
> Key: HADOOP-12958
> URL: https://issues.apache.org/jira/browse/HADOOP-12958
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.4, 2.7.3
>Reporter: Jason Lowe
>Assignee: Sangjin Lee
> Fix For: 2.7.3, 2.6.5
>
> Attachments: HADOOP-12958.01.patch
>
>
> I saw an OOM that appears to have been caused by the phantom references 
> introduced for file system statistics management.  I'll post details in a 
> followup comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10965) Incorrect error message by fs -copyFromLocal

2016-03-24 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-10965:

Fix Version/s: 2.8.0
   Status: Patch Available  (was: In Progress)

> Incorrect error message by fs -copyFromLocal
> 
>
> Key: HADOOP-10965
> URL: https://issues.apache.org/jira/browse/HADOOP-10965
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: André Kelpe
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-10965.001.patch, HADOOP-10965.002.patch
>
>
> Whenever I try to copy data from local to a cluster, but forget to create the 
> parent directory first, I get a very confusing error message:
> {code}
> $ whoami
> fs111
> $ hadoop fs -ls  /user
> Found 2 items
> drwxr-xr-x   - fs111   supergroup  0 2014-08-11 20:17 /user/hive
> drwxr-xr-x   - vagrant supergroup  0 2014-08-11 19:15 /user/vagrant
> $ hadoop fs -copyFromLocal data data
> copyFromLocal: `data': No such file or directory
> {code}
> From the error message, you would say that the local "data" directory does 
> not exist, but that is not the case. What is missing is the "/user/fs111" 
> directory on HDFS. After I created it, the copyFromLocal command works fine.
> I believe the error message is confusing and should at least be fixed. What 
> would be even better is if hadoop could restore the old 1.x behaviour, where 
> copyFromLocal would just create the directories if they are missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10965) Incorrect error message by fs -copyFromLocal

2016-03-24 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-10965:

Attachment: HADOOP-10965.002.patch

Patch 002:
* Fix the FS shell copy and touchz commands to print the fully qualified path
  in the error message when the path cannot be found. This shows the current
  directory and file system, which can help users take corrective action.

Test output:
{code}
$ hdfs dfs -touchz f1
touchz: `f1': No such file or directory: `hdfs://nnhost:8020/user/systest/f1'
$ hdfs dfs -put d1 d1
put: `d1': No such file or directory: `hdfs://nnhost:8020/user/systest/d1'
{code}
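A pure-string sketch of the qualification shown in that output (Hadoop itself qualifies paths against the filesystem URI and working directory, e.g. via {{Path#makeQualified}}; the helper name here is made up):

```java
public class QualifySketch {
    // Qualify a shell path against the filesystem URI and the user's
    // working directory, as in the error messages above.
    static String qualify(String fsUri, String workingDir, String path) {
        if (path.startsWith("/")) {
            return fsUri + path;                 // already absolute
        }
        return fsUri + workingDir + "/" + path;  // relative to working dir
    }

    public static void main(String[] args) {
        // prints hdfs://nnhost:8020/user/systest/f1
        System.out.println(
            qualify("hdfs://nnhost:8020", "/user/systest", "f1"));
    }
}
```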


> Incorrect error message by fs -copyFromLocal
> 
>
> Key: HADOOP-10965
> URL: https://issues.apache.org/jira/browse/HADOOP-10965
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: André Kelpe
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-10965.001.patch, HADOOP-10965.002.patch
>
>
> Whenever I try to copy data from local to a cluster, but forget to create the 
> parent directory first, I get a very confusing error message:
> {code}
> $ whoami
> fs111
> $ hadoop fs -ls  /user
> Found 2 items
> drwxr-xr-x   - fs111   supergroup  0 2014-08-11 20:17 /user/hive
> drwxr-xr-x   - vagrant supergroup  0 2014-08-11 19:15 /user/vagrant
> $ hadoop fs -copyFromLocal data data
> copyFromLocal: `data': No such file or directory
> {code}
> From the error message, you would think that the local "data" directory does 
> not exist, but that is not the case. What is actually missing is the 
> "/user/fs111" directory on HDFS. After I created it, the copyFromLocal 
> command worked fine.
> I believe the error message is confusing and should at least be fixed. Even 
> better, Hadoop could restore the old 1.x behaviour, where copyFromLocal 
> simply created any missing directories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-11393) Revert HADOOP_PREFIX, go back to HADOOP_HOME

2016-03-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211059#comment-15211059
 ] 

Allen Wittenauer edited comment on HADOOP-11393 at 3/24/16 10:25 PM:
-

A local run w/out unit tests resulted in these failures:

|| Vote || Subsystem || Runtime || Comment ||
|  -1  | checkstyle | 1m 21s | root: patch generated 4 new + 172 unchanged - 8 fixed = 176 total (was 180) |
|  -1  | whitespace | 0m 0s  | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix. |

Checkstyle issues are all line-length problems. I suspect the 8 fixed were
line-length issues as well.

I'll fire off unit tests tonight.


was (Author: aw):
A local run w/out unit tests resulted in these failures:

|| Vote || Subsystem || Runtime || Comment ||
|  -1  | checkstyle | 1m 21s | root: patch generated 4 new + 172 unchanged - 8 fixed = 176 total (was 180) |
|  -1  | whitespace | 0m 0s  | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix. |

Checkstyle issues are all line-length problems. I suspect the 8 fixed were
line-length issues as well.

I'll fire off unit tests tonight.

> Revert HADOOP_PREFIX, go back to HADOOP_HOME
> 
>
> Key: HADOOP-11393
> URL: https://issues.apache.org/jira/browse/HADOOP-11393
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11393-00.patch, HADOOP-11393.01.patch, 
> HADOOP-11393.02.patch
>
>
> Today, Windows and parts of the Hadoop source code still use HADOOP_HOME.  
> The switch to HADOOP_PREFIX back in 0.21 or so didn't really accomplish what 
> it was intended to do and only helped confuse the situation.
> _HOME is a much more standard suffix and is, in fact, used for everything in 
> Hadoop except for the top level project home.  I think it would be beneficial 
> to use HADOOP_HOME in the shell code as the Official(tm) variable, still 
> honoring HADOOP_PREFIX if it is set.
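The precedence described above can be sketched as plain resolution logic. This is a minimal illustration under stated assumptions; the real change lives in the Hadoop shell scripts, and the class name here is hypothetical:

```java
public class HadoopHomeResolver {
    // HADOOP_HOME wins when set; HADOOP_PREFIX is still honored as a
    // fallback so existing deployments keep working.
    static String resolve(String hadoopHome, String hadoopPrefix) {
        if (hadoopHome != null && !hadoopHome.isEmpty()) {
            return hadoopHome;
        }
        if (hadoopPrefix != null && !hadoopPrefix.isEmpty()) {
            return hadoopPrefix;
        }
        throw new IllegalStateException(
                "Neither HADOOP_HOME nor HADOOP_PREFIX is set");
    }

    public static void main(String[] args) {
        System.out.println(resolve(null, "/usr/lib/hadoop"));      // /usr/lib/hadoop
        System.out.println(resolve("/opt/hadoop", "/usr/lib/hadoop")); // /opt/hadoop
    }
}
```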



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11393) Revert HADOOP_PREFIX, go back to HADOOP_HOME

2016-03-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211059#comment-15211059
 ] 

Allen Wittenauer commented on HADOOP-11393:
---

A local run w/out unit tests resulted in these failures:

|| Vote || Subsystem || Runtime || Comment ||
|  -1  | checkstyle | 1m 21s | root: patch generated 4 new + 172 unchanged - 8 fixed = 176 total (was 180) |
|  -1  | whitespace | 0m 0s  | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix. |

Checkstyle issues are all line-length problems. I suspect the 8 fixed were
line-length issues as well.

I'll fire off unit tests tonight.

> Revert HADOOP_PREFIX, go back to HADOOP_HOME
> 
>
> Key: HADOOP-11393
> URL: https://issues.apache.org/jira/browse/HADOOP-11393
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11393-00.patch, HADOOP-11393.01.patch, 
> HADOOP-11393.02.patch
>
>
> Today, Windows and parts of the Hadoop source code still use HADOOP_HOME.  
> The switch to HADOOP_PREFIX back in 0.21 or so didn't really accomplish what 
> it was intended to do and only helped confuse the situation.
> _HOME is a much more standard suffix and is, in fact, used for everything in 
> Hadoop except for the top level project home.  I think it would be beneficial 
> to use HADOOP_HOME in the shell code as the Official(tm) variable, still 
> honoring HADOOP_PREFIX if it is set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12951) Improve documentation on KMS ACLs and delegation tokens

2016-03-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211039#comment-15211039
 ] 

Andrew Wang commented on HADOOP-12951:
--

Sure, that sounds good to me. We can address auth doc improvements in this JIRA 
too if you want.

> Improve documentation on KMS ACLs and delegation tokens
> ---
>
> Key: HADOOP-12951
> URL: https://issues.apache.org/jira/browse/HADOOP-12951
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-12951.01.patch
>
>
> [~andrew.wang] suggested that the current KMS ACL page is not very 
> user-focused, and hard to come by without reading the code.
> I read the document (and the code), and I agree. So this jira adds more 
> documentation to explain the current implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12957) Limit the number of outstanding async calls

2016-03-24 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211030#comment-15211030
 ] 

Xiaobing Zhou commented on HADOOP-12957:


The unit tests will be added in upcoming patches.

> Limit the number of outstanding async calls
> ---
>
> Key: HADOOP-12957
> URL: https://issues.apache.org/jira/browse/HADOOP-12957
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12957-HADOOP-12909.000.patch, 
> HADOOP-12957-combo.000.patch
>
>
> In async RPC, if the callers don't read replies fast enough, the buffer 
> storing replies could be used up. This proposes limiting the number of 
> outstanding async calls to eliminate the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12957) Limit the number of outstanding async calls

2016-03-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12957:
---
Attachment: HADOOP-12957-combo.000.patch

I also posted the combo patch v000, which contains HADOOP-12957 and 
HADOOP-12909, for reference and easier review.

> Limit the number of outstanding async calls
> ---
>
> Key: HADOOP-12957
> URL: https://issues.apache.org/jira/browse/HADOOP-12957
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12957-HADOOP-12909.000.patch, 
> HADOOP-12957-combo.000.patch
>
>
> In async RPC, if the callers don't read replies fast enough, the buffer 
> storing replies could be used up. This proposes limiting the number of 
> outstanding async calls to eliminate the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12957) Limit the number of outstanding async calls

2016-03-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12957:
---
Status: Patch Available  (was: Open)

> Limit the number of outstanding async calls
> ---
>
> Key: HADOOP-12957
> URL: https://issues.apache.org/jira/browse/HADOOP-12957
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12957-HADOOP-12909.000.patch
>
>
> In async RPC, if the callers don't read replies fast enough, the buffer 
> storing replies could be used up. This proposes limiting the number of 
> outstanding async calls to eliminate the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12957) Limit the number of outstanding async calls

2016-03-24 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211015#comment-15211015
 ] 

Xiaobing Zhou commented on HADOOP-12957:


I posted patch v000 for review. There are two possible follow-up changes:
1. Make asyncCallCounter a thread-local, so each thread is individually 
subject to the threshold. However, with several caller threads issuing async 
calls, that could exacerbate buffer usage.
2. Make the asyncCallCounter limit configurable.

I'd welcome reviewers' comments on these. Thanks.
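One common way to implement such a cap is a counting semaphore around call issuance. The sketch below is illustrative only (the class name and API are hypothetical, not the actual patch code): a permit is taken before each async call and released when its reply is consumed, so callers block once the limit is reached.

```java
import java.util.concurrent.Semaphore;

public class AsyncCallLimiter {
    private final Semaphore permits;

    public AsyncCallLimiter(int maxOutstandingCalls) {
        this.permits = new Semaphore(maxOutstandingCalls);
    }

    // Take a slot before issuing an async call; blocks the caller once the
    // number of outstanding (unanswered) calls reaches the configured limit.
    public void beforeCall() {
        permits.acquireUninterruptibly();
    }

    // Free a slot once the caller has consumed a reply.
    public void afterReply() {
        permits.release();
    }

    // Remaining capacity before callers start blocking.
    public int available() {
        return permits.availablePermits();
    }

    public static void main(String[] args) {
        AsyncCallLimiter limiter = new AsyncCallLimiter(2);
        limiter.beforeCall();
        limiter.beforeCall();
        System.out.println(limiter.available()); // 0: at the limit
        limiter.afterReply();
        System.out.println(limiter.available()); // 1
    }
}
```

With a shared semaphore the limit is global across caller threads; a thread-local counter, as discussed, would bound each thread separately at the cost of a larger worst-case total.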

> Limit the number of outstanding async calls
> ---
>
> Key: HADOOP-12957
> URL: https://issues.apache.org/jira/browse/HADOOP-12957
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12957-HADOOP-12909.000.patch
>
>
> In async RPC, if the callers don't read replies fast enough, the buffer 
> storing replies could be used up. This proposes limiting the number of 
> outstanding async calls to eliminate the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12957) Limit the number of outstanding async calls

2016-03-24 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12957:
---
Attachment: HADOOP-12957-HADOOP-12909.000.patch

> Limit the number of outstanding async calls
> ---
>
> Key: HADOOP-12957
> URL: https://issues.apache.org/jira/browse/HADOOP-12957
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12957-HADOOP-12909.000.patch
>
>
> In async RPC, if the callers don't read replies fast enough, the buffer 
> storing replies could be used up. This proposes limiting the number of 
> outstanding async calls to eliminate the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12948) Maven profile startKdc is broken

2016-03-24 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12948:
-
Status: Patch Available  (was: Open)

> Maven profile startKdc is broken
> 
>
> Key: HADOOP-12948
> URL: https://issues.apache.org/jira/browse/HADOOP-12948
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Mac OS
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12948.001.patch, HADOOP-12948.002.patch
>
>
> {noformat}
> mvn install -Dtest=TestUGIWithSecurityOn -DstartKdc=true
> main:
>  [exec] xargs: illegal option -- -
>  [exec] usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] 
> [-J replstr]
>  [exec]  [-L number] [-n number [-x]] [-P maxprocs] [-s size]
>  [exec]  [utility [argument ...]]
>  [exec] Result: 1
>   [get] Getting: 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>   [get] To: 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
>   [get] Error getting 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>  to 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 8.448 s
> [INFO] Finished at: 2016-03-21T10:00:56-07:00
> [INFO] Final Memory: 31M/439M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (kdc) on project 
> hadoop-common: An Ant BuildException has occured: 
> java.net.UnknownHostException: newverhost.com
> [ERROR] around Ant part ... dest="/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads"
>  skipexisting="true" verbose="true" 
> src="http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz"/>...
>  @ 7:244 in 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {noformat}
> I'm using a Mac, so part of the reason might be my operating system (even 
> though the pom.xml states it supports Mac), but the main problem is that it 
> attempted to download apacheds from newverhost.com, which no longer seems to 
> exist.
> These tests were implemented in HADOOP-8078, and must have -DstartKdc=true in 
> order to run them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12948) Maven profile startKdc is broken

2016-03-24 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12948:
-
Component/s: test

> Maven profile startKdc is broken
> 
>
> Key: HADOOP-12948
> URL: https://issues.apache.org/jira/browse/HADOOP-12948
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: Mac OS
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12948.001.patch, HADOOP-12948.002.patch
>
>
> {noformat}
> mvn install -Dtest=TestUGIWithSecurityOn -DstartKdc=true
> main:
>  [exec] xargs: illegal option -- -
>  [exec] usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] 
> [-J replstr]
>  [exec]  [-L number] [-n number [-x]] [-P maxprocs] [-s size]
>  [exec]  [utility [argument ...]]
>  [exec] Result: 1
>   [get] Getting: 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>   [get] To: 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
>   [get] Error getting 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>  to 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 8.448 s
> [INFO] Finished at: 2016-03-21T10:00:56-07:00
> [INFO] Final Memory: 31M/439M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (kdc) on project 
> hadoop-common: An Ant BuildException has occured: 
> java.net.UnknownHostException: newverhost.com
> [ERROR] around Ant part ... dest="/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads"
>  skipexisting="true" verbose="true" 
> src="http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz"/>...
>  @ 7:244 in 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {noformat}
> I'm using a Mac, so part of the reason might be my operating system (even 
> though the pom.xml states it supports Mac), but the main problem is that it 
> attempted to download apacheds from newverhost.com, which no longer seems to 
> exist.
> These tests were implemented in HADOOP-8078, and must have -DstartKdc=true in 
> order to run them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12958) PhantomReference for filesystem statistics can trigger OOM

2016-03-24 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210988#comment-15210988
 ] 

Sangjin Lee commented on HADOOP-12958:
--

OK, I am now able to reproduce this with a simple test, and I have a possible 
solution. Let me update it soon.

> PhantomReference for filesystem statistics can trigger OOM
> --
>
> Key: HADOOP-12958
> URL: https://issues.apache.org/jira/browse/HADOOP-12958
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.6.4
>Reporter: Jason Lowe
>Assignee: Sangjin Lee
> Fix For: 2.7.3, 2.6.5
>
>
> I saw an OOM that appears to have been caused by the phantom references 
> introduced for file system statistics management.  I'll post details in a 
> followup comment.
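For background on the mechanism involved, the sketch below shows generic PhantomReference/ReferenceQueue usage (not the Hadoop statistics code): reference objects registered with a queue stay reachable until the queue is drained, so registering them faster than they are processed can accumulate memory.

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;

public class PhantomSketch {
    public static void main(String[] args) {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object referent = new Object();
        PhantomReference<Object> ref = new PhantomReference<>(referent, queue);

        // A phantom reference never exposes its referent; it exists only to
        // signal, via the queue, that the referent has been collected.
        System.out.println(ref.get()); // null

        // If code registers many such references but never polls the queue,
        // the reference objects (and any per-reference bookkeeping) pile up,
        // which is the kind of accumulation that can end in an OutOfMemoryError.
        referent = null;
        System.gc(); // request collection; enqueueing is not guaranteed here
    }
}
```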



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12958) PhantomReference for filesystem statistics can trigger OOM

2016-03-24 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee reassigned HADOOP-12958:


Assignee: Sangjin Lee

> PhantomReference for filesystem statistics can trigger OOM
> --
>
> Key: HADOOP-12958
> URL: https://issues.apache.org/jira/browse/HADOOP-12958
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.6.4
>Reporter: Jason Lowe
>Assignee: Sangjin Lee
> Fix For: 2.7.3, 2.6.5
>
>
> I saw an OOM that appears to have been caused by the phantom references 
> introduced for file system statistics management.  I'll post details in a 
> followup comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12961) hadoop-datajoin has findbugs problems

2016-03-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12961:
--
Attachment: branch-findbugs-hadoop-tools_hadoop-datajoin-warnings.html

> hadoop-datajoin has findbugs problems
> -
>
> Key: HADOOP-12961
> URL: https://issues.apache.org/jira/browse/HADOOP-12961
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: 
> branch-findbugs-hadoop-tools_hadoop-datajoin-warnings.html
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12961) hadoop-datajoin has findbugs problems

2016-03-24 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12961:
-

 Summary: hadoop-datajoin has findbugs problems
 Key: HADOOP-12961
 URL: https://issues.apache.org/jira/browse/HADOOP-12961
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: branch-findbugs-hadoop-tools_hadoop-datajoin-warnings.html





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12948) Maven profile startKdc is broken

2016-03-24 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210987#comment-15210987
 ] 

Wei-Chiu Chuang commented on HADOOP-12948:
--

Filed a corresponding jira, HDFS-10210, to remove the HDFS side of the code.

> Maven profile startKdc is broken
> 
>
> Key: HADOOP-12948
> URL: https://issues.apache.org/jira/browse/HADOOP-12948
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Mac OS
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12948.001.patch, HADOOP-12948.002.patch
>
>
> {noformat}
> mvn install -Dtest=TestUGIWithSecurityOn -DstartKdc=true
> main:
>  [exec] xargs: illegal option -- -
>  [exec] usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] 
> [-J replstr]
>  [exec]  [-L number] [-n number [-x]] [-P maxprocs] [-s size]
>  [exec]  [utility [argument ...]]
>  [exec] Result: 1
>   [get] Getting: 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>   [get] To: 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
>   [get] Error getting 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>  to 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 8.448 s
> [INFO] Finished at: 2016-03-21T10:00:56-07:00
> [INFO] Final Memory: 31M/439M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (kdc) on project 
> hadoop-common: An Ant BuildException has occured: 
> java.net.UnknownHostException: newverhost.com
> [ERROR] around Ant part ... dest="/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads"
>  skipexisting="true" verbose="true" 
> src="http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz"/>...
>  @ 7:244 in 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {noformat}
> I'm using a Mac, so part of the reason might be my operating system (even 
> though the pom.xml states it supports Mac), but the main problem is that it 
> attempted to download apacheds from newverhost.com, which no longer seems to 
> exist.
> These tests were implemented in HADOOP-8078, and must have -DstartKdc=true in 
> order to run them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12948) Maven profile startKdc is broken

2016-03-24 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12948:
-
Attachment: HADOOP-12948.002.patch

Updated pom.xml to remove the startKdc profile, and removed users.ldif, which 
was added in HADOOP-8078.

> Maven profile startKdc is broken
> 
>
> Key: HADOOP-12948
> URL: https://issues.apache.org/jira/browse/HADOOP-12948
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Mac OS
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12948.001.patch, HADOOP-12948.002.patch
>
>
> {noformat}
> mvn install -Dtest=TestUGIWithSecurityOn -DstartKdc=true
> main:
>  [exec] xargs: illegal option -- -
>  [exec] usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] 
> [-J replstr]
>  [exec]  [-L number] [-n number [-x]] [-P maxprocs] [-s size]
>  [exec]  [utility [argument ...]]
>  [exec] Result: 1
>   [get] Getting: 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>   [get] To: 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
>   [get] Error getting 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>  to 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 8.448 s
> [INFO] Finished at: 2016-03-21T10:00:56-07:00
> [INFO] Final Memory: 31M/439M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (kdc) on project 
> hadoop-common: An Ant BuildException has occured: 
> java.net.UnknownHostException: newverhost.com
> [ERROR] around Ant part ... dest="/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads"
>  skipexisting="true" verbose="true" 
> src="http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz"/>...
>  @ 7:244 in 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {noformat}
> I'm using a Mac, so part of the reason might be my operating system (even 
> though the pom.xml states it supports Mac), but the main problem is that it 
> attempted to download apacheds from newverhost.com, which no longer seems to 
> exist.
> These tests were implemented in HADOOP-8078, and must have -DstartKdc=true in 
> order to run them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12948) Maven profile startKdc is broken

2016-03-24 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12948:
-
Attachment: HADOOP-12948.001.patch

Rev01: Removed the files added in HADOOP-8078.

> Maven profile startKdc is broken
> 
>
> Key: HADOOP-12948
> URL: https://issues.apache.org/jira/browse/HADOOP-12948
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Mac OS
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12948.001.patch
>
>
> {noformat}
> mvn install -Dtest=TestUGIWithSecurityOn -DstartKdc=true
> main:
>  [exec] xargs: illegal option -- -
>  [exec] usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] 
> [-J replstr]
>  [exec]  [-L number] [-n number [-x]] [-P maxprocs] [-s size]
>  [exec]  [utility [argument ...]]
>  [exec] Result: 1
>   [get] Getting: 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>   [get] To: 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
>   [get] Error getting 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>  to 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 8.448 s
> [INFO] Finished at: 2016-03-21T10:00:56-07:00
> [INFO] Final Memory: 31M/439M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (kdc) on project 
> hadoop-common: An Ant BuildException has occured: 
> java.net.UnknownHostException: newverhost.com
> [ERROR] around Ant part ... dest="/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads"
>  skipexisting="true" verbose="true" 
> src="http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz"/>...
>  @ 7:244 in 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {noformat}
> I'm using a Mac, so part of the reason might be my operating system (even 
> though the pom.xml states it supports Mac), but the main problem is that it 
> attempted to download apacheds from newverhost.com, which no longer seems to 
> exist.
> These tests were implemented in HADOOP-8078, and must have -DstartKdc=true in 
> order to run them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12948) Maven profile startKdc is broken

2016-03-24 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210972#comment-15210972
 ] 

Wei-Chiu Chuang commented on HADOOP-12948:
--

After studying the code, I think {{TestUGIWithSecurityOn}} should be removed. 
Its {{testLogin}} duplicates the tests in {{TestKMS}}, and its 
{{testGetUGIFromKerberosSubject}} duplicates {{TestMiniKdc#testKerberosLogin}}.



> Maven profile startKdc is broken
> 
>
> Key: HADOOP-12948
> URL: https://issues.apache.org/jira/browse/HADOOP-12948
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Mac OS
>Reporter: Wei-Chiu Chuang
>
> {noformat}
> mvn install -Dtest=TestUGIWithSecurityOn -DstartKdc=true
> main:
>  [exec] xargs: illegal option -- -
>  [exec] usage: xargs [-0opt] [-E eofstr] [-I replstr [-R replacements]] 
> [-J replstr]
>  [exec]  [-L number] [-n number [-x]] [-P maxprocs] [-s size]
>  [exec]  [utility [argument ...]]
>  [exec] Result: 1
>   [get] Getting: 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>   [get] To: 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
>   [get] Error getting 
> http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz
>  to 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads/apacheds-1.5.7.tar.gz
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 8.448 s
> [INFO] Finished at: 2016-03-21T10:00:56-07:00
> [INFO] Final Memory: 31M/439M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (kdc) on project 
> hadoop-common: An Ant BuildException has occured: 
> java.net.UnknownHostException: newverhost.com
> [ERROR] around Ant part ... dest="/Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/test-classes/kdc/downloads"
>  skipexisting="true" verbose="true" 
> src="http://newverhost.com/pub//directory/apacheds/unstable/1.5/1.5.7/apacheds-1.5.7.tar.gz"/>...
>  @ 7:244 in 
> /Users/weichiu/sandbox/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {noformat}
> I'm using a Mac, so my operating system may be part of the reason (even though 
> the pom.xml states it supports Mac), but the major problem is that it 
> attempted to download ApacheDS from newverhost.com, which no longer seems to 
> exist.
> These tests were implemented in HADOOP-8078 and require -DstartKdc=true in 
> order to run.





[jira] [Assigned] (HADOOP-12948) Maven profile startKdc is broken

2016-03-24 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-12948:


Assignee: Wei-Chiu Chuang

> Maven profile startKdc is broken
> 
>
> Key: HADOOP-12948
> URL: https://issues.apache.org/jira/browse/HADOOP-12948
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Mac OS
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>





[jira] [Resolved] (HADOOP-12922) Separate Scheduler from FCQ

2016-03-24 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HADOOP-12922.
-
Resolution: Duplicate

This is covered in HADOOP-12916.

> Separate Scheduler from FCQ
> ---
>
> Key: HADOOP-12922
> URL: https://issues.apache.org/jira/browse/HADOOP-12922
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> Currently, {{CallQueueManager}} has a reference to the FCQ, which owns the 
> scheduler, call queue, and multiplexer. Making the scheduler and multiplexer 
> pluggable for reuse would make it easier to develop new call queues.
>   
> This ticket is opened to define a separate scheduler interface so that: 
> 1) different schedulers can be plugged in while using the FCQ, and
> 2) scheduler-related metrics can easily be added for monitoring and 
> troubleshooting.
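For illustration, here is a minimal sketch of what such a pluggable scheduler interface could look like. The names ({{RpcScheduler}}, {{RoundRobinScheduler}}, {{getPriorityLevel}}) are hypothetical, not the eventual HADOOP-12916 API:

```java
// Hypothetical sketch of a pluggable scheduler: CallQueueManager could hold
// an RpcScheduler reference instead of reaching through the FCQ.
interface RpcScheduler {
  int getPriorityLevel(Object identity); // pick a priority level for a caller
  int getNumLevels();
}

// One possible plug-in: rotate callers across levels round-robin.
class RoundRobinScheduler implements RpcScheduler {
  private final int levels;
  private int next = 0;
  RoundRobinScheduler(int levels) { this.levels = levels; }
  @Override public int getPriorityLevel(Object identity) {
    int level = next;
    next = (next + 1) % levels; // advance to the next level for the next call
    return level;
  }
  @Override public int getNumLevels() { return levels; }
}

public class SchedulerDemo {
  public static void main(String[] args) {
    RpcScheduler s = new RoundRobinScheduler(2);
    System.out.println(s.getPriorityLevel("alice")); // 0
    System.out.println(s.getPriorityLevel("bob"));   // 1
  }
}
```

With this separation, per-scheduler metrics could hang off the {{RpcScheduler}} implementation rather than the FCQ.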





[jira] [Commented] (HADOOP-12958) PhantomReference for filesystem statistics can trigger OOM

2016-03-24 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210928#comment-15210928
 ] 

Jason Lowe commented on HADOOP-12958:
-

Thanks for chiming in, [~sjlee0].  I ran the job again with the requested JVM 
options, and here are the last few lines of the GC log.  The task ran with JDK 
1.8.0_60-b27 on RHEL 6.5:
{noformat}
2016-03-24T20:10:36.476+: 107.002: [GC (Allocation Failure) 
2016-03-24T20:10:36.481+: 107.007: [SoftReference, 0 refs, 0.264 
secs]2016-03-24T20:10:36.481+: 107.007: [WeakReference, 0 refs, 0.130 
secs]2016-03-24T20:10:36.481+: 107.007: [FinalReference, 0 refs, 0.127 
secs]2016-03-24T20:10:36.481+: 107.007: [PhantomReference, 0 refs, 0 refs, 
0.140 secs]2016-03-24T20:10:36.481+: 107.007: [JNI Weak Reference, 
0.170 secs][PSYoungGen: 115296K->96K(115712K)] 603566K->488366K(734720K), 
0.0061637 secs] [Times: user=0.02 sys=0.00, real=0.00 secs] 
2016-03-24T20:10:36.960+: 107.486: [GC (Allocation Failure) 
2016-03-24T20:10:36.965+: 107.491: [SoftReference, 0 refs, 0.472 
secs]2016-03-24T20:10:36.965+: 107.491: [WeakReference, 0 refs, 0.363 
secs]2016-03-24T20:10:36.965+: 107.491: [FinalReference, 0 refs, 0.108 
secs]2016-03-24T20:10:36.965+: 107.491: [PhantomReference, 0 refs, 0 refs, 
0.083 secs]2016-03-24T20:10:36.965+: 107.491: [JNI Weak Reference, 
0.091 secs][PSYoungGen: 115296K->96K(115712K)] 603566K->488366K(734720K), 
0.0056482 secs] [Times: user=0.02 sys=0.00, real=0.01 secs] 
2016-03-24T20:10:38.447+: 108.973: [GC (Allocation Failure) 
2016-03-24T20:10:38.452+: 108.978: [SoftReference, 0 refs, 0.338 
secs]2016-03-24T20:10:38.452+: 108.978: [WeakReference, 0 refs, 0.123 
secs]2016-03-24T20:10:38.452+: 108.978: [FinalReference, 3 refs, 0.158 
secs]2016-03-24T20:10:38.452+: 108.978: [PhantomReference, 0 refs, 0 refs, 
0.113 secs]2016-03-24T20:10:38.452+: 108.978: [JNI Weak Reference, 
0.148 secs][PSYoungGen: 115296K->256K(115712K)] 603566K->488526K(734720K), 
0.0058352 secs] [Times: user=0.02 sys=0.00, real=0.00 secs] 
2016-03-24T20:10:40.917+: 111.443: [GC (Allocation Failure) 
2016-03-24T20:10:40.922+: 111.448: [SoftReference, 0 refs, 0.209 
secs]2016-03-24T20:10:40.922+: 111.448: [WeakReference, 159 refs, 0.212 
secs]2016-03-24T20:10:40.922+: 111.448: [FinalReference, 269 refs, 
0.0003309 secs]2016-03-24T20:10:40.923+: 111.449: [PhantomReference, 0 
refs, 6 refs, 0.108 secs]2016-03-24T20:10:40.923+: 111.449: [JNI Weak 
Reference, 0.108 secs][PSYoungGen: 115456K->480K(115712K)] 
603726K->489305K(734720K), 0.0061493 secs] [Times: user=0.02 sys=0.00, 
real=0.01 secs] 
2016-03-24T20:10:42.797+: 113.323: [GC (Allocation Failure) 
2016-03-24T20:10:42.802+: 113.328: [SoftReference, 0 refs, 0.230 
secs]2016-03-24T20:10:42.802+: 113.328: [WeakReference, 231 refs, 0.225 
secs]2016-03-24T20:10:42.802+: 113.328: [FinalReference, 350 refs, 
0.0004041 secs]2016-03-24T20:10:42.803+: 113.329: [PhantomReference, 0 
refs, 6 refs, 0.097 secs]2016-03-24T20:10:42.803+: 113.329: [JNI Weak 
Reference, 0.106 secs][PSYoungGen: 115680K->480K(114688K)] 
604505K->489537K(733696K), 0.0062425 secs] [Times: user=0.02 sys=0.00, 
real=0.00 secs] 
2016-03-24T20:10:44.626+: 115.151: [GC (Allocation Failure) 
2016-03-24T20:10:44.630+: 115.156: [SoftReference, 0 refs, 0.247 
secs]2016-03-24T20:10:44.630+: 115.156: [WeakReference, 230 refs, 0.234 
secs]2016-03-24T20:10:44.630+: 115.156: [FinalReference, 350 refs, 
0.0003741 secs]2016-03-24T20:10:44.631+: 115.157: [PhantomReference, 0 
refs, 6 refs, 0.095 secs]2016-03-24T20:10:44.631+: 115.157: [JNI Weak 
Reference, 0.104 secs][PSYoungGen: 114656K->896K(115200K)] 
603713K->489961K(734208K), 0.0059075 secs] [Times: user=0.02 sys=0.00, 
real=0.00 secs] 
2016-03-24T20:10:46.449+: 116.975: [GC (Allocation Failure) 
2016-03-24T20:10:46.455+: 116.981: [SoftReference, 0 refs, 0.285 
secs]2016-03-24T20:10:46.455+: 116.981: [WeakReference, 235 refs, 0.293 
secs]2016-03-24T20:10:46.455+: 116.981: [FinalReference, 356 refs, 
0.0004249 secs]2016-03-24T20:10:46.455+: 116.981: [PhantomReference, 0 
refs, 7 refs, 0.106 secs]2016-03-24T20:10:46.455+: 116.981: [JNI Weak 
Reference, 0.105 secs][PSYoungGen: 115072K->864K(114176K)] 
604137K->489929K(733184K), 0.0069392 secs] [Times: user=0.02 sys=0.00, 
real=0.01 secs] 
2016-03-24T20:10:48.741+: 119.267: [GC (Allocation Failure) 
2016-03-24T20:10:48.749+: 119.275: [SoftReference, 0 refs, 0.592 
secs]2016-03-24T20:10:48.749+: 119.275: [WeakReference, 639 refs, 0.830 
secs]2016-03-24T20:10:48.749+: 119.275: [FinalReference, 177 refs, 
0.0002006 secs]2016-03-24T20:10:48.749+: 119.275: [PhantomReference, 0 
refs, 12 refs, 

[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-03-24 Thread Tony Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210903#comment-15210903
 ] 

Tony Wu commented on HADOOP-12666:
--

Hi [~vishwajeet.dusane], 

Thank you for posting the semantics document, meeting minutes, and a new patch. 
They were really helpful.

*General comments:*
# Regarding the semantics document: it would be great if you could also include 
more information on how the ADL backend can "lock" a file for write. From the 
doc and our meeting discussion, there seem to be two ways (please confirm):
*# File lease (which you included in the doc). Used by {{createNonRecursive()}}
*# Maintain a connection to the backend (you mentioned this during the 
meeting). Used by {{append()}}
*** In this case, do we assume the ADL backend tracks which file is opened for 
write by keeping track of HTTP connections to the file?
# In {{hadoop-tools/hadoop-azure-datalake/src/site/markdown/index.md}}, you 
mentioned:
{quote}User and group information returned as ListStatus and GetFileStatus is 
in form of GUID associated in Azure Active Directory.{quote}
There are applications that verify file ownership, and they will fail 
because of this. Also, I believe this means commands like {{hdfs dfs -ls }} 
will return GUIDs as user & group, which may not be very readable. Can you 
comment on how users should handle this?
** I see there is an {{adl.debug.override.localuserasfileowner}} config that 
overrides the user with the local client user and the group with "hdfs", but 
this workaround is probably not meant for actual usage.

*Specific comments:*

1. Can you explain why {{flushAsync()}} is used in {{BatchAppendOutputStream}}? 
It seems like {{flushAsync()}} is only used in a particular case where there is 
some data left in the buffer from previous writes and that combined with the 
current write will cross the buffer boundary. I'm not sure why this particular 
flush has to be async.

Consider the following case:
# {{BatchAppendOutputStream#flushAsync()}} returns. {{flushAsync()}} sends the 
sync job to thread and sets offset to 0:
{code:java}
private void flushAsync() throws IOException {
  if (offset > 0) {
waitForOutstandingFlush();

// Submit the new flush task to the executor
flushTask = EXECUTOR.submit(new CommitTask(data, offset, eof));

// Get a new internal buffer for the user to write
data = getBuffer();
offset = 0;
  }
}
{code}
# BatchAppendOutputStream#write() returns.
# Client closes outputStream.
# BatchAppendOutputStream#close() checks whether there is anything to flush by 
looking at offset, and in this case there is nothing to flush because offset 
was set to 0 earlier.
{code:java}
  boolean flushedSomething = false;
  if (hadError) {
// No point proceeding further since the error has occurred and
// stream would be required to upload again.
return;
  } else {
flushedSomething = offset > 0;
flush();
  }
{code}
# {{BatchAppendOutputStream#close()}} does not wait for the async flush job to 
complete. After this point, if {{flushAsync()}} hits any error, the error is 
lost and the client will not be aware of it.
# If the client then starts to write (append) to the same file with a new 
stream, the new write will also not wait for the previous async job to 
complete, because {{flushTask}} is internal to {{BatchAppendOutputStream}}. 
The two writes might therefore reach the backend in reverse order.

IMHO if this async flush is necessary, then {{EXECUTOR}} should be created 
inside {{BatchAppendOutputStream}} and shut down when the stream is closed. 
Currently {{EXECUTOR}} lives in {{PrivateAzureDataLakeFileSystem}}.

2. {{EXECUTOR}} in {{PrivateAzureDataLakeFileSystem}} is not shut down properly.

3. A stream-closed check is missing in {{BatchAppendOutputStream}}; this check 
is present in {{BatchByteArrayInputStream}}.
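To illustrate the suggested lifecycle, here is a minimal standalone sketch (hypothetical names; a {{StringBuilder}} stands in for the ADL backend, and this is not the actual patch code) of a stream that owns its single-threaded flush executor, surfaces the last async error in {{close()}}, and shuts the executor down with the stream:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: the flush executor is owned by the stream, so close()
// can wait for the last outstanding async flush and then shut it down.
class AsyncFlushStream extends OutputStream {
  private final ExecutorService executor = Executors.newSingleThreadExecutor();
  private final StringBuilder backend = new StringBuilder(); // stand-in for ADL
  private Future<?> flushTask;
  private byte[] buf = new byte[4]; // tiny buffer to force async flushes
  private int offset = 0;

  @Override public void write(int b) throws IOException {
    if (offset == buf.length) flushAsync();
    buf[offset++] = (byte) b;
  }

  private void flushAsync() throws IOException {
    if (offset > 0) {
      waitForOutstandingFlush(); // at most one flush task in flight
      final byte[] data = buf;
      final int len = offset;
      flushTask = executor.submit(() -> backend.append(new String(data, 0, len)));
      buf = new byte[4]; // fresh buffer for subsequent writes
      offset = 0;
    }
  }

  private void waitForOutstandingFlush() throws IOException {
    if (flushTask != null) {
      try { flushTask.get(); } catch (Exception e) { throw new IOException(e); }
    }
  }

  @Override public void close() throws IOException {
    flushAsync();              // flush whatever is left in the buffer
    waitForOutstandingFlush(); // surface any async error before returning
    executor.shutdown();       // the executor dies with the stream
  }

  String contents() { return backend.toString(); }
}
```

Because {{close()}} drains the outstanding task before shutting down, a subsequent append through a new stream cannot race ahead of the previous stream's final flush.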


> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: Create_Read_Hadoop_Adl_Store_Semantics.pdf, 
> HADOOP-12666-002.patch, HADOOP-12666-003.patch, HADOOP-12666-004.patch, 
> HADOOP-12666-005.patch, HADOOP-12666-006.patch, HADOOP-12666-007.patch, 
> HADOOP-12666-008.patch, HADOOP-12666-009.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications 

[jira] [Commented] (HADOOP-12945) Support StartTLS encryption for LDAP group names mapping

2016-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210893#comment-15210893
 ] 

Hadoop QA commented on HADOOP-12945:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 42s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 41s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 1s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.net.TestDNS |
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 

[jira] [Commented] (HADOOP-12958) PhantomReference for filesystem statistics can trigger OOM

2016-03-24 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210861#comment-15210861
 ] 

Colin Patrick McCabe commented on HADOOP-12958:
---

Hi [~jlowe],

Sorry if this is a silly question, but do you think Tez could call 
{{System#gc}} several times when reusing containers?  Would that help this 
situation?

> PhantomReference for filesystem statistics can trigger OOM
> --
>
> Key: HADOOP-12958
> URL: https://issues.apache.org/jira/browse/HADOOP-12958
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.6.4
>Reporter: Jason Lowe
> Fix For: 2.7.3, 2.6.5
>
>
> I saw an OOM that appears to have been caused by the phantom references 
> introduced for file system statistics management.  I'll post details in a 
> followup comment.





[jira] [Commented] (HADOOP-12958) PhantomReference for filesystem statistics can trigger OOM

2016-03-24 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210852#comment-15210852
 ] 

Sangjin Lee commented on HADOOP-12958:
--

Thanks for reporting this, [~jlowe]. This is rather surprising, and we certainly 
haven't encountered it (we've been running this code for a while now).

The javadoc for how a phantom reference behaves says the following:
{quote}
Unlike soft and weak references, phantom references are not automatically 
cleared by the garbage collector as they are enqueued. An object that is 
reachable via phantom references will remain so until all such references are 
cleared or themselves become unreachable.
{quote}

In our case, the phantom reference itself becomes unreachable as a result of 
the clean-up. Thus, there should be only one enqueuing/dequeuing, and at the 
end of the clean-up the reference and the referent should be claimable.

The javadoc says phantom references are "not automatically" cleared by the 
garbage collector, but when I ran smaller test scenarios on several JVMs, the 
full GC itself seemed to clear out the phantom reference before the dequeuing 
ran (I'd be happy to post the test code if that would help).

Could you please try the following? Would it be possible to reproduce this with 
more tracing? The flags are
{noformat}
-verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+PrintReferenceGC
{noformat}
The last one will report how the phantom references (and others) are handled by 
each GC. Also, the JVM/OS version with which you saw this would be helpful. 
Thanks!
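For reference, a small self-contained illustration of the enqueue-then-drop pattern under discussion (this is not the Hadoop code itself; note that {{System.gc()}} is only a request, hence the timeout fallback):

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.util.HashSet;
import java.util.Set;

// Illustration: a phantom reference is enqueued after its referent becomes
// unreachable; the cleanup then drops the reference itself, so both the
// reference and the referent become claimable.
public class PhantomDemo {
  public static void main(String[] args) throws InterruptedException {
    ReferenceQueue<Object> queue = new ReferenceQueue<>();
    Set<PhantomReference<Object>> refs = new HashSet<>(); // keeps refs reachable

    Object referent = new Object();
    refs.add(new PhantomReference<>(referent, queue));

    referent = null; // drop the only strong reference to the referent
    System.gc();     // request a collection so the reference can be enqueued

    Reference<?> r = queue.remove(10_000); // wait up to 10s for enqueuing
    if (r != null) {
      refs.remove(r); // cleanup: the reference itself becomes unreachable
      System.out.println("phantom reference dequeued and dropped");
    } else {
      System.out.println("reference was not enqueued within the timeout");
    }
  }
}
```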

> PhantomReference for filesystem statistics can trigger OOM
> --
>
> Key: HADOOP-12958
> URL: https://issues.apache.org/jira/browse/HADOOP-12958
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.6.4
>Reporter: Jason Lowe
> Fix For: 2.7.3, 2.6.5
>
>
> I saw an OOM that appears to have been caused by the phantom references 
> introduced for file system statistics management.  I'll post details in a 
> followup comment.





[jira] [Commented] (HADOOP-8145) Automate testing of LdapGroupsMapping against ApacheDS

2016-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210842#comment-15210842
 ] 

Hadoop QA commented on HADOOP-8145:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 35s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 28s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 45s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 2s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.net.TestDNS |
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Failed junit tests | hadoop.net.TestDNS |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795246/HADOOP-8145.004.patch 
|
| JIRA Issue | HADOOP-8145 |
| Optional Tests |  asflicense  

[jira] [Updated] (HADOOP-12082) Support multiple authentication schemes via AuthenticationFilter

2016-03-24 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HADOOP-12082:
--
Attachment: HADOOP-12082.patch

Thanks for the review, [~benoyantony].

I have addressed all the review comments. The latest patch also follows the 
naming convention (I hope). Please take a look and let me know your feedback. 

> Support multiple authentication schemes via AuthenticationFilter
> 
>
> Key: HADOOP-12082
> URL: https://issues.apache.org/jira/browse/HADOOP-12082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
> Attachments: HADOOP-12082.patch, hadoop-ldap-auth-v2.patch, 
> hadoop-ldap-auth-v3.patch, hadoop-ldap-auth-v4.patch, 
> hadoop-ldap-auth-v5.patch, hadoop-ldap-auth-v6.patch, hadoop-ldap.patch, 
> multi-scheme-auth-support-poc.patch
>
>
> The requirement is to support an LDAP-based authentication scheme via the 
> Hadoop AuthenticationFilter. HADOOP-9054 added support for plugging in a 
> custom authentication scheme (in addition to Kerberos) via the 
> AltKerberosAuthenticationHandler class. But it selects the authentication 
> mechanism based on the User-Agent HTTP header, which does not conform to 
> HTTP protocol semantics.
> As per [RFC-2616|http://www.w3.org/Protocols/rfc2616/rfc2616.html]
> - HTTP protocol provides a simple challenge-response authentication mechanism 
> that can be used by a server to challenge a client request and by a client to 
> provide the necessary authentication information. 
> - This mechanism is initiated by the server sending a 401 (Unauthorized) 
> response with a ‘WWW-Authenticate’ header, which includes at least one 
> challenge indicating the authentication scheme(s) and parameters applicable 
> to the Request-URI. 
> - In case the server supports multiple authentication schemes, it may return 
> multiple challenges with a 401 (Unauthorized) response, and each challenge 
> may use a different auth-scheme. 
> - A user agent MUST choose to use the strongest auth-scheme it understands 
> and request credentials from the user based upon that challenge.
> The existing Hadoop authentication filter implementation supports Kerberos 
> authentication scheme and uses ‘Negotiate’ as the challenge as part of 
> ‘WWW-Authenticate’ response header. As per the following documentation, 
> ‘Negotiate’ challenge scheme is only applicable to Kerberos (and Windows 
> NTLM) authentication schemes.
> [SPNEGO-based Kerberos and NTLM HTTP 
> Authentication|http://tools.ietf.org/html/rfc4559]
> [Understanding HTTP 
> Authentication|https://msdn.microsoft.com/en-us/library/ms789031%28v=vs.110%29.aspx]
> On the other hand for LDAP authentication, typically ‘Basic’ authentication 
> scheme is used (Note TLS is mandatory with Basic authentication scheme).
> http://httpd.apache.org/docs/trunk/mod/mod_authnz_ldap.html
> Hence for this feature, the idea would be to provide a custom implementation 
> of Hadoop AuthenticationHandler and Authenticator interfaces which would 
> support both schemes - Kerberos (via Negotiate auth challenge) and LDAP (via 
> Basic auth challenge). During the authentication phase, it would send both 
> the challenges and let client pick the appropriate one. If client responds 
> with an ‘Authorization’ header tagged with ‘Negotiate’ - it will use Kerberos 
> authentication. If client responds with an ‘Authorization’ header tagged with 
> ‘Basic’ - it will use LDAP authentication.
> Note - some HTTP clients (e.g. curl or the Apache HttpClient Java library) 
> need to be configured to use one scheme over the other, e.g.
> - the curl tool supports either Kerberos (via the --negotiate flag) or 
> username/password based authentication (via the --basic and -u flags). 
> - Apache HttpClient library can be configured to use specific authentication 
> scheme.
> http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html
> Typically web browsers automatically choose an authentication scheme based on 
> a notion of “strength” of security. e.g. take a look at the [design of Chrome 
> browser for HTTP 
> authentication|https://www.chromium.org/developers/design-documents/http-authentication]
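
The dispatch the description proposes - Kerberos via a Negotiate challenge, 
LDAP via a Basic challenge, with the client picking one - boils down to a 
scheme selector on the incoming Authorization header. A minimal sketch (class 
and method names here are illustrative only, not part of the actual patch):

```java
// Illustrative sketch of the proposed multi-scheme dispatch: inspect the
// Authorization header and decide whether the request carries a SPNEGO
// token (Kerberos) or Basic credentials (LDAP). With no header, the server
// would send a 401 carrying both WWW-Authenticate challenges.
public final class AuthSchemeDispatcher {

    public enum Scheme { NEGOTIATE, BASIC, NONE }

    // Decide which handler applies, based on the auth-scheme prefix.
    public static Scheme choose(String authorizationHeader) {
        if (authorizationHeader == null) {
            return Scheme.NONE;      // no credentials yet: challenge the client
        }
        String header = authorizationHeader.trim();
        if (header.regionMatches(true, 0, "Negotiate ", 0, 10)) {
            return Scheme.NEGOTIATE; // SPNEGO/Kerberos token follows
        }
        if (header.regionMatches(true, 0, "Basic ", 0, 6)) {
            return Scheme.BASIC;     // base64 user:password follows (LDAP)
        }
        return Scheme.NONE;
    }

    public static void main(String[] args) {
        System.out.println(choose("Negotiate YIIB..."));  // NEGOTIATE
        System.out.println(choose("Basic dXNlcjpwYXNz")); // BASIC
        System.out.println(choose(null));                 // NONE
    }
}
```

The scheme comparison is case-insensitive, matching the auth-scheme token 
rules of RFC 2616/7235.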



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10850) KerberosAuthenticator should not do the SPNEGO handshake

2016-03-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10850:

Target Version/s: 2.9.0  (was: 2.8.0)

> KerberosAuthenticator should not do the SPNEGO handshake
> 
>
> Key: HADOOP-10850
> URL: https://issues.apache.org/jira/browse/HADOOP-10850
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-10850.patch, testFailures.png, testorder.patch
>
>
> As mentioned in HADOOP-10453, the JDK automatically does a SPNEGO handshake 
> when opening a connection with a URL within a Kerberos login context, there 
> is no need to do the SPNEGO handshake in the {{KerberosAuthenticator}}, 
> simply extract the auth token (hadoop-auth cookie) and do the fallback if 
> necessary.





[jira] [Updated] (HADOOP-12945) Support StartTLS encryption for LDAP group names mapping

2016-03-24 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12945:
-
Attachment: HADOOP-12945.002.patch

Rev02: fixed checkstyle warning.

> Support StartTLS encryption for LDAP group names mapping
> 
>
> Key: HADOOP-12945
> URL: https://issues.apache.org/jira/browse/HADOOP-12945
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: LDAP, SSL
> Attachments: HADOOP-12945.001.patch, HADOOP-12945.002.patch
>
>
> The current LDAP group name resolution supports LDAP over SSL (LDAPS) 
> encryption. However, LDAPS is considered deprecated; a better option is the 
> LDAP StartTLS extension (RFC 2830).
> I added the StartTLS support using JNDI API, and have verified that it works 
> against my Apache Directory Service.
> To enable LDAPS, set hadoop.security.group.mapping.ldap.ssl to true. To 
> enable StartTLS, set hadoop.security.group.mapping.ldap.starttls to true. If 
> both properties are true, this implementation will choose StartTLS over 
> LDAPS, as the latter is considered deprecated.
> If StartTLS is chosen, no alternative port is necessary, whereas LDAPS 
> typically uses a different port (normally 636) than the LDAP port (normally 
> 389). By default, StartTLS performs DEFAULT host name verification, but this 
> can be changed via hadoop.security.group.mapping.ldap.starttls.hostnameverifier. 
> To disable the host name verifier, set this value to ALLOW_ALL. Other valid 
> values are: STRICT, STRICT_IE6, and DEFAULT_AND_LOCALHOST. (See 
> {{SSLHostnameVerifier.java}} for more details.)
> This patch will conflict with HADOOP-12862 (LDAP Group Mapping over SSL can 
> not specify trust store) (status: patch available) because of the code 
> proximity.
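
Putting the configuration described above together, enabling StartTLS would 
look roughly like this (property names are taken from the description; values 
are just an example):

```xml
<!-- Sketch of the LDAP group mapping encryption settings described above.
     If both ssl and starttls are true, StartTLS is chosen. -->
<property>
  <name>hadoop.security.group.mapping.ldap.starttls</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.ssl</name>
  <value>false</value>
</property>
<property>
  <!-- Valid values per the description: DEFAULT, ALLOW_ALL, STRICT,
       STRICT_IE6, DEFAULT_AND_LOCALHOST -->
  <name>hadoop.security.group.mapping.ldap.starttls.hostnameverifier</name>
  <value>DEFAULT</value>
</property>
```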





[jira] [Updated] (HADOOP-8145) Automate testing of LdapGroupsMapping against ApacheDS

2016-03-24 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-8145:

Attachment: HADOOP-8145.004.patch

Rev04: added asf license text.

> Automate testing of LdapGroupsMapping against ApacheDS
> --
>
> Key: HADOOP-8145
> URL: https://issues.apache.org/jira/browse/HADOOP-8145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security, test
>Reporter: Jonathan Natkins
>Assignee: Wei-Chiu Chuang
>  Labels: test
> Attachments: HADOOP-8145.001.patch, HADOOP-8145.002.patch, 
> HADOOP-8145.003.patch, HADOOP-8145.004.patch
>
>
> HADOOP-8078 introduced an ApacheDS system to the automated tests, and the 
> LdapGroupsMapping could benefit from automated testing against that DS 
> instance





[jira] [Updated] (HADOOP-8145) Automate testing of LdapGroupsMapping against ApacheDS

2016-03-24 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-8145:

Status: Open  (was: Patch Available)

> Automate testing of LdapGroupsMapping against ApacheDS
> --
>
> Key: HADOOP-8145
> URL: https://issues.apache.org/jira/browse/HADOOP-8145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security, test
>Reporter: Jonathan Natkins
>Assignee: Wei-Chiu Chuang
>  Labels: test
> Attachments: HADOOP-8145.001.patch, HADOOP-8145.002.patch, 
> HADOOP-8145.003.patch, HADOOP-8145.004.patch
>
>
> HADOOP-8078 introduced an ApacheDS system to the automated tests, and the 
> LdapGroupsMapping could benefit from automated testing against that DS 
> instance





[jira] [Updated] (HADOOP-8145) Automate testing of LdapGroupsMapping against ApacheDS

2016-03-24 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-8145:

Status: Patch Available  (was: Open)

> Automate testing of LdapGroupsMapping against ApacheDS
> --
>
> Key: HADOOP-8145
> URL: https://issues.apache.org/jira/browse/HADOOP-8145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security, test
>Reporter: Jonathan Natkins
>Assignee: Wei-Chiu Chuang
>  Labels: test
> Attachments: HADOOP-8145.001.patch, HADOOP-8145.002.patch, 
> HADOOP-8145.003.patch, HADOOP-8145.004.patch
>
>
> HADOOP-8078 introduced an ApacheDS system to the automated tests, and the 
> LdapGroupsMapping could benefit from automated testing against that DS 
> instance





[jira] [Updated] (HADOOP-8145) Automate testing of LdapGroupsMapping against ApacheDS

2016-03-24 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-8145:

Attachment: HADOOP-8145.003.patch

Rev03: Changed test class name to TestLdapGroupsMappingAgainstADS; fixed a few 
code style issues.

> Automate testing of LdapGroupsMapping against ApacheDS
> --
>
> Key: HADOOP-8145
> URL: https://issues.apache.org/jira/browse/HADOOP-8145
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security, test
>Reporter: Jonathan Natkins
>Assignee: Wei-Chiu Chuang
>  Labels: test
> Attachments: HADOOP-8145.001.patch, HADOOP-8145.002.patch, 
> HADOOP-8145.003.patch
>
>
> HADOOP-8078 introduced an ApacheDS system to the automated tests, and the 
> LdapGroupsMapping could benefit from automated testing against that DS 
> instance





[jira] [Commented] (HADOOP-12701) Run checkstyle on test source files

2016-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210544#comment-15210544
 ] 

Hadoop QA commented on HADOOP-12701:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 8s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 41s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 23s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 45s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 31s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 112m 49s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781478/HADOOP-12701.001.patch
 |
| JIRA Issue | HADOOP-12701 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 6f971d3cc8e0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 19b645c |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_74 

[jira] [Commented] (HADOOP-9363) AuthenticatedURL will NPE if server closes connection

2016-03-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210530#comment-15210530
 ] 

Steve Loughran commented on HADOOP-9363:


Looking at this. So the root cause is that the back end isn't sending back a 
valid response, and the client is NPEing. 

I'm looking at the code in the JDK to see how the NPE could be triggered, but 
it's not immediately obvious. I don't see any diffs between Java 7u45 and Java 
8, so I have to assume that if there is a problem, it's still there.

What about catching any RuntimeException raised at this point, rethrowing it 
as an IOE, and including the URL at fault in the message?
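
The suggested wrapping can be sketched as a small helper (the class and method 
names here are ours, not the eventual patch):

```java
import java.io.IOException;
import java.net.HttpURLConnection;

// Sketch of the suggested fix: convert any RuntimeException thrown while
// reading the response into an IOException that names the URL at fault,
// instead of letting the JDK's NPE propagate to the caller.
public final class SafeResponse {

    public static int responseCode(HttpURLConnection conn) throws IOException {
        try {
            return conn.getResponseCode();
        } catch (RuntimeException e) {
            // e.g. the NPE seen when the server closes the connection
            // without sending a valid response
            throw new IOException(
                "Error reading response from " + conn.getURL(), e);
        }
    }
}
```

The caller now gets an IOE carrying both the failing URL and the original 
exception as its cause, so the real error is no longer masked.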

> AuthenticatedURL will NPE if server closes connection
> -
>
> Key: HADOOP-9363
> URL: https://issues.apache.org/jira/browse/HADOOP-9363
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>
> An NPE occurs if the server unexpectedly closes the connection for an 
> {{AuthenticatedURL}} without sending a response.





[jira] [Resolved] (HADOOP-9645) KerberosAuthenticator NPEs on connect error

2016-03-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-9645.

  Resolution: Duplicate
Target Version/s:   (was: 2.1.0-beta)

> KerberosAuthenticator NPEs on connect error
> ---
>
> Key: HADOOP-9645
> URL: https://issues.apache.org/jira/browse/HADOOP-9645
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.5-alpha
>Reporter: Daryn Sharp
>Priority: Critical
>
> An NPE occurs if there's a Kerberos error during the initial connect. In 
> this case, the NN was using an HTTP service principal with a stale kvno. It 
> causes webhdfs to fail in a non-user-friendly manner by masking the real 
> error from the user.
> {noformat}
> java.lang.RuntimeException: java.lang.NullPointerException
> at
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1241)
> at
> sun.net.www.protocol.http.HttpURLConnection.getHeaderField(HttpURLConnection.java:2713)
> at
> java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:477)
> at
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.isNegotiate(KerberosAuthenticator.java:164)
> at
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:140)
> at
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
> at
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.openHttpUrlConnection(WebHdfsFileSystem.java:364)
> {noformat}





[jira] [Commented] (HADOOP-9645) KerberosAuthenticator NPEs on connect error

2016-03-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210534#comment-15210534
 ] 

Steve Loughran commented on HADOOP-9645:


It's a duplicate of  HADOOP-9363; HDFS-3980 is a symptom of the same problem.

> KerberosAuthenticator NPEs on connect error
> ---
>
> Key: HADOOP-9645
> URL: https://issues.apache.org/jira/browse/HADOOP-9645
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.5-alpha
>Reporter: Daryn Sharp
>Priority: Critical
>
> An NPE occurs if there's a Kerberos error during the initial connect. In 
> this case, the NN was using an HTTP service principal with a stale kvno. It 
> causes webhdfs to fail in a non-user-friendly manner by masking the real 
> error from the user.
> {noformat}
> java.lang.RuntimeException: java.lang.NullPointerException
> at
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1241)
> at
> sun.net.www.protocol.http.HttpURLConnection.getHeaderField(HttpURLConnection.java:2713)
> at
> java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:477)
> at
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.isNegotiate(KerberosAuthenticator.java:164)
> at
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:140)
> at
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
> at
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.openHttpUrlConnection(WebHdfsFileSystem.java:364)
> {noformat}





[jira] [Updated] (HADOOP-11393) Revert HADOOP_PREFIX, go back to HADOOP_HOME

2016-03-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11393:
--
Attachment: HADOOP-11393.02.patch

-02:
* rebase and update for new usages

> Revert HADOOP_PREFIX, go back to HADOOP_HOME
> 
>
> Key: HADOOP-11393
> URL: https://issues.apache.org/jira/browse/HADOOP-11393
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11393-00.patch, HADOOP-11393.01.patch, 
> HADOOP-11393.02.patch
>
>
> Today, Windows and parts of the Hadoop source code still use HADOOP_HOME.  
> The switch to HADOOP_PREFIX back in 0.21 or so didn't really accomplish what 
> it was intended to do and only helped confuse the situation.
> _HOME is a much more standard suffix and is, in fact, used for everything in 
> Hadoop except for the top level project home.  I think it would be beneficial 
> to use HADOOP_HOME in the shell code as the Official(tm) variable, still 
> honoring HADOOP_PREFIX if it is set.





[jira] [Commented] (HADOOP-11393) Revert HADOOP_PREFIX, go back to HADOOP_HOME

2016-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210442#comment-15210442
 ] 

Hadoop QA commented on HADOOP-11393:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 9s {color} 
| {color:red} HADOOP-11393 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12731641/HADOOP-11393.01.patch 
|
| JIRA Issue | HADOOP-11393 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8914/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Revert HADOOP_PREFIX, go back to HADOOP_HOME
> 
>
> Key: HADOOP-11393
> URL: https://issues.apache.org/jira/browse/HADOOP-11393
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11393-00.patch, HADOOP-11393.01.patch
>
>
> Today, Windows and parts of the Hadoop source code still use HADOOP_HOME.  
> The switch to HADOOP_PREFIX back in 0.21 or so didn't really accomplish what 
> it was intended to do and only helped confuse the situation.
> _HOME is a much more standard suffix and is, in fact, used for everything in 
> Hadoop except for the top level project home.  I think it would be beneficial 
> to use HADOOP_HOME in the shell code as the Official(tm) variable, still 
> honoring HADOOP_PREFIX if it is set.





[jira] [Updated] (HADOOP-11393) Revert HADOOP_PREFIX, go back to HADOOP_HOME

2016-03-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11393:
--
Labels:   (was: BB2015-05-TBR)

> Revert HADOOP_PREFIX, go back to HADOOP_HOME
> 
>
> Key: HADOOP-11393
> URL: https://issues.apache.org/jira/browse/HADOOP-11393
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11393-00.patch, HADOOP-11393.01.patch
>
>
> Today, Windows and parts of the Hadoop source code still use HADOOP_HOME.  
> The switch to HADOOP_PREFIX back in 0.21 or so didn't really accomplish what 
> it was intended to do and only helped confuse the situation.
> _HOME is a much more standard suffix and is, in fact, used for everything in 
> Hadoop except for the top level project home.  I think it would be beneficial 
> to use HADOOP_HOME in the shell code as the Official(tm) variable, still 
> honoring HADOOP_PREFIX if it is set.





[jira] [Commented] (HADOOP-12947) Update documentation Hadoop Groups Mapping to add static group mapping, negative cache

2016-03-24 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210340#comment-15210340
 ] 

Wei-Chiu Chuang commented on HADOOP-12947:
--

Thanks a lot for the quick reviewing and committing!

> Update documentation Hadoop Groups Mapping to add static group mapping, 
> negative cache
> --
>
> Key: HADOOP-12947
> URL: https://issues.apache.org/jira/browse/HADOOP-12947
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12947.001.patch
>
>
> After _Hadoop Group Mapping_ was written, I subsequently found a number of 
> other things that should be added/updated: 
> # static group mapping, statically map users to group names (HADOOP-10142)
> # negative cache, to avoid spamming NameNode with invalid user names 
> (HADOOP-10755)
> # update query pattern for LDAP groups mapping if posix semantics is 
> supported. (HADOOP-9477)





[jira] [Commented] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-03-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210331#comment-15210331
 ] 

Steve Loughran commented on HADOOP-12774:
-

probably

> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: John Zhuge
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> home directory. This is wrong, as it doesn't work in a YARN app where the 
> identity is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs.





[jira] [Commented] (HADOOP-12642) Update documentation to cover fs.s3.buffer.dir enhancements

2016-03-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210316#comment-15210316
 ] 

Steve Loughran commented on HADOOP-12642:
-

We'll need a patch for this from someone

> Update documentation to cover fs.s3.buffer.dir enhancements
> ---
>
> Key: HADOOP-12642
> URL: https://issues.apache.org/jira/browse/HADOOP-12642
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.6.0
>Reporter: Jason Archip
>Priority: Minor
>
> Could you please update the documentation at 
> https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html 
> to include the options for the updated fs.s3.buffer.dir
> Please let me know if there is a different place to put my request
> Thanks,
> Jason Archip





[jira] [Commented] (HADOOP-12924) Add default coder key for creating raw coders

2016-03-24 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210167#comment-15210167
 ] 

Rui Li commented on HADOOP-12924:
-

Based on my offline discussion with Kai, we have the following proposal:
# We can consider HDFS-RAID and the new Java coder as different codecs, so 
that they'll map to different EC policies. Within each codec, we can have 
different implementations of raw coders which are interoperable. For the 
HDFS-RAID codec, the legacy coder is the only implementation. For the "other 
RS" codec (we can figure out a better name, of course), we have the new Java 
coder and will probably have the ISA-L coder soon.
# Each codec will have a dedicated raw coder configuration key, so that users 
can conveniently switch between different implementations. Switching the raw 
coder doesn't require porting/re-encoding data, as long as the raw coders 
belong to the same codec.

[~drankye] please add if I missed anything.
[~zhz] what's your opinion about this?
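
For illustration, point 2 of the proposal could look something like the 
following configuration. These property names and factory class names are 
hypothetical, invented here only to show the shape of a per-codec raw coder 
key; they are not committed Hadoop configuration:

```xml
<!-- Hypothetical per-codec raw coder keys (names invented for
     illustration). Switching the value within one codec would not
     require re-encoding data, per the proposal above. -->
<property>
  <name>io.erasurecode.codec.hdfs-raid.rawcoder</name>
  <value>org.example.LegacyRSRawCoderFactory</value>
</property>
<property>
  <name>io.erasurecode.codec.rs.rawcoder</name>
  <value>org.example.NewJavaRSRawCoderFactory</value>
</property>
```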

> Add default coder key for creating raw coders
> -
>
> Key: HADOOP-12924
> URL: https://issues.apache.org/jira/browse/HADOOP-12924
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Minor
> Attachments: HADOOP-12924.1.patch
>
>
> As suggested 
> [here|https://issues.apache.org/jira/browse/HADOOP-12826?focusedCommentId=15194402=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15194402].





[jira] [Commented] (HADOOP-12960) Allow Enabling of S3 Path Style Addressing for Accessing the s3a Endpoint

2016-03-24 Thread Stephen Montgomery (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210164#comment-15210164
 ] 

Stephen Montgomery commented on HADOOP-12960:
-

I attempted to re-open HDFS-8727 but got no info on how to progress the 
ticket, i.e. whether to re-open it or just create a new ticket (i.e. this 
one). I see a lot of the s3a fixes are going into the 2.8 branch, so should I 
patch against that branch or the 2.7.x one where the behaviour is observed?

Thanks,
Stephen

> Allow Enabling of S3 Path Style Addressing for Accessing the s3a Endpoint
> -
>
> Key: HADOOP-12960
> URL: https://issues.apache.org/jira/browse/HADOOP-12960
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Stephen Montgomery
>
> Reopening of HDFS-8727.
> There is no ability to specify path style access for the s3 endpoint. There 
> are numerous non-Amazon storage implementations that support the Amazon APIs 
> but only support path style access, such as Cleversafe and Ceph. 
> Additionally, in many environments it is difficult to configure DNS 
> correctly to get virtual host style addressing to work. For more information 
> on S3 path style access behaviour see 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html.





[jira] [Created] (HADOOP-12960) Allow Enabling of S3 Path Style Addressing for Accessing the s3a Endpoint

2016-03-24 Thread Stephen Montgomery (JIRA)
Stephen Montgomery created HADOOP-12960:
---

 Summary: Allow Enabling of S3 Path Style Addressing for Accessing 
the s3a Endpoint
 Key: HADOOP-12960
 URL: https://issues.apache.org/jira/browse/HADOOP-12960
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.7.2
Reporter: Stephen Montgomery


Reopening of HDFS-8727.

There is no ability to specify path style access for the s3 endpoint. There 
are numerous non-Amazon storage implementations that support the Amazon APIs 
but only support path style access, such as Cleversafe and Ceph. Additionally, 
in many environments it is difficult to configure DNS correctly to get virtual 
host style addressing to work. For more information on S3 path style access 
behaviour see 
http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html.
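
The difference between the two addressing styles can be shown with a tiny 
helper (the helper name is ours; real S3 clients build these URLs internally):

```java
// Illustrates virtual-hosted-style vs path-style S3 object addressing.
// Path style keeps the bucket in the URL path, so it works against
// endpoints without per-bucket wildcard DNS (e.g. Ceph, Cleversafe).
public final class S3Addressing {

    public static String objectUrl(String endpointHost, String bucket,
                                   String key, boolean pathStyle) {
        if (pathStyle) {
            // bucket in the path: no DNS entry per bucket required
            return "https://" + endpointHost + "/" + bucket + "/" + key;
        }
        // bucket as a DNS label: requires wildcard DNS on the endpoint
        return "https://" + bucket + "." + endpointHost + "/" + key;
    }

    public static void main(String[] args) {
        System.out.println(objectUrl("s3.example.com", "mybucket", "data/p0", true));
        // https://s3.example.com/mybucket/data/p0
        System.out.println(objectUrl("s3.example.com", "mybucket", "data/p0", false));
        // https://mybucket.s3.example.com/data/p0
    }
}
```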





[jira] [Commented] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15210155#comment-15210155
 ] 

Hadoop QA commented on HADOOP-12911:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
32s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 17s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 9m 14s {color} 
| {color:red} root-jdk1.8.0_74 with JDK v1.8.0_74 generated 1 new + 738 
unchanged - 0 fixed = 739 total (was 738) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m 7s {color} 
| {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 1 new + 734 
unchanged - 0 fixed = 735 total (was 734) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} root: patch generated 0 new + 88 unchanged - 14 
fixed = 88 total (was 102) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 41s 
{color} | {color:red} hadoop-common-project/hadoop-minikdc generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 26s 
{color} | {color:green} the patch 

[jira] [Commented] (HADOOP-12955) checknative failed when checking ISA-L library

2016-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15210091#comment-15210091
 ] 

Hadoop QA commented on HADOOP-12955:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
56s {color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
34s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
58s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 55m 44s 
{color} | {color:green} root-jdk1.8.0_74 with JDK v1.8.0_74 generated 0 new + 
11 unchanged - 10 fixed = 11 total (was 21) {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 58s 
{color} | {color:green} root in the patch passed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
20s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 68m 5s 
{color} | {color:green} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 0 new + 
21 unchanged - 10 fixed = 21 total (was 31) {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 20s 
{color} | {color:green} root in the patch passed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 15s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 25s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
40s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 48s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.ipc.TestRPCWaitForProxy |
|   | hadoop.fs.shell.find.TestIname |
|   | hadoop.fs.shell.find.TestPrint0 |
|   | hadoop.fs.shell.find.TestPrint |
|   | hadoop.fs.shell.find.TestName |
| JDK v1.7.0_95 Failed junit tests | hadoop.ipc.TestRPCWaitForProxy |
|   | hadoop.fs.shell.find.TestIname |
|   | hadoop.security.token.delegation.TestZKDelegationTokenSecretManager |
|   | hadoop.fs.shell.find.TestPrint0 |
|   | hadoop.fs.shell.find.TestPrint |
|   | hadoop.fs.shell.find.TestName |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795177/HADOOP-12955-v2.patch 
|
| JIRA Issue | HADOOP-12955 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux cb98028f283e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 

[jira] [Commented] (HADOOP-12959) Add additional web site for ISA-L library

2016-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15210059#comment-15210059
 ] 

Hadoop QA commented on HADOOP-12959:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 42s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12795174/HADOOP-12959.1.patch |
| JIRA Issue | HADOOP-12959 |
| Optional Tests |  asflicense  |
| uname | Linux e3dce27ad4b7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 19b645c |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8912/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Add additional web site for ISA-L library
> -
>
> Key: HADOOP-12959
> URL: https://issues.apache.org/jira/browse/HADOOP-12959
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HADOOP-12959.1.patch
>
>
> A GitHub repository has been created for ISA-L. This sub-task will add the 
> new URL to BUILDING.txt.





[jira] [Commented] (HADOOP-12924) Add default coder key for creating raw coders

2016-03-24 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15209989#comment-15209989
 ] 

Kai Zheng commented on HADOOP-12924:


bq. I think the bottomline is that the file metadata (stored in the header) 
should present HDFS-RAID and the new Java RS coder differently.
Yeah, agree.
bq. The concept of Reed-Solomon / RS is actually very broad. Strictly speaking 
XOR is also a special case of RS.
I appreciate this understanding; it's true. Going this way avoids the awkward 
situation where missing data becomes unrecoverable because the underlying 
concrete coder changed while still being labelled with the same so-called 
Reed-Solomon name. I have had an offline discussion with [~lirui] about the 
changes required to fill the gap raised here. Hopefully we can move on!
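The "default coder key" idea under discussion can be sketched as a two-level 
configuration lookup: prefer a codec-specific key, and fall back to a single 
default key when none is set. The key names and coder names below are 
illustrative placeholders, not Hadoop's actual configuration keys or classes.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of default-coder-key fallback (names are hypothetical).
public class CoderRegistry {
    // Fallback key consulted when no codec-specific coder is configured.
    public static final String DEFAULT_CODER_KEY =
        "io.erasurecode.codec.default.rawcoder";

    private final Map<String, String> conf = new HashMap<>();

    public void set(String key, String value) {
        conf.put(key, value);
    }

    // Resolve the raw coder for a codec: codec-specific key first,
    // then the default key.
    public String resolveCoder(String codecName) {
        String specific =
            conf.get("io.erasurecode.codec." + codecName + ".rawcoder");
        return specific != null ? specific : conf.get(DEFAULT_CODER_KEY);
    }
}
```

The point of the single default key is that deployments can pin one fallback 
coder once, while still overriding individual codecs explicitly.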

> Add default coder key for creating raw coders
> -
>
> Key: HADOOP-12924
> URL: https://issues.apache.org/jira/browse/HADOOP-12924
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Minor
> Attachments: HADOOP-12924.1.patch
>
>
> As suggested 
> [here|https://issues.apache.org/jira/browse/HADOOP-12826?focusedCommentId=15194402&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15194402].





[jira] [Commented] (HADOOP-12701) Run checkstyle on test source files

2016-03-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15209988#comment-15209988
 ] 

Hadoop QA commented on HADOOP-12701:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 2s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
21s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 46s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 13m 
26s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
47s {color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
21s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 13m 
11s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 58s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 10s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 32s 
{color} | {color:red} Patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 170m 39s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.ipc.TestRPCWaitForProxy |
|   | hadoop.fs.shell.find.TestIname |
|   | hadoop.fs.shell.find.TestPrint0 |
|   | hadoop.fs.shell.find.TestPrint |
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.util.TestNativeLibraryChecker |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781478/HADOOP-12701.001.patch
 |
| JIRA Issue | HADOOP-12701 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 984c60f1dd32 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 

[jira] [Updated] (HADOOP-12955) checknative failed when checking ISA-L library

2016-03-24 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12955:
---
Attachment: HADOOP-12955-v2.patch

Updated the patch according to review comments.

> checknative failed when checking ISA-L library
> --
>
> Key: HADOOP-12955
> URL: https://issues.apache.org/jira/browse/HADOOP-12955
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12955-v1.patch, HADOOP-12955-v2.patch
>
>
> Ref. the comment 
> [here|https://issues.apache.org/jira/browse/HADOOP-11540?focusedCommentId=15207619&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15207619].
>  
> When running hadoop checknative, it also failed, with something like the 
> following in the log:
> {noformat}
> Stack: [0x7f2b9d405000,0x7f2b9d506000],  sp=0x7f2b9d504748,  free 
> space=1021k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> V  [libjvm.so+0xa90c90]  UTF8::unicode_length(char const*)+0x0
> V  [libjvm.so+0x6ddfc3]  jni_NewStringUTF+0xc3
> j  
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
> j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
> v  ~StubRoutines::call_stub
> V  [libjvm.so+0x68c616]  JavaCalls::call_helper(JavaValue*, methodHandle*, 
> JavaCallArguments*, Thread*)+0x1056
> V  [libjvm.so+0x6cdc32]  jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, 
> JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)+0x362
> V  [libjvm.so+0x6ea63a]  jni_CallStaticVoidMethod+0x17a
> C  [libjli.so+0x7bcc]  JavaMain+0x80c
> C  [libpthread.so.0+0x8182]  start_thread+0xc2
> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
> j  
> org.apache.hadoop.io.erasurecode.ErasureCodeNative.getLibraryName()Ljava/lang/String;+0
> j  org.apache.hadoop.util.NativeLibraryChecker.main([Ljava/lang/String;)V+212
> v  ~StubRoutines::call_stub
> {noformat}





[jira] [Updated] (HADOOP-12959) Add additional web site for ISA-L library

2016-03-24 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HADOOP-12959:
---
Status: Patch Available  (was: Open)

> Add additional web site for ISA-L library
> -
>
> Key: HADOOP-12959
> URL: https://issues.apache.org/jira/browse/HADOOP-12959
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HADOOP-12959.1.patch
>
>
> A GitHub repository has been created for ISA-L. This sub-task will add the 
> new URL to BUILDING.txt.





[jira] [Updated] (HADOOP-12959) Add additional web site for ISA-L library

2016-03-24 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HADOOP-12959:
---
Attachment: HADOOP-12959.1.patch

> Add additional web site for ISA-L library
> -
>
> Key: HADOOP-12959
> URL: https://issues.apache.org/jira/browse/HADOOP-12959
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HADOOP-12959.1.patch
>
>
> A GitHub repository has been created for ISA-L. This sub-task will add the 
> new URL to BUILDING.txt.





[jira] [Created] (HADOOP-12959) Add additional web site for ISA-L library

2016-03-24 Thread Li Bo (JIRA)
Li Bo created HADOOP-12959:
--

 Summary: Add additional web site for ISA-L library
 Key: HADOOP-12959
 URL: https://issues.apache.org/jira/browse/HADOOP-12959
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Li Bo


A GitHub repository has been created for ISA-L. This sub-task will add the new 
URL to BUILDING.txt.





[jira] [Assigned] (HADOOP-12959) Add additional web site for ISA-L library

2016-03-24 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo reassigned HADOOP-12959:
--

Assignee: Li Bo

> Add additional web site for ISA-L library
> -
>
> Key: HADOOP-12959
> URL: https://issues.apache.org/jira/browse/HADOOP-12959
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
>
> A GitHub repository has been created for ISA-L. This sub-task will add the 
> new URL to BUILDING.txt.


