[jira] [Commented] (HADOOP-12780) During atomic rename handle crash when one directory has been renamed but not file under it.

2016-02-09 Thread madhumita chakraborty (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15139632#comment-15139632
 ] 

madhumita chakraborty commented on HADOOP-12780:


[~cnauroth] Could you please take a look at the patch?

> During atomic rename handle crash when one directory has been renamed but not 
> file under it.
> 
>
> Key: HADOOP-12780
> URL: https://issues.apache.org/jira/browse/HADOOP-12780
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12780.001.patch
>
>
> During atomic folder rename, as preparation we record the proposed change 
> to a metadata file (-renamePending.json).
> Say we are renaming parent/folderToRename to parent/renamedFolder.
> folderToRename has an inner folder innerFolder, and innerFolder has a file 
> innerFile.
> Content of the -renamePending.json file will be:
> {  "OldFolderName": "parent/folderToRename",
>  "NewFolderName": "parent/renamedFolder", 
>  "FileList": [ "innerFolder", "innerFolder/innerFile" ]
>  }
> We first rename all files within the source directory, and then rename the 
> source directory itself as the last step.
> The steps are:
> 1. First we rename innerFolder,
> 2. then we rename innerFolder/innerFile,
> 3. then we rename the source directory folderToRename.
> Say the process crashes after step 1.
> So innerFolder has been renamed.
> Note that Azure storage does not natively support folders. So when a directory 
> is created by the mkdir command, we create an empty placeholder blob with 
> metadata for the directory.
> So after step 1, the empty blob corresponding to the directory innerFolder 
> has been renamed.
> When the process comes back up, in the redo path it will go through the 
> -renamePending.json file and try to redo the renames.
> For each file in the file list of the renamePending file it checks whether the 
> source file exists; if the source file exists, it renames the file. When it 
> gets to innerFolder, it calls filesystem.exists(innerFolder). Now 
> filesystem.exists(innerFolder) will return true, because a file under that 
> folder exists even though the empty blob corresponding to that folder does 
> not exist. So it will try to rename this folder, and as the empty blob has 
> already been deleted, this fails with the exception "source blob does not 
> exist".
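The failure mode above can be sketched in a few lines. This is a hypothetical model (class and method names invented, not the actual WASB code): it shows why filesystem.exists() is the wrong probe for the placeholder blob after a crash, and a blob-level check the redo logic could use instead.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the redo pass described above. Azure has no native
// folders: a directory is an empty placeholder blob, so after a crash the
// placeholder may be gone while blobs under it remain.
public class RenameRedoSketch {
    static Set<String> blobs = new HashSet<>(Arrays.asList(
        // placeholder blob for innerFolder was already renamed before the crash:
        "parent/renamedFolder/innerFolder",
        "parent/folderToRename/innerFolder/innerFile"));

    // Models filesystem.exists(): true if the blob itself exists OR any blob
    // lies under the path; this is what makes the naive redo retry the rename.
    static boolean fsExists(String path) {
        if (blobs.contains(path)) return true;
        for (String b : blobs) {
            if (b.startsWith(path + "/")) return true;
        }
        return false;
    }

    // A stricter redo check: only redo the rename when the source blob itself
    // exists; if only descendants exist, the placeholder was already renamed.
    static boolean shouldRedoRename(String src) {
        return blobs.contains(src);
    }

    public static void main(String[] args) {
        String src = "parent/folderToRename/innerFolder";
        System.out.println(fsExists(src));          // true: naive redo retries and fails
        System.out.println(shouldRedoRename(src));  // false: stricter check skips it
    }
}
```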



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12788) OpensslAesCtrCryptoCodec should log what random number generator is used.

2016-02-09 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12788:


 Summary: OpensslAesCtrCryptoCodec should log what random number 
generator is used.
 Key: HADOOP-12788
 URL: https://issues.apache.org/jira/browse/HADOOP-12788
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


{{OpensslAesCtrCryptoCodec}} uses a random number generator, for example 
{{OsSecureRandom}}, {{OpensslSecureRandom}} or {{SecureRandom}}, but it's not 
clear which one will be loaded at runtime.

It would help debugging if we printed a debug message stating which one 
is loaded.
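A minimal sketch of the proposed debug line (names assumed, not the actual codec code): construct whichever Random implementation the configuration selects, then log the concrete class that was actually loaded.

```java
import java.security.SecureRandom;
import java.util.Random;

// Hypothetical sketch: report which Random implementation was constructed.
public class RandomLoggingSketch {

    // In the real codec the choice is driven by configuration; the JDK
    // SecureRandom stands in for OsSecureRandom/OpensslSecureRandom here.
    static Random chooseRandom() {
        return new SecureRandom();
    }

    public static void main(String[] args) {
        Random r = chooseRandom();
        // The proposed debug message: name the concrete class that was loaded.
        System.out.println("Using random number generator: "
            + r.getClass().getCanonicalName());
        // prints: Using random number generator: java.security.SecureRandom
    }
}
```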





[jira] [Created] (HADOOP-12786) "hadoop key" command usage is not documented

2016-02-09 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-12786:
--

 Summary: "hadoop key" command usage is not documented
 Key: HADOOP-12786
 URL: https://issues.apache.org/jira/browse/HADOOP-12786
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Akira AJISAKA


While reviewing HDFS-9784, I found that the "hadoop key" command usage is not 
documented.
In addition, we should document that uppercase is not allowed in key names.





[jira] [Commented] (HADOOP-9969) TGT expiration doesn't trigger Kerberos relogin

2016-02-09 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15139596#comment-15139596
 ] 

Greg Senia commented on HADOOP-9969:


[~daryn] I have reached out to the IBM JDK Security team to find out whether 
IBM is doing it correctly. I patched my HDP build from HWX and it seems to 
solve the issues, but I am still waiting to hear from the IBM JDK folks. Any 
other info on plans to integrate this into the core Hadoop build would be great.

thanks

> TGT expiration doesn't trigger Kerberos relogin
> ---
>
> Key: HADOOP-9969
> URL: https://issues.apache.org/jira/browse/HADOOP-9969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 2.1.0-beta, 2.5.0, 2.5.2, 2.6.0, 2.6.1, 2.8.0, 2.7.1, 
> 2.6.2, 2.6.3
> Environment: IBM JDK7
>Reporter: Yu Gao
> Attachments: HADOOP-9969.patch, JobTracker.log
>
>
> In HADOOP-9698 & HADOOP-9850, the RPC client and SASL client were changed to 
> respect the auth method advertised by the server, instead of blindly 
> attempting the one configured at the client side. However, when the TGT has 
> expired, an exception is thrown from SaslRpcClient#createSaslClient(SaslAuth 
> authType), and at that point authMethod still holds its initial value, 
> SIMPLE, and never gets a chance to be updated with the method 
> requested by the server, so Kerberos relogin will not happen.
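The sequencing problem can be modeled in a small sketch (all names invented; this is not the real SaslRpcClient): the exception escapes before authMethod is updated, so a relogin condition keyed on KERBEROS never fires.

```java
// Hypothetical model of the ordering bug described above.
public class ReloginSketch {
    enum AuthMethod { SIMPLE, KERBEROS }

    // Initialized to SIMPLE, just like the field in the report.
    static AuthMethod authMethod = AuthMethod.SIMPLE;

    // Stand-in for SaslRpcClient#createSaslClient with an expired TGT:
    // it throws before the advertised method can be recorded.
    static void createSaslClient(AuthMethod advertised) throws Exception {
        throw new Exception("No valid credentials (TGT expired)");
        // authMethod = advertised;  // never reached, so SIMPLE sticks
    }

    // The relogin path only triggers when authMethod is KERBEROS...
    static boolean shouldRelogin() {
        return authMethod == AuthMethod.KERBEROS;
    }

    public static void main(String[] args) {
        try {
            createSaslClient(AuthMethod.KERBEROS);
        } catch (Exception e) {
            // ...but authMethod is still SIMPLE here, so no relogin happens.
            System.out.println("relogin=" + shouldRelogin());
            // prints: relogin=false
        }
    }
}
```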





[jira] [Updated] (HADOOP-12787) KMS SPNEGO sequence does not work with WEBHDFS

2016-02-09 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12787:

Component/s: security
 kms

> KMS SPNEGO sequence does not work with WEBHDFS
> --
>
> Key: HADOOP-12787
> URL: https://issues.apache.org/jira/browse/HADOOP-12787
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 2.6.3
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> This is a follow-up of my 
> [comments|https://issues.apache.org/jira/browse/HADOOP-12559?focusedCommentId=15059045&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15059045]
>  for HADOOP-10698.
> It blocks a delegation-token-based user (MR) using WEBHDFS from accessing the 
> KMS server for encrypted files. This might have worked in many cases before, 
> as JDK 7 aggressively does SPNEGO implicitly. However, this is not the case 
> in JDK 8, where we have seen many failures when using WEBHDFS with KMS and 
> HDFS encryption zones.
>  





[jira] [Updated] (HADOOP-12788) OpensslAesCtrCryptoCodec should log what random number generator is used.

2016-02-09 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12788:
-
Description: 
{{OpensslAesCtrCryptoCodec}} uses a random number generator, for example 
{{OsSecureRandom}}, {{OpensslSecureRandom}} or {{SecureRandom}}, but it's not 
clear which one will be loaded at runtime.

It would help debugging if we printed a debug message stating which one 
is loaded.

  was:
{{OpensslAesCtrCryptoCodec}} uses random number generator, for example, 
{{OsSecureRandom}}, {{}OpensslSecureRandom} or {{SecureRandom}} but it's not 
clear which one would be loaded at runtime.

It would help debugging if we can print a debug message that states which one 
is loaded.


> OpensslAesCtrCryptoCodec should log what random number generator is used.
> -
>
> Key: HADOOP-12788
> URL: https://issues.apache.org/jira/browse/HADOOP-12788
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
>
> {{OpensslAesCtrCryptoCodec}} uses a random number generator, for example 
> {{OsSecureRandom}}, {{OpensslSecureRandom}} or {{SecureRandom}}, but it's not 
> clear which one will be loaded at runtime.
> It would help debugging if we printed a debug message stating which one 
> is loaded.





[jira] [Updated] (HADOOP-12783) TestWebDelegationToken failure: login options not compatible with IBM JDK

2016-02-09 Thread Devendra Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devendra Vishwakarma updated HADOOP-12783:
--
Status: Patch Available  (was: Open)

> TestWebDelegationToken failure: login options not compatible with IBM JDK
> -
>
> Key: HADOOP-12783
> URL: https://issues.apache.org/jira/browse/HADOOP-12783
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
> Environment: IBM JDK 1.8 + s390x architecture
>Reporter: Devendra Vishwakarma
>Assignee: Devendra Vishwakarma
>  Labels: Hadoop, IBM_JAVA
> Fix For: 2.7.1
>
> Attachments: HADOOP-12783-1.patch
>
>
> When running tests with the IBM JDK, the test case in 
> /hadoop-common-/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken
>  fails due to login options that are incompatible with IBM Java.
> The login options need to be updated to account for IBM Java.
> Testcases which failed are - 
> 1. 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator
> 2. 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticatorWithDoAs
> Testcases failed with the following stack:
> javax.security.auth.login.LoginException: Bad JAAS configuration: 
> unrecognized option: isInitiator
> at 
> com.ibm.security.jgss.i18n.I18NException.throwLoginException(I18NException.java:27)
> at com.ibm.security.auth.module.Krb5LoginModule.d(Krb5LoginModule.java:541)
> at com.ibm.security.auth.module.Krb5LoginModule.a(Krb5LoginModule.java:169)
> at 
> com.ibm.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> at java.lang.reflect.Method.invoke(Method.java:507)
> at javax.security.auth.login.LoginContext.invoke(LoginContext.java:788)
> at javax.security.auth.login.LoginContext.access$000(LoginContext.java:196)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
> at java.security.AccessController.doPrivileged(AccessController.java:595)
> at 
> javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:719)
> at javax.security.auth.login.LoginContext.login(LoginContext.java:593)
> at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:710)
> at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:777)
> at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)





[jira] [Updated] (HADOOP-12783) TestWebDelegationToken failure: login options not compatible with IBM JDK

2016-02-09 Thread Devendra Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devendra Vishwakarma updated HADOOP-12783:
--
Attachment: HADOOP-12783-1.patch

Attached patch

> TestWebDelegationToken failure: login options not compatible with IBM JDK
> -
>
> Key: HADOOP-12783
> URL: https://issues.apache.org/jira/browse/HADOOP-12783
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
> Environment: IBM JDK 1.8 + s390x architecture
>Reporter: Devendra Vishwakarma
>Assignee: Devendra Vishwakarma
>  Labels: Hadoop, IBM_JAVA
> Fix For: 2.7.1
>
> Attachments: HADOOP-12783-1.patch
>
>
> When running tests with the IBM JDK, the test case in 
> /hadoop-common-/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken
>  fails due to login options that are incompatible with IBM Java.
> The login options need to be updated to account for IBM Java.
> Testcases which failed are - 
> 1. 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator
> 2. 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticatorWithDoAs
> Testcases failed with the following stack:
> javax.security.auth.login.LoginException: Bad JAAS configuration: 
> unrecognized option: isInitiator
> at 
> com.ibm.security.jgss.i18n.I18NException.throwLoginException(I18NException.java:27)
> at com.ibm.security.auth.module.Krb5LoginModule.d(Krb5LoginModule.java:541)
> at com.ibm.security.auth.module.Krb5LoginModule.a(Krb5LoginModule.java:169)
> at 
> com.ibm.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> at java.lang.reflect.Method.invoke(Method.java:507)
> at javax.security.auth.login.LoginContext.invoke(LoginContext.java:788)
> at javax.security.auth.login.LoginContext.access$000(LoginContext.java:196)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
> at java.security.AccessController.doPrivileged(AccessController.java:595)
> at 
> javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:719)
> at javax.security.auth.login.LoginContext.login(LoginContext.java:593)
> at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:710)
> at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:777)
> at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)





[jira] [Commented] (HADOOP-12783) TestWebDelegationToken failure: login options not compatible with IBM JDK

2016-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138726#comment-15138726
 ] 

Hadoop QA commented on HADOOP-12783:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 31s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 12s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 32s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 55s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 35s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787027/HADOOP-12783-1.patch |
| JIRA Issue | HADOOP-12783 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 42b18e1af413 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / acac729 |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  

[jira] [Commented] (HADOOP-12784) TestKMS failure: login options not compatible with IBM JDK

2016-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138757#comment-15138757
 ] 

Hadoop QA commented on HADOOP-12784:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 57s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 32s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 36s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 15s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787029/HADOOP-12784-1.patch |
| JIRA Issue | HADOOP-12784 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a373637c7088 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / acac729 |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 

[jira] [Commented] (HADOOP-12783) TestWebDelegationToken failure: login options not compatible with IBM JDK

2016-02-09 Thread Devendra Vishwakarma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138548#comment-15138548
 ] 

Devendra Vishwakarma commented on HADOOP-12783:
---

This is similar to the one mentioned in 
[https://issues.apache.org/jira/browse/HADOOP-11273]

> TestWebDelegationToken failure: login options not compatible with IBM JDK
> -
>
> Key: HADOOP-12783
> URL: https://issues.apache.org/jira/browse/HADOOP-12783
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
> Environment: IBM JDK 1.8 + s390x architecture
>Reporter: Devendra Vishwakarma
>Assignee: Devendra Vishwakarma
>  Labels: Hadoop, IBM_JAVA
> Fix For: 2.7.1
>
>
> When running tests with the IBM JDK, the test case in 
> /hadoop-common-/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken
>  fails due to login options that are incompatible with IBM Java.
> The login options need to be updated to account for IBM Java.
> Testcases which failed are - 
> 1. 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator
> 2. 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticatorWithDoAs
> Testcases failed with the following stack:
> javax.security.auth.login.LoginException: Bad JAAS configuration: 
> unrecognized option: isInitiator
> at 
> com.ibm.security.jgss.i18n.I18NException.throwLoginException(I18NException.java:27)
> at com.ibm.security.auth.module.Krb5LoginModule.d(Krb5LoginModule.java:541)
> at com.ibm.security.auth.module.Krb5LoginModule.a(Krb5LoginModule.java:169)
> at 
> com.ibm.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> at java.lang.reflect.Method.invoke(Method.java:507)
> at javax.security.auth.login.LoginContext.invoke(LoginContext.java:788)
> at javax.security.auth.login.LoginContext.access$000(LoginContext.java:196)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
> at java.security.AccessController.doPrivileged(AccessController.java:595)
> at 
> javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:719)
> at javax.security.auth.login.LoginContext.login(LoginContext.java:593)
> at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:710)
> at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:777)
> at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)





[jira] [Created] (HADOOP-12783) TestWebDelegationToken failure: login options not compatible with IBM JDK

2016-02-09 Thread Devendra Vishwakarma (JIRA)
Devendra Vishwakarma created HADOOP-12783:
-

 Summary: TestWebDelegationToken failure: login options not 
compatible with IBM JDK
 Key: HADOOP-12783
 URL: https://issues.apache.org/jira/browse/HADOOP-12783
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.7.1
 Environment: IBM JDK 1.8 + s390x architecture
Reporter: Devendra Vishwakarma
Assignee: Devendra Vishwakarma
 Fix For: 2.7.1


When running tests with the IBM JDK, the test case in 
/hadoop-common-/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken
 fails due to login options that are incompatible with IBM Java.
The login options need to be updated to account for IBM Java.

Testcases which failed are - 
1. 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator
2. 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticatorWithDoAs

Testcases failed with the following stack:
javax.security.auth.login.LoginException: Bad JAAS configuration: unrecognized 
option: isInitiator
at 
com.ibm.security.jgss.i18n.I18NException.throwLoginException(I18NException.java:27)
at com.ibm.security.auth.module.Krb5LoginModule.d(Krb5LoginModule.java:541)
at com.ibm.security.auth.module.Krb5LoginModule.a(Krb5LoginModule.java:169)
at com.ibm.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:231)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:507)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:788)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:196)
at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
at java.security.AccessController.doPrivileged(AccessController.java:595)
at 
javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:719)
at javax.security.auth.login.LoginContext.login(LoginContext.java:593)
at 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:710)
at 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:777)
at 
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)
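One plausible shape of a fix, sketched below: build the JAAS option map per JVM vendor, since IBM's Krb5LoginModule rejects Oracle/OpenJDK-only options such as isInitiator (exactly the failure in the stack trace above). The IBM option spellings used here (useKeytab, credsType) are assumptions, not taken from the attached patch.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: vendor-aware JAAS options for Krb5LoginModule.
public class JaasOptionsSketch {

    // ibmJava is passed in so both branches are easy to exercise; real code
    // would derive it from System.getProperty("java.vendor").
    static Map<String, String> kerberosOptions(boolean ibmJava,
                                               String principal,
                                               String keytab) {
        Map<String, String> opts = new HashMap<>();
        opts.put("principal", principal);
        if (ibmJava) {
            // Assumed IBM JDK spellings; notably no "isInitiator" or
            // "useKeyTab", which IBM's module rejects as unrecognized.
            opts.put("useKeytab", keytab);
            opts.put("credsType", "both");
        } else {
            // Oracle/OpenJDK Krb5LoginModule options.
            opts.put("useKeyTab", "true");
            opts.put("keyTab", keytab);
            opts.put("storeKey", "true");
            opts.put("isInitiator", "true");
        }
        return opts;
    }

    public static void main(String[] args) {
        boolean ibm = System.getProperty("java.vendor", "").contains("IBM");
        System.out.println(kerberosOptions(ibm, "client@EXAMPLE.COM",
                "/tmp/test.keytab"));
    }
}
```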






[jira] [Commented] (HADOOP-12784) TestKMS failure: login options not compatible with IBM JDK

2016-02-09 Thread Devendra Vishwakarma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138623#comment-15138623
 ] 

Devendra Vishwakarma commented on HADOOP-12784:
---

This is similar to the one mentioned in 
https://issues.apache.org/jira/browse/HADOOP-11273

> TestKMS failure: login options not compatible with IBM JDK
> --
>
> Key: HADOOP-12784
> URL: https://issues.apache.org/jira/browse/HADOOP-12784
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
> Environment: IBM JDK 1.8 + s390x architecture
>Reporter: Devendra Vishwakarma
>Assignee: Devendra Vishwakarma
>  Labels: Hadoop, IBM_JAVA
> Fix For: 2.7.1
>
>
> When running test with IBM JDK, the testcase in 
> /hadoop-common-/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken
>  failed due to incompatible login options for IBM Java.
> The login options need to be updated considering the IBM Java.
> Testcases which failed are - 
>  1.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSRestartKerberosAuth
>  2.org.apache.hadoop.crypto.key.kms.server.TestKMS.testDelegationTokenAccess
>  3.org.apache.hadoop.crypto.key.kms.server.TestKMS.testServicePrincipalACLs
>  4.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs
>  5.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSWithZKDTSM
>  6.org.apache.hadoop.crypto.key.kms.server.TestKMS.testACLs
>  7.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSWithZKSignerAndDTSM
>  8.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSWithZKSigner
>  9.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSRestartSimpleAuth
>  10.org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpKerberos
>  11.org.apache.hadoop.crypto.key.kms.server.TestKMS.testWebHDFSProxyUserKerb
>  12.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSBlackList
>  13.org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsKerberos
>  14.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSAuthFailureRetry
> Testcases failed with the following stack:
> javax.security.auth.login.LoginException: Bad JAAS configuration: 
> unrecognized option: isInitiator
> at 
> com.ibm.security.jgss.i18n.I18NException.throwLoginException(I18NException.java:27)
> at com.ibm.security.auth.module.Krb5LoginModule.d(Krb5LoginModule.java:541)
> at com.ibm.security.auth.module.Krb5LoginModule.a(Krb5LoginModule.java:169)
> at 
> com.ibm.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> at java.lang.reflect.Method.invoke(Method.java:507)
> at javax.security.auth.login.LoginContext.invoke(LoginContext.java:788)
> at javax.security.auth.login.LoginContext.access$000(LoginContext.java:196)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
> at java.security.AccessController.doPrivileged(AccessController.java:595)
> at 
> javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:719)
> at javax.security.auth.login.LoginContext.login(LoginContext.java:593)
> at org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:262)
> at org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:75)





[jira] [Created] (HADOOP-12784) TestKMS failure: login options not compatible with IBM JDK

2016-02-09 Thread Devendra Vishwakarma (JIRA)
Devendra Vishwakarma created HADOOP-12784:
-

 Summary: TestKMS failure: login options not compatible with IBM JDK
 Key: HADOOP-12784
 URL: https://issues.apache.org/jira/browse/HADOOP-12784
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.7.1
 Environment: IBM JDK 1.8 + s390x architecture
Reporter: Devendra Vishwakarma
Assignee: Devendra Vishwakarma
 Fix For: 2.7.1


When running tests with the IBM JDK, the test cases in 
/hadoop-common-/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken
 failed due to login options that are incompatible with IBM Java.
The login options need to be updated to account for IBM Java.

Testcases which failed are - 
 1.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSRestartKerberosAuth
 2.org.apache.hadoop.crypto.key.kms.server.TestKMS.testDelegationTokenAccess
 3.org.apache.hadoop.crypto.key.kms.server.TestKMS.testServicePrincipalACLs
 4.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs
 5.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSWithZKDTSM
 6.org.apache.hadoop.crypto.key.kms.server.TestKMS.testACLs
 7.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSWithZKSignerAndDTSM
 8.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSWithZKSigner
 9.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSRestartSimpleAuth
 10.org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpKerberos
 11.org.apache.hadoop.crypto.key.kms.server.TestKMS.testWebHDFSProxyUserKerb
 12.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSBlackList
 13.org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsKerberos
 14.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSAuthFailureRetry

Testcases failed with the following stack:
javax.security.auth.login.LoginException: Bad JAAS configuration: unrecognized 
option: isInitiator
at 
com.ibm.security.jgss.i18n.I18NException.throwLoginException(I18NException.java:27)
at com.ibm.security.auth.module.Krb5LoginModule.d(Krb5LoginModule.java:541)
at com.ibm.security.auth.module.Krb5LoginModule.a(Krb5LoginModule.java:169)
at com.ibm.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:231)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:507)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:788)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:196)
at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
at java.security.AccessController.doPrivileged(AccessController.java:595)
at 
javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:719)
at javax.security.auth.login.LoginContext.login(LoginContext.java:593)
at org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:262)
at org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:75)
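The stack trace above shows the IBM Krb5LoginModule rejecting the option 
isInitiator, which only the Oracle/OpenJDK login module understands. A minimal 
sketch of building the JAAS login options conditionally on the running JDK; the 
IBM option names ("useKeytab", "credsType") are assumptions drawn from IBM's 
JGSS documentation, not taken from the patch:

```java
import java.util.HashMap;
import java.util.Map;

public class KerberosJaasOptions {

    // Assumption: like Hadoop's PlatformName.IBM_JAVA, detect the IBM JDK
    // from the java.vendor system property.
    static final boolean IBM_JAVA =
        System.getProperty("java.vendor", "").contains("IBM");

    // Build Krb5LoginModule options valid for the running JDK; the IBM
    // module rejects Oracle/OpenJDK-only options such as "isInitiator".
    static Map<String, String> krb5Options(String principal, String keytab) {
        Map<String, String> options = new HashMap<>();
        options.put("principal", principal);
        if (IBM_JAVA) {
            // IBM JDK option names (assumed from IBM's JGSS documentation)
            options.put("useKeytab", keytab);
            options.put("credsType", "both");
        } else {
            options.put("keyTab", keytab);
            options.put("useKeyTab", "true");
            options.put("storeKey", "true");
            options.put("isInitiator", "true"); // unrecognized by the IBM JDK
        }
        return options;
    }
}
```

The same branching would be applied wherever the tests construct a JAAS 
configuration for Kerberos login.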





[jira] [Updated] (HADOOP-12784) TestKMS failure: login options not compatible with IBM JDK

2016-02-09 Thread Devendra Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devendra Vishwakarma updated HADOOP-12784:
--
Attachment: HADOOP-12784-1.patch

Attached patch

> TestKMS failure: login options not compatible with IBM JDK
> --
>
> Key: HADOOP-12784
> URL: https://issues.apache.org/jira/browse/HADOOP-12784
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
> Environment: IBM JDK 1.8 + s390x architecture
>Reporter: Devendra Vishwakarma
>Assignee: Devendra Vishwakarma
>  Labels: Hadoop, IBM_JAVA
> Fix For: 2.7.1
>
> Attachments: HADOOP-12784-1.patch
>
>
> When running test with IBM JDK, the testcase in 
> /hadoop-common-/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken
>  failed due to incompatible login options for IBM Java.
> The login options need to be updated considering the IBM Java.
> Testcases which failed are - 
>  1.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSRestartKerberosAuth
>  2.org.apache.hadoop.crypto.key.kms.server.TestKMS.testDelegationTokenAccess
>  3.org.apache.hadoop.crypto.key.kms.server.TestKMS.testServicePrincipalACLs
>  4.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs
>  5.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSWithZKDTSM
>  6.org.apache.hadoop.crypto.key.kms.server.TestKMS.testACLs
>  7.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSWithZKSignerAndDTSM
>  8.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSWithZKSigner
>  9.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSRestartSimpleAuth
>  10.org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpKerberos
>  11.org.apache.hadoop.crypto.key.kms.server.TestKMS.testWebHDFSProxyUserKerb
>  12.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSBlackList
>  13.org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsKerberos
>  14.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSAuthFailureRetry
> Testcases failed with the following stack:
> javax.security.auth.login.LoginException: Bad JAAS configuration: 
> unrecognized option: isInitiator
> at 
> com.ibm.security.jgss.i18n.I18NException.throwLoginException(I18NException.java:27)
> at com.ibm.security.auth.module.Krb5LoginModule.d(Krb5LoginModule.java:541)
> at com.ibm.security.auth.module.Krb5LoginModule.a(Krb5LoginModule.java:169)
> at 
> com.ibm.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> at java.lang.reflect.Method.invoke(Method.java:507)
> at javax.security.auth.login.LoginContext.invoke(LoginContext.java:788)
> at javax.security.auth.login.LoginContext.access$000(LoginContext.java:196)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
> at java.security.AccessController.doPrivileged(AccessController.java:595)
> at 
> javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:719)
> at javax.security.auth.login.LoginContext.login(LoginContext.java:593)
> at org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:262)
> at org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:75)





[jira] [Updated] (HADOOP-12784) TestKMS failure: login options not compatible with IBM JDK

2016-02-09 Thread Devendra Vishwakarma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devendra Vishwakarma updated HADOOP-12784:
--
Status: Patch Available  (was: Open)

> TestKMS failure: login options not compatible with IBM JDK
> --
>
> Key: HADOOP-12784
> URL: https://issues.apache.org/jira/browse/HADOOP-12784
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.7.1
> Environment: IBM JDK 1.8 + s390x architecture
>Reporter: Devendra Vishwakarma
>Assignee: Devendra Vishwakarma
>  Labels: Hadoop, IBM_JAVA
> Fix For: 2.7.1
>
> Attachments: HADOOP-12784-1.patch
>
>
> When running test with IBM JDK, the testcase in 
> /hadoop-common-/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken
>  failed due to incompatible login options for IBM Java.
> The login options need to be updated considering the IBM Java.
> Testcases which failed are - 
>  1.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSRestartKerberosAuth
>  2.org.apache.hadoop.crypto.key.kms.server.TestKMS.testDelegationTokenAccess
>  3.org.apache.hadoop.crypto.key.kms.server.TestKMS.testServicePrincipalACLs
>  4.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs
>  5.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSWithZKDTSM
>  6.org.apache.hadoop.crypto.key.kms.server.TestKMS.testACLs
>  7.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSWithZKSignerAndDTSM
>  8.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSWithZKSigner
>  9.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSRestartSimpleAuth
>  10.org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpKerberos
>  11.org.apache.hadoop.crypto.key.kms.server.TestKMS.testWebHDFSProxyUserKerb
>  12.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSBlackList
>  13.org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsKerberos
>  14.org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSAuthFailureRetry
> Testcases failed with the following stack:
> javax.security.auth.login.LoginException: Bad JAAS configuration: 
> unrecognized option: isInitiator
> at 
> com.ibm.security.jgss.i18n.I18NException.throwLoginException(I18NException.java:27)
> at com.ibm.security.auth.module.Krb5LoginModule.d(Krb5LoginModule.java:541)
> at com.ibm.security.auth.module.Krb5LoginModule.a(Krb5LoginModule.java:169)
> at 
> com.ibm.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> at java.lang.reflect.Method.invoke(Method.java:507)
> at javax.security.auth.login.LoginContext.invoke(LoginContext.java:788)
> at javax.security.auth.login.LoginContext.access$000(LoginContext.java:196)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
> at java.security.AccessController.doPrivileged(AccessController.java:595)
> at 
> javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:719)
> at javax.security.auth.login.LoginContext.login(LoginContext.java:593)
> at org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:262)
> at org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:75)





[jira] [Commented] (HADOOP-12723) S3A: Add ability to plug in any AWSCredentialsProvider

2016-02-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15138790#comment-15138790
 ] 

Steve Loughran commented on HADOOP-12723:
-

Busy; the usual. If it makes you feel better, I suffer from a lack of review of 
my own patches (YARN-679, anyone?); if it's not considered critical, not enough 
people put the time in. I do feel your pain.

I was actually running through the fs/s3 stuff to see if there was anything 
easy to pull in; I missed this one as it wasn't linked to from HADOOP-11694; 
I'll fix that now.

I looked at HADOOP-12537, the related one, and saw that it added a new JAR to 
the list. As soon as I saw that, I felt it wasn't something for a last-minute 
patch; it would need more experimentation (specifically, what would happen if 
the new JAR, listed as test-only, wasn't there?).

> S3A: Add ability to plug in any AWSCredentialsProvider
> --
>
> Key: HADOOP-12723
> URL: https://issues.apache.org/jira/browse/HADOOP-12723
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Steven Wong
>Assignee: Steven Wong
> Attachments: HADOOP-12723.0.patch
>
>
> Although S3A currently has built-in support for 
> {{org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider}}, 
> {{com.amazonaws.auth.InstanceProfileCredentialsProvider}}, and 
> {{org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider}}, it does not 
> support any other credentials provider that implements the 
> {{com.amazonaws.auth.AWSCredentialsProvider}} interface. Supporting the 
> ability to plug in any {{com.amazonaws.auth.AWSCredentialsProvider}} instance 
> will expand the options for S3 credentials, such as:
> * temporary credentials from STS, e.g. via 
> {{com.amazonaws.auth.STSSessionCredentialsProvider}}
> * IAM role-based credentials, e.g. via 
> {{com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider}}
> * a custom credentials provider that satisfies one's own needs, e.g. 
> bucket-specific credentials, user-specific credentials, etc.
> To support this, we can add a configuration for the fully qualified class 
> name of a credentials provider, to be loaded by {{S3AFileSystem.initialize}} 
> and added to its credentials provider chain.
> The configured credentials provider should implement 
> {{com.amazonaws.auth.AWSCredentialsProvider}} and have a constructor that 
> accepts {{(URI uri, Configuration conf)}}.
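The reflective loading proposed above can be sketched as follows. The stand-in 
types and the configuration key name are assumptions for illustration only; the 
real interfaces are {{com.amazonaws.auth.AWSCredentialsProvider}} and 
{{org.apache.hadoop.conf.Configuration}}:

```java
import java.lang.reflect.Constructor;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Minimal stand-ins so the sketch compiles on its own; the real types are
// com.amazonaws.auth.AWSCredentialsProvider and org.apache.hadoop.conf.Configuration.
interface AWSCredentialsProvider {
    void refresh();
}

class Configuration {
    private final Map<String, String> props = new HashMap<>();
    public void set(String key, String value) { props.put(key, value); }
    public String get(String key) { return props.get(key); }
}

// Example custom provider exposing the proposed (URI, Configuration) constructor.
class BucketCredentialsProvider implements AWSCredentialsProvider {
    public BucketCredentialsProvider(URI uri, Configuration conf) { }
    @Override public void refresh() { }
}

public class S3ACredentialsSketch {
    // Hypothetical key name; the JIRA only proposes "a configuration for the
    // fully qualified class name of a credentials provider".
    static final String PROVIDER_KEY = "fs.s3a.aws.credentials.provider";

    // Mirrors what S3AFileSystem.initialize could do: load the configured
    // class reflectively and invoke its (URI, Configuration) constructor.
    static AWSCredentialsProvider loadProvider(URI uri, Configuration conf)
            throws Exception {
        Class<?> cls = Class.forName(conf.get(PROVIDER_KEY));
        Constructor<?> ctor = cls.getConstructor(URI.class, Configuration.class);
        return (AWSCredentialsProvider) ctor.newInstance(uri, conf);
    }
}
```

The loaded instance would then be appended to the filesystem's credentials 
provider chain.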





[jira] [Commented] (HADOOP-12752) Improve diagnostics/use of envvar/sysprop credential propagation

2016-02-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15138802#comment-15138802
 ] 

Steve Loughran commented on HADOOP-12752:
-

Thanks for the review and commit. Together we can fix Kerberos, or at least 
make it possible to work out why it isn't working.

> Improve diagnostics/use of envvar/sysprop credential propagation
> 
>
> Key: HADOOP-12752
> URL: https://issues.apache.org/jira/browse/HADOOP-12752
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12752-001.patch, HADOOP-12752-002.patch
>
>
> * document the system property {{hadoop.token.files}}. 
> * document the env var {{HADOOP_TOKEN_FILE_LOCATION}}.
> * When UGI inits tokens off that or the env var , log this fact
> * when trying to load a file referenced in the env var (a) trim it and (b) 
> check for it existing, failing with a message referring to the ENV var as 
> well as the file.
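The trim-and-check behavior proposed in the last bullet might look like this; 
the method name and message wording are illustrative, not the actual 
UserGroupInformation code:

```java
import java.io.File;
import java.io.FileNotFoundException;

public class TokenFileCheck {

    // Sketch of the proposed UGI behavior: trim the value taken from
    // HADOOP_TOKEN_FILE_LOCATION and fail with a message that names the
    // environment variable as well as the file.
    static File resolveTokenFile(String envValue) throws FileNotFoundException {
        String path = envValue.trim();               // (a) trim it
        File tokenFile = new File(path);
        if (!tokenFile.exists()) {                   // (b) check it exists
            throw new FileNotFoundException("Token file referenced in "
                + "HADOOP_TOKEN_FILE_LOCATION does not exist: " + path);
        }
        return tokenFile;
    }
}
```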





[jira] [Created] (HADOOP-12785) [Handling exceptions] LdapGroupsMapping.getGroups() do not provide information about root cause

2016-02-09 Thread Mukhadin Buzdov (JIRA)
Mukhadin Buzdov created HADOOP-12785:


 Summary: [Handling exceptions] LdapGroupsMapping.getGroups() do 
not provide information about root cause
 Key: HADOOP-12785
 URL: https://issues.apache.org/jira/browse/HADOOP-12785
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.7.1
 Environment: _Operating system_: CentOS Linux 7 
{color:gray}(7.1.1503){color}
_Platform_: HDP 2.3.4.0, Ambari 2.1.2
Reporter: Mukhadin Buzdov
Priority: Minor


_CommunicationException_ and _NamingException_ are not logged in 
_LdapGroupsMapping.getGroups()_.

{code:title=LdapGroupsMapping.java|borderStyle=solid}
  public synchronized List<String> getGroups(String user) throws IOException {
    List<String> emptyResults = new ArrayList<String>();
// ...
try {
  return doGetGroups(user);
} catch (CommunicationException e) {
  LOG.warn("Connection is closed, will try to reconnect");
} catch (NamingException e) {
  LOG.warn("Exception trying to get groups for user " + user + ": " + 
e.getMessage());
  return emptyResults;
}
//...
return emptyResults;
  }
{code}

{color:red}It is not possible to understand _LDAP_ level failures.{color}
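A hedged sketch of the same catch blocks with the root cause preserved: passing 
the exception as the final logger argument makes the LDAP-level failure 
visible. The JDK's java.util.logging is used here as a self-contained stand-in 
for the commons-logging Log used in Hadoop, and doGetGroups is stubbed to fail:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.naming.CommunicationException;
import javax.naming.NamingException;

public class LdapGroupsSketch {
    private static final Logger LOG = Logger.getLogger("LdapGroupsSketch");

    // Stand-in for the real LDAP query; here it always fails so the
    // logging path is exercised.
    static List<String> doGetGroups(String user) throws NamingException {
        throw new CommunicationException("connection reset by LDAP server");
    }

    public static synchronized List<String> getGroups(String user) {
        List<String> emptyResults = new ArrayList<>();
        try {
            return doGetGroups(user);
        } catch (CommunicationException e) {
            // The Throwable argument makes the stack trace visible in the log.
            LOG.log(Level.WARNING, "Connection is closed, will try to reconnect", e);
        } catch (NamingException e) {
            LOG.log(Level.WARNING, "Exception trying to get groups for user " + user, e);
        }
        return emptyResults;
    }
}
```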





[jira] [Updated] (HADOOP-12783) TestWebDelegationToken failure: login options not compatible with IBM JDK

2016-02-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12783:

Target Version/s:   (was: 2.7.1)
   Fix Version/s: (was: 2.7.1)
 Component/s: security

> TestWebDelegationToken failure: login options not compatible with IBM JDK
> -
>
> Key: HADOOP-12783
> URL: https://issues.apache.org/jira/browse/HADOOP-12783
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Affects Versions: 2.7.1
> Environment: IBM JDK 1.8 + s390x architecture
>Reporter: Devendra Vishwakarma
>Assignee: Devendra Vishwakarma
>  Labels: Hadoop, IBM_JAVA
> Attachments: HADOOP-12783-1.patch
>
>
> When running test with IBM JDK, the testcase in 
> /hadoop-common-/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken
>  failed due to incompatible login options for IBM Java.
> The login options need to be updated considering the IBM Java.
> Testcases which failed are - 
> 1. 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator
> 2. 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticatorWithDoAs
> Testcases failed with the following stack:
> javax.security.auth.login.LoginException: Bad JAAS configuration: 
> unrecognized option: isInitiator
> at 
> com.ibm.security.jgss.i18n.I18NException.throwLoginException(I18NException.java:27)
> at com.ibm.security.auth.module.Krb5LoginModule.d(Krb5LoginModule.java:541)
> at com.ibm.security.auth.module.Krb5LoginModule.a(Krb5LoginModule.java:169)
> at 
> com.ibm.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:231)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> at java.lang.reflect.Method.invoke(Method.java:507)
> at javax.security.auth.login.LoginContext.invoke(LoginContext.java:788)
> at javax.security.auth.login.LoginContext.access$000(LoginContext.java:196)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
> at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
> at java.security.AccessController.doPrivileged(AccessController.java:595)
> at 
> javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:719)
> at javax.security.auth.login.LoginContext.login(LoginContext.java:593)
> at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.doAsKerberosUser(TestWebDelegationToken.java:710)
> at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:777)
> at 
> org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken.testKerberosDelegationTokenAuthenticator(TestWebDelegationToken.java:729)





[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-09 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15138804#comment-15138804
 ] 

Vishwajeet Dusane commented on HADOOP-12666:


Thanks [~eddyxu] for the comments.

{quote}
* You mentioned in the above comments. But PrivateAzureDataLakeFileSystem does 
not call it within synchronized calls (e.g., 
PrivateAzureDataLakeFileSystem#create). Although syncMap is a synchronizedMap, 
putFileStatus has multiple operations on syncMap, which can not guarantee 
atomicity.

* It might be a better idea to provide atomicity in 
PrivateAzureDataLakeFileSystem. A couple of places have multiple cache calls 
within the same function (e.g., rename()).
{quote}

putFileStatus has only one operation on syncMap. Could you please elaborate on 
the scenario that could be affected? Just to be certain, you are reviewing 
HADOOP-12666-005.patch, right?

{quote}
* It might be a good idea to rename FileStatusCacheManager#getFileStatus, 
putFileStatus, removeFileStatus to get/put/remove, because the class name 
already clearly indicates the context.
{quote}

Agreed. Renamed to get/put/remove.

{quote}
* FileStatusCacheObject can only store an absolute expiration time. And its 
methods can be package-level methods.
{quote}

You are right; that is an alternate approach to handling cache expiration. I 
think we can live with the current implementation using a time-to-live check. 
Please let me know if you see any issue with that approach.

{quote}
* I saw a few places, e.g., PrivateAzureDataLakeFileSystem#rename/delete, that 
clear the cache if the param is a directory. Could you justify the reason 
behind this? Would it cause noticeable performance degradation? Or as an 
alternative, using LinkedList + TreeMap for FileStatusCacheManager?
{quote}

Yes, to avoid performance and correctness issues when a directory is renamed or 
deleted. In such cases the cache holds stale entries that need to be removed, 
so that a delete/rename followed by a getFileStatus call (for a file or folder 
under that directory) returns correct results. At the point of folder deletion 
the cache might hold many FileStatus instances from within the directory; it is 
more efficient to clear the cache and let it rebuild than to iterate over it.

The current cache is a basic implementation to hold FileStatus instances to 
start with; we will continue to enhance it in upcoming changes.
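The time-to-live check and the clear-on-directory-change policy discussed above 
can be sketched as follows; the class and method names are illustrative, not 
the actual HADOOP-12666 implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a time-to-live FileStatus cache along the lines described for
// FileStatusCacheManager (names and the TTL policy are assumptions).
public class TtlCache<V> {
    private static class Entry<V> {
        final V value;
        final long insertedAtMillis;
        Entry(V value, long insertedAtMillis) {
            this.value = value;
            this.insertedAtMillis = insertedAtMillis;
        }
    }

    private final Map<String, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public void put(String path, V status) {
        map.put(path, new Entry<>(status, System.currentTimeMillis()));
    }

    // Returns null when absent or older than the TTL; expired entries are dropped.
    public V get(String path) {
        Entry<V> e = map.get(path);
        if (e == null) return null;
        if (System.currentTimeMillis() - e.insertedAtMillis > ttlMillis) {
            map.remove(path);
            return null;
        }
        return e.value;
    }

    // Rename/delete of a directory invalidates everything, as described above.
    public void clear() { map.clear(); }
}
```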

{quote}
* One general question, is this FileStatusCacheManager in HdfsClient? If it is 
the case, how do you make them consistent across clients on multiple nodes?
{quote}

FileStatusCacheManager need not be consistent across clients; it is built from 
the ListStatus and GetFileStatus calls made by the respective clients.

{quote}
* Can we use Precondtions here? It will be more descriptive.
{quote}

Are you referring to com.google.common.base.Preconditions? 


> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/





[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-09 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15138968#comment-15138968
 ] 

Vishwajeet Dusane commented on HADOOP-12666:


[~fabbri] - Corrected alias :) 

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/





[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-09 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15138915#comment-15138915
 ] 

Vishwajeet Dusane commented on HADOOP-12666:


Thank you [~cnauroth] for the comments. 

1. Yes, I will upload the respective document.
2. We do have extended contract test cases that integrate with the back-end 
service; however, those tests are not pushed as part of this check-in. I will 
create a separate JIRA for the live-mode test cases.
3. We do run the contract test cases.

For the code:
1. I have refactored the namespaces per the comments from [~fabbri] and 
[~cnauroth], as follows:

||Namespace||Purpose||
|org.apache.hadoop.fs.adl|Public interface exposed for Hadoop applications to 
integrate with. For long-term support, this namespace stays even if we 
refactor away the dependency on org.apache.hadoop.hdfs.web|
|org.apache.hadoop.hdfs.web|Extension of WebHdfsFileSystem to override 
protected functionality, e.g. ConnectionFactory access, overriding the 
redirection operation, etc.|

2. {panel}PrivateAzureDataLakeFileSystem{panel} is exposed for 
{panel}AdlFileSystem{panel} to inherit. I will add documentation for it.
3. Intentionally no lock: even if multiple instances are created, the last 
instance is used everywhere to refresh the token.
4. Yes, I got a similar comment from [~chris.douglas] as well. The reason for 
hiding logging behind a wrapper was to switch quickly between 
{panel}Log{panel} and {panel}System.out.println{panel} during debugging; 
changing the code is quicker than changing a configuration file. We will 
migrate to SLF4J, but not as part of this patch. Is that fine?
5. Explained above.
6. Agreed, and the code change has been incorporated.
7. The {panel}FileStatus{panel} cache management feature is configurable; if 
some scenario breaks for a customer, they can turn off the local cache. The 
cache scope is within the process. I will document the behavior and tuning 
flags such as the cache duration. We do see a great performance improvement, 
but we do not wish to compromise on correctness.
8. {panel}ADLConfKeys#LOG_VERSION{panel} captures the code instrumentation 
version; this information is used only during debugging sessions.
9. Excellent point. The bug was that the mock server hung when 
{panel}TestAUploadBenchmark/TestADownloadBenchmark{panel} were not executed 
before other tests. I will investigate the root cause, since, as you pointed 
out, execution order is not guaranteed.



> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/





[jira] [Commented] (HADOOP-12785) [Handling exceptions] LdapGroupsMapping.getGroups() do not provide information about root cause

2016-02-09 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139133#comment-15139133
 ] 

Wei-Chiu Chuang commented on HADOOP-12785:
--

Hi [~mukhadin], thanks for reporting this issue.
Just to be clear, what is the expected error message here? The NamingException 
message is logged, although the CommunicationException message is not. I think 
we could also log the query itself and the returned result.

> [Handling exceptions] LdapGroupsMapping.getGroups() do not provide 
> information about root cause
> ---
>
> Key: HADOOP-12785
> URL: https://issues.apache.org/jira/browse/HADOOP-12785
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
> Environment: _Operating system_: CentOS Linux 7 
> {color:gray}(7.1.1503){color}
> _Platform_: HDP 2.3.4.0, Ambari 2.1.2
>Reporter: Mukhadin Buzdov
>Priority: Minor
>  Labels: easyfix
>
> _CommunicationException_ and _NamingException_ are not logged in 
> _LdapGroupsMapping.getGroups()_.
> {code:title=LdapGroupsMapping.java|borderStyle=solid}
>   public synchronized List<String> getGroups(String user) throws IOException {
>     List<String> emptyResults = new ArrayList<String>();
> // ...
> try {
>   return doGetGroups(user);
> } catch (CommunicationException e) {
>   LOG.warn("Connection is closed, will try to reconnect");
> } catch (NamingException e) {
>   LOG.warn("Exception trying to get groups for user " + user + ": " + 
> e.getMessage());
>   return emptyResults;
> }
> //...
> return emptyResults;
>   }
> {code}
> {color:red}It is not possible to understand _LDAP_ level failures.{color}





[jira] [Commented] (HADOOP-12785) [Handling exceptions] LdapGroupsMapping.getGroups() do not provide information about root cause

2016-02-09 Thread Mukhadin Buzdov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139195#comment-15139195
 ] 

Mukhadin Buzdov commented on HADOOP-12785:
--

[~jojochuang], thank you for paying attention!

I'm fine with the messages; the suggestion was to print the stack trace of the 
exceptions, something like:
{code:title=WARN level|borderStyle=solid}
try {
// ...
} catch (CommunicationException e) {
  LOG.warn("Connection is closed, will try to reconnect", e);
} catch (NamingException e) {
  LOG.warn("Exception trying to get groups for user " + user, e);
  return emptyResults;
}
{code}

Or, if that would generate too much log output, provide the stack trace at 
DEBUG level only:
{code:title=DEBUG level|borderStyle=solid}
try {
// ...
} catch (CommunicationException e) {
  LOG.warn("Connection is closed, will try to reconnect");
  LOG.debug("Root cause is", e);
} catch (NamingException e) {
  LOG.warn("Exception trying to get groups for user " + user + ": " + 
e.getMessage());
  LOG.debug("Root cause is", e);
  return emptyResults;
}
{code}



I had to write something similar to _LdapGroupsMapping_ today to understand what 
was wrong with my environment and configuration properties.
The exception message alone may not be informative enough: today I had a problem 
with a self-signed CA, and only the third-level nested exception told me this.
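A chained failure like the self-signed-CA case can be diagnosed by walking the cause chain down to the deepest exception. A minimal sketch of that idea (the `rootCause` helper is illustrative, not an existing Hadoop utility):

```java
import java.io.IOException;
import java.security.cert.CertificateException;

public final class ExceptionUtil {
  private ExceptionUtil() {}

  // Walk getCause() until the deepest non-null cause is reached.
  public static Throwable rootCause(Throwable t) {
    Throwable cur = t;
    while (cur.getCause() != null && cur.getCause() != cur) {
      cur = cur.getCause();
    }
    return cur;
  }

  public static void main(String[] args) {
    // Simulate a three-level chain like the self-signed-CA case:
    // RuntimeException -> IOException -> CertificateException.
    Throwable root = new CertificateException("self-signed certificate in chain");
    Throwable mid = new IOException("LDAP bind failed", root);
    Throwable top = new RuntimeException("group lookup failed", mid);
    Throwable found = rootCause(top);
    System.out.println(found.getClass().getSimpleName() + ": " + found.getMessage());
    // Prints: CertificateException: self-signed certificate in chain
  }
}
```

Logging the full exception (e.g. `LOG.warn(msg, e)`) prints this whole chain automatically, which is why passing the Throwable to the logger is sufficient.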

> [Handling exceptions] LdapGroupsMapping.getGroups() do not provide 
> information about root cause
> ---
>
> Key: HADOOP-12785
> URL: https://issues.apache.org/jira/browse/HADOOP-12785
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
> Environment: _Operating system_: CentOS Linux 7 
> {color:gray}(7.1.1503){color}
> _Platform_: HDP 2.3.4.0, Ambari 2.1.2
>Reporter: Mukhadin Buzdov
>Priority: Minor
>  Labels: easyfix
>
> _CommunicationException_ and _NamingException_ are not logged in 
> _LdapGroupsMapping.getGroups()_.
> {code:title=LdapGroupsMapping.java|borderStyle=solid}
>   public synchronized List<String> getGroups(String user) throws IOException {
>     List<String> emptyResults = new ArrayList<String>();
> // ...
> try {
>   return doGetGroups(user);
> } catch (CommunicationException e) {
>   LOG.warn("Connection is closed, will try to reconnect");
> } catch (NamingException e) {
>   LOG.warn("Exception trying to get groups for user " + user + ": " + 
> e.getMessage());
>   return emptyResults;
> }
> //...
> return emptyResults;
>   }
> {code}
> {color:red}It is not possible to understand _LDAP_ level failures.{color}





[jira] [Commented] (HADOOP-12788) OpensslAesCtrCryptoCodec should log which random number generator is used.

2016-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139933#comment-15139933
 ] 

Hadoop QA commented on HADOOP-12788:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 8s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 57s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 31s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 31s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.net.TestClusterTopology |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787148/HADOOP-12788.002.patch
 |
| JIRA Issue | HADOOP-12788 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8a8f023a875c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build 

[jira] [Updated] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections

2016-02-09 Thread Min Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Min Shen updated HADOOP-12765:
--
Affects Version/s: 2.7.2
   2.6.3

> HttpServer2 should switch to using the non-blocking SslSelectChannelConnector 
> to prevent performance degradation when handling SSL connections
> --
>
> Key: HADOOP-12765
> URL: https://issues.apache.org/jira/browse/HADOOP-12765
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2, 2.6.3
>Reporter: Min Shen
>Assignee: Min Shen
> Attachments: HADOOP-12765.001.patch, blocking_1.png, blocking_2.png, 
> unblocking.png
>
>
> The current implementation uses the blocking SslSocketConnector which takes 
> the default maxIdleTime as 200 seconds. We noticed in our cluster that when 
> users use a custom client that accesses the WebHDFS REST APIs over https, 
> it could block all 250 handler threads in the NN jetty server, causing severe 
> performance degradation for accessing WebHDFS and the NN web UI. The attached 
> screenshots (blocking_1.png and blocking_2.png) illustrate that when using 
> SslSocketConnector, the jetty handler threads are not released until the 
> 200-second maxIdleTime has passed. With a sufficient number of SSL connections, 
> this issue could render the NN HttpServer entirely unresponsive.
> We propose to use the non-blocking SslSelectChannelConnector as a fix. We 
> have deployed the attached patch within our cluster, and have seen 
> significant improvement. The attached screenshot (unblocking.png) further 
> illustrates the behavior of NN jetty server after switching to using 
> SslSelectChannelConnector.
> The patch further disables SSLv3 protocol on server side to preserve the 
> spirit of HADOOP-11260.





[jira] [Updated] (HADOOP-12782) Faster LDAP group name resolution with ActiveDirectory

2016-02-09 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12782:
-
Attachment: HADOOP-12782.002.patch

Rev02: fixed findbugs, checkstyle and javac warnings.

> Faster LDAP group name resolution with ActiveDirectory
> --
>
> Key: HADOOP-12782
> URL: https://issues.apache.org/jira/browse/HADOOP-12782
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12782.001.patch, HADOOP-12782.002.patch
>
>
> The typical LDAP group name resolution works well under typical scenarios. 
> However, we have seen cases where a user is mapped to many groups (in an 
> extreme case, a user is mapped to more than 100 groups). The way it's being 
> implemented now makes this case super slow resolving groups from 
> ActiveDirectory.
> The current LDAP group resolution implementation sends two queries to a 
> ActiveDirectory server. The first query returns a user object, which contains 
> DN (distinguished name). The second query looks for groups where the user DN 
> is a member. If a user is mapped to many groups, the second query returns all 
> group objects associated with the user, and is thus very slow.
> After studying a user object in ActiveDirectory, I found that a user object 
> actually contains a "memberOf" field, which lists the DNs of all group objects 
> the user belongs to. Assuming that an organization has no recursive group 
> relation (that is, no case where a user A is a member of group G1, and group 
> G1 is a member of group G2), we can use this property to avoid the second 
> query, which can potentially run very slowly.
> I propose that we add a configuration option to enable this feature only for 
> users who want to reduce group resolution time and who do not have recursive 
> groups, so that existing behavior will not be broken.
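The single-query approach described above hinges on reading the memberOf attribute and extracting a group name from each returned DN. A minimal, illustrative sketch of that DN-parsing step (not the actual Hadoop implementation; the class and method names are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

public class MemberOfParser {
  // Extract the leading CN value from a group DN such as
  // "CN=hadoop-admins,OU=Groups,DC=example,DC=com".
  static String groupNameFromDn(String dn) {
    String firstRdn = dn.split(",", 2)[0];       // "CN=hadoop-admins"
    int eq = firstRdn.indexOf('=');
    return eq >= 0 ? firstRdn.substring(eq + 1) : firstRdn;
  }

  // Map each memberOf DN to a group name, replacing the second LDAP query.
  static List<String> groupsFromMemberOf(List<String> memberOf) {
    List<String> groups = new ArrayList<>();
    for (String dn : memberOf) {
      groups.add(groupNameFromDn(dn));
    }
    return groups;
  }

  public static void main(String[] args) {
    List<String> memberOf = List.of(
        "CN=hadoop-admins,OU=Groups,DC=example,DC=com",
        "CN=hdfs-users,OU=Groups,DC=example,DC=com");
    System.out.println(groupsFromMemberOf(memberOf));
    // Prints: [hadoop-admins, hdfs-users]
  }
}
```

Note this parsing only recovers direct memberships, which is exactly why the proposal restricts the feature to deployments without recursive (nested) groups.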





[jira] [Updated] (HADOOP-12788) OpensslAesCtrCryptoCodec should log what random number generator is used.

2016-02-09 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12788:
-
Attachment: HADOOP-12788.001.patch

Rev01: log the class name of the random number generator in use in a debug message.

> OpensslAesCtrCryptoCodec should log what random number generator is used.
> -
>
> Key: HADOOP-12788
> URL: https://issues.apache.org/jira/browse/HADOOP-12788
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-12788.001.patch
>
>
> {{OpensslAesCtrCryptoCodec}} uses random number generator, for example, 
> {{OsSecureRandom}}, {{OpensslSecureRandom}} or {{SecureRandom}} but it's not 
> clear which one would be loaded at runtime.
> It would help debugging if we can print a debug message that states which one 
> is loaded.





[jira] [Updated] (HADOOP-12788) OpensslAesCtrCryptoCodec should log what random number generator is used.

2016-02-09 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12788:
-
Status: Patch Available  (was: Open)

> OpensslAesCtrCryptoCodec should log what random number generator is used.
> -
>
> Key: HADOOP-12788
> URL: https://issues.apache.org/jira/browse/HADOOP-12788
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-12788.001.patch
>
>
> {{OpensslAesCtrCryptoCodec}} uses random number generator, for example, 
> {{OsSecureRandom}}, {{OpensslSecureRandom}} or {{SecureRandom}} but it's not 
> clear which one would be loaded at runtime.
> It would help debugging if we can print a debug message that states which one 
> is loaded.





[jira] [Updated] (HADOOP-12788) OpensslAesCtrCryptoCodec should log which random number generator is used.

2016-02-09 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12788:
-
Summary: OpensslAesCtrCryptoCodec should log which random number generator 
is used.  (was: OpensslAesCtrCryptoCodec should log what random number 
generator is used.)

> OpensslAesCtrCryptoCodec should log which random number generator is used.
> --
>
> Key: HADOOP-12788
> URL: https://issues.apache.org/jira/browse/HADOOP-12788
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-12788.001.patch
>
>
> {{OpensslAesCtrCryptoCodec}} uses random number generator, for example, 
> {{OsSecureRandom}}, {{OpensslSecureRandom}} or {{SecureRandom}} but it's not 
> clear which one would be loaded at runtime.
> It would help debugging if we can print a debug message that states which one 
> is loaded.





[jira] [Updated] (HADOOP-12788) OpensslAesCtrCryptoCodec should log which random number generator is used.

2016-02-09 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12788:
-
Attachment: HADOOP-12788.002.patch

Rev02: small change: only log the debug message after the random number 
generator class has been successfully instantiated, so that if instantiation 
fails, only one related message is logged, to avoid confusion.
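The ordering described above can be sketched as follows. This is a simplified stand-in for the codec's RNG selection, not the actual OpensslAesCtrCryptoCodec code; the class name, logger, and fallback behavior are illustrative assumptions:

```java
import java.security.SecureRandom;
import java.util.Random;
import java.util.logging.Logger;

public class RngSelection {
  private static final Logger LOG = Logger.getLogger(RngSelection.class.getName());

  // Instantiate the configured Random implementation reflectively. The
  // "using X" debug message is emitted only AFTER instantiation succeeds,
  // so a failure produces a single warning rather than a misleading pair
  // of messages.
  static Random createRandom(String className) {
    try {
      Random random = (Random) Class.forName(className)
          .getDeclaredConstructor().newInstance();
      LOG.fine("Using " + random.getClass().getName()
          + " as the random number generator.");
      return random;
    } catch (ReflectiveOperationException | ClassCastException e) {
      LOG.warning("Failed to instantiate " + className
          + ", falling back to SecureRandom: " + e);
      return new SecureRandom();
    }
  }

  public static void main(String[] args) {
    Random r = createRandom("java.security.SecureRandom");
    System.out.println(r.getClass().getName());
    // Prints: java.security.SecureRandom
  }
}
```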

> OpensslAesCtrCryptoCodec should log which random number generator is used.
> --
>
> Key: HADOOP-12788
> URL: https://issues.apache.org/jira/browse/HADOOP-12788
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-12788.001.patch, HADOOP-12788.002.patch
>
>
> {{OpensslAesCtrCryptoCodec}} uses random number generator, for example, 
> {{OsSecureRandom}}, {{OpensslSecureRandom}} or {{SecureRandom}} but it's not 
> clear which one would be loaded at runtime.
> It would help debugging if we can print a debug message that states which one 
> is loaded.





[jira] [Updated] (HADOOP-12746) ReconfigurableBase should update the cached configuration

2016-02-09 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12746:
---
Attachment: HADOOP-12476.02.patch

Thanks for the review [~xiaobingo]. The v02 patch fixes the issue you pointed 
out. It also fixes a unit test failure and a couple of checkstyle issues. The 
checkstyle indentation and brace warnings are bogus.

> ReconfigurableBase should update the cached configuration
> -
>
> Key: HADOOP-12746
> URL: https://issues.apache.org/jira/browse/HADOOP-12746
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-12476.02.patch, HADOOP-12746.01.patch
>
>
> {{ReconfigurableBase}} does not always update the cached configuration after 
> a property is reconfigured.
> The older {{#reconfigureProperty}} does so; however, {{ReconfigurationThread}} 
> does not.
> See discussion on HDFS-7035 for more background.
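The fix amounts to making every reconfiguration path write the new value back into the cached configuration. A minimal sketch of that invariant, under the assumption of a simplified API (this is not the actual ReconfigurableBase class):

```java
import java.util.HashMap;
import java.util.Map;

public class Reconfigurable {
  // Cached configuration that readers consult.
  private final Map<String, String> conf = new HashMap<>();

  // Subclasses apply the change to their internal state here and return
  // the effective value (possibly normalized), or null to unset it.
  protected String reconfigurePropertyImpl(String property, String newVal) {
    return newVal;
  }

  // Both the synchronous path and the background reconfiguration thread
  // must funnel through a method that also updates the cached conf, so
  // the cache never goes stale after a property is reconfigured.
  public final synchronized void reconfigureProperty(String property,
                                                     String newVal) {
    String effective = reconfigurePropertyImpl(property, newVal);
    if (effective == null) {
      conf.remove(property);         // property was unset
    } else {
      conf.put(property, effective); // keep the cache in sync
    }
  }

  public synchronized String get(String property) {
    return conf.get(property);
  }

  public static void main(String[] args) {
    Reconfigurable r = new Reconfigurable();
    r.reconfigureProperty("dfs.datanode.data.dir", "/data1,/data2");
    System.out.println(r.get("dfs.datanode.data.dir"));
    // Prints: /data1,/data2
  }
}
```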





[jira] [Work stopped] (HADOOP-12788) OpensslAesCtrCryptoCodec should log what random number generator is used.

2016-02-09 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-12788 stopped by Wei-Chiu Chuang.

> OpensslAesCtrCryptoCodec should log what random number generator is used.
> -
>
> Key: HADOOP-12788
> URL: https://issues.apache.org/jira/browse/HADOOP-12788
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-12788.001.patch
>
>
> {{OpensslAesCtrCryptoCodec}} uses random number generator, for example, 
> {{OsSecureRandom}}, {{OpensslSecureRandom}} or {{SecureRandom}} but it's not 
> clear which one would be loaded at runtime.
> It would help debugging if we can print a debug message that states which one 
> is loaded.





[jira] [Work started] (HADOOP-12788) OpensslAesCtrCryptoCodec should log what random number generator is used.

2016-02-09 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-12788 started by Wei-Chiu Chuang.

> OpensslAesCtrCryptoCodec should log what random number generator is used.
> -
>
> Key: HADOOP-12788
> URL: https://issues.apache.org/jira/browse/HADOOP-12788
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-12788.001.patch
>
>
> {{OpensslAesCtrCryptoCodec}} uses random number generator, for example, 
> {{OsSecureRandom}}, {{OpensslSecureRandom}} or {{SecureRandom}} but it's not 
> clear which one would be loaded at runtime.
> It would help debugging if we can print a debug message that states which one 
> is loaded.





[jira] [Updated] (HADOOP-12353) S3 Native filesystem does not retry all connection failures

2016-02-09 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-12353:
--
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Very well then. I'm closing this as Won't Fix. If someone really wants to fix 
it, please feel free to re-open this.
This sets a bad precedent, though: we are shipping this code in releases, we 
admit there's a problem, we get patches from a new contributor (kudos, 
Mariusz), and we just drop the ball on accepting them. 

> S3 Native filesystem does not retry all connection failures
> ---
>
> Key: HADOOP-12353
> URL: https://issues.apache.org/jira/browse/HADOOP-12353
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Mariusz Strzelecki
>Assignee: Mariusz Strzelecki
>Priority: Minor
> Attachments: HADOOP-12353.001.patch, HADOOP-12353.002.patch
>
>
> The current implementation of NativeS3FileSystem.java uses a RetryProxy that 
> retries exceptions which may occur during network communication with the S3 
> API, but only if those exceptions are exact instances of IOException:
> https://github.com/apache/hadoop/blob/master/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java#L349
> Our tests show that HttpClient throws IOException subclasses, which are not 
> handled by the proxy.
> Additionally, not all methods that call the S3 API are listed for retry 
> handling, e.g. storeEmptyFile and retrieveMetadata are missing.
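The exact-instance problem described above comes from keying a retry-policy map on the exception's runtime class. A minimal illustration of the difference (simplified; this is not the actual RetryProxy code):

```java
import java.io.IOException;
import java.net.SocketTimeoutException;
import java.util.HashMap;
import java.util.Map;

public class RetryMatchDemo {
  // Policy map keyed by exact exception class, mirroring the retry setup.
  static final Map<Class<? extends Exception>, String> POLICY = new HashMap<>();
  static {
    POLICY.put(IOException.class, "RETRY");
  }

  // Exact-class lookup: misses IOException subclasses thrown by HttpClient.
  static boolean exactClassRetries(Exception e) {
    return POLICY.get(e.getClass()) != null;
  }

  // An instanceof check matches subclasses as well.
  static boolean instanceofRetries(Exception e) {
    return e instanceof IOException;
  }

  public static void main(String[] args) {
    Exception timeout = new SocketTimeoutException("read timed out");
    System.out.println(exactClassRetries(timeout));  // false: subclass missed
    System.out.println(instanceofRetries(timeout));  // true: would be retried
  }
}
```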





[jira] [Commented] (HADOOP-12788) OpensslAesCtrCryptoCodec should log which random number generator is used.

2016-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139889#comment-15139889
 ] 

Hadoop QA commented on HADOOP-12788:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 51s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 53s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 46s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787148/HADOOP-12788.002.patch
 |
| JIRA Issue | HADOOP-12788 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b34a5eb5ff5e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HADOOP-12787) KMS SPNEGO sequence does not work with WEBHDFS

2016-02-09 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12787:

Affects Version/s: 2.6.3

> KMS SPNEGO sequence does not work with WEBHDFS
> --
>
> Key: HADOOP-12787
> URL: https://issues.apache.org/jira/browse/HADOOP-12787
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 2.6.3
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> This was a follow up of my 
> [comments|https://issues.apache.org/jira/browse/HADOOP-12559?focusedCommentId=15059045=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15059045]
>  for HADOOP-10698.
> It blocks a delegation-token-based user (MR) using WEBHDFS to access the KMS 
> server for encrypted files. This might have worked in many cases before, as 
> JDK 7 aggressively did SPNEGO implicitly. However, this is not the case in 
> JDK 8, where we have seen many failures when using WEBHDFS with KMS and HDFS 
> encryption zones.
>  





[jira] [Created] (HADOOP-12787) KMS SPNEGO sequence does not work with WEBHDFS

2016-02-09 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-12787:
---

 Summary: KMS SPNEGO sequence does not work with WEBHDFS
 Key: HADOOP-12787
 URL: https://issues.apache.org/jira/browse/HADOOP-12787
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This was a follow up of my 
[comments|https://issues.apache.org/jira/browse/HADOOP-12559?focusedCommentId=15059045=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15059045]
 for HADOOP-10698.

It blocks a delegation-token-based user (MR) using WEBHDFS to access the KMS 
server for encrypted files. This might have worked in many cases before, as JDK 
7 aggressively did SPNEGO implicitly. However, this is not the case in JDK 8, 
where we have seen many failures when using WEBHDFS with KMS and HDFS 
encryption zones.

 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12785) [Handling exceptions] LdapGroupsMapping.getGroups() do not provide information about root cause

2016-02-09 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139324#comment-15139324
 ] 

Chris Nauroth commented on HADOOP-12785:


[~mbuzdov], thank you for reporting the issue.  Since you've already had to 
debug this in your environment, would you be interested in contributing a 
patch?  If so, instructions on how to contribute are here:

https://wiki.apache.org/hadoop/HowToContribute

(If you don't want to contribute a patch, then that's fine too, and someone 
else in the community will pick it up.)

> [Handling exceptions] LdapGroupsMapping.getGroups() do not provide 
> information about root cause
> ---
>
> Key: HADOOP-12785
> URL: https://issues.apache.org/jira/browse/HADOOP-12785
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
> Environment: _Operating system_: CentOS Linux 7 
> {color:gray}(7.1.1503){color}
> _Platform_: HDP 2.3.4.0, Ambari 2.1.2
>Reporter: Mukhadin Buzdov
>Priority: Minor
>  Labels: easyfix
>
> _CommunicationException_ and _NamingException_ are not logged in 
> _LdapGroupsMapping.getGroups()_.
> {code:title=LdapGroupsMapping.java|borderStyle=solid}
>   public synchronized List<String> getGroups(String user) throws IOException {
>     List<String> emptyResults = new ArrayList<String>();
> // ...
> try {
>   return doGetGroups(user);
> } catch (CommunicationException e) {
>   LOG.warn("Connection is closed, will try to reconnect");
> } catch (NamingException e) {
>   LOG.warn("Exception trying to get groups for user " + user + ": " + 
> e.getMessage());
>   return emptyResults;
> }
> //...
> return emptyResults;
>   }
> {code}
> {color:red}It is not possible to understand _LDAP_ level failures.{color}
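The fix being requested amounts to attaching the caught exception as the cause when logging, so the LDAP-level failure is not lost. A minimal sketch of that idea, using java.util.logging in place of Hadoop's logging wrapper, with a hypothetical always-failing doGetGroups stub standing in for the real LDAP lookup:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LdapLoggingSketch {
    private static final Logger LOG = Logger.getLogger("LdapLoggingSketch");

    // Hypothetical stand-in for the real LDAP lookup; always fails here so
    // the demo exercises the error path.
    static List<String> doGetGroups(String user) throws Exception {
        throw new Exception("simulated NamingException: LDAP server unreachable");
    }

    public static List<String> getGroups(String user) {
        try {
            return doGetGroups(user);
        } catch (Exception e) {
            // Attach the exception as the log record's throwable so the root
            // cause (message and stack trace) survives, instead of emitting
            // a bare one-line warning.
            LOG.log(Level.WARNING,
                    "Exception trying to get groups for user " + user, e);
        }
        return new ArrayList<String>();
    }

    public static void main(String[] args) {
        System.out.println(getGroups("alice").size()); // prints 0
    }
}
```

The same pattern applies with Commons Logging or SLF4J, whose warn methods accept a Throwable as the last argument.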



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12779) Example usage is not correct in "Transparent Encryption in HDFS" document.

2016-02-09 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139350#comment-15139350
 ] 

Akira AJISAKA commented on HADOOP-12779:


+1, the fix looks good to me.
The usage of "hadoop key" is not documented anywhere, so we should document it 
in a separate JIRA. In addition, we need to document that uppercase is not 
allowed in key names.

> Example usage is not correct in "Transparent Encryption in HDFS" document.
> --
>
> Key: HADOOP-12779
> URL: https://issues.apache.org/jira/browse/HADOOP-12779
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.7.2
>Reporter: Takashi Ohnishi
> Attachments: HADOOP-12779.1.patch
>
>
> It says
> {code}
> # As the normal user, create a new encryption key
> hadoop key create myKey
> {code}
> But, this actually fails with the below error.
> {code}
> $ hadoop key create myKey
> java.lang.IllegalArgumentException: Uppercase key names are unsupported: myKey
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:546)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:504)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKeyInternal(KMSClientProvider.java:677)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createKey(KMSClientProvider.java:685)
> at 
> org.apache.hadoop.crypto.key.KeyShell$CreateCommand.execute(KeyShell.java:483)
> at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:515)
> {code}
> Though I'm not sure why it is so, I think the document should be fixed to use 
> only lowercase in key names.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12746) ReconfigurableBase should update the cached configuration

2016-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140093#comment-15140093
 ] 

Hadoop QA commented on HADOOP-12746:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 29s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 51s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 51s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 8s 
{color} | {color:red} root: patch generated 56 new + 125 unchanged - 4 fixed = 
181 total (was 129) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 45s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 28s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 32s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 32s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 7s {color} | 
{color:black} 

[jira] [Commented] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections

2016-02-09 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139962#comment-15139962
 ] 

Zhe Zhang commented on HADOOP-12765:


Thanks for the work Min. Impressive results!

One quick comment is whether it's possible to code-share with current 
{{createDefaultChannelConnector}}.

> HttpServer2 should switch to using the non-blocking SslSelectChannelConnector 
> to prevent performance degradation when handling SSL connections
> --
>
> Key: HADOOP-12765
> URL: https://issues.apache.org/jira/browse/HADOOP-12765
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2, 2.6.3
>Reporter: Min Shen
>Assignee: Min Shen
> Attachments: HADOOP-12765.001.patch, blocking_1.png, blocking_2.png, 
> unblocking.png
>
>
> The current implementation uses the blocking SslSocketConnector which takes 
> the default maxIdleTime as 200 seconds. We noticed in our cluster that when 
> users use a custom client that accesses the WebHDFS REST APIs through https, 
> it could block all the 250 handler threads in NN jetty server, causing severe 
> performance degradation for accessing WebHDFS and NN web UI. Attached 
> screenshots (blocking_1.png and blocking_2.png) illustrate that when using 
> SslSocketConnector, the jetty handler threads are not released until the 200 
> seconds maxIdleTime has passed. With a sufficient number of SSL connections, 
> this issue could render the NN HttpServer entirely unresponsive.
> We propose to use the non-blocking SslSelectChannelConnector as a fix. We 
> have deployed the attached patch within our cluster, and have seen 
> significant improvement. The attached screenshot (unblocking.png) further 
> illustrates the behavior of NN jetty server after switching to using 
> SslSelectChannelConnector.
> The patch further disables the SSLv3 protocol on the server side to preserve 
> the spirit of HADOOP-11260.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12746) ReconfigurableBase should update the cached configuration

2016-02-09 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140138#comment-15140138
 ] 

Xiaobing Zhou commented on HADOOP-12746:


v02 LGTM, +1. Thanks.

> ReconfigurableBase should update the cached configuration
> -
>
> Key: HADOOP-12746
> URL: https://issues.apache.org/jira/browse/HADOOP-12746
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-12476.02.patch, HADOOP-12746.01.patch
>
>
> {{ReconfigurableBase}} does not always update the cached configuration after 
> a property is reconfigured.
> The older {{#reconfigureProperty}} does so however {{ReconfigurationThread}} 
> does not.
> See discussion on HDFS-7035 for more background.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12782) Faster LDAP group name resolution with ActiveDirectory

2016-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140036#comment-15140036
 ] 

Hadoop QA commented on HADOOP-12782:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
16s {color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 50s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 15s 
{color} | {color:red} root-jdk1.8.0_72 with JDK v1.8.0_72 generated 2 new + 737 
unchanged - 2 fixed = 739 total (was 739) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 27m 10s 
{color} | {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 2 new + 733 
unchanged - 2 fixed = 735 total (was 735) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 33 unchanged - 2 fixed = 33 total (was 35) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 9s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 49s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 35s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK 

[jira] [Commented] (HADOOP-10865) Add a Crc32 chunked verification benchmark for both directly and non-directly buffer cases

2016-02-09 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140325#comment-15140325
 ] 

Masatake Iwasaki commented on HADOOP-10865:
---

{{Crc32PerformanceTest(8, 3, NativeCodeLoader.isNativeCodeLoaded())}} would be 
better. We don't need {{GenericTestUtils#assumeInNativeProfile}} in this case.

> Add a Crc32 chunked verification benchmark for both directly and non-directly 
> buffer cases
> --
>
> Key: HADOOP-10865
> URL: https://issues.apache.org/jira/browse/HADOOP-10865
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10865.002.patch, c10865_20140717.patch
>
>
> Currently, it is not easy to compare Crc32 chunked verification 
> implementations.  Let's add a benchmark.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12769) Hadoop maven plugin for msbuild compilations

2016-02-09 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140263#comment-15140263
 ] 

Vinayakumar B commented on HADOOP-12769:


Thanks [~cnauroth] for the suggestions.

bq. Yes, CompileMojo was only implemented for *nix initially. However, I think 
we could achieve what we need by making changes only in CompileMojo so that it 
can call either make or msbuild, and then we wouldn't need MSBuildMojo. Doing 
it this way would minimize code divergence in the pom.xml files for the native 
vs. native-win profiles.
I will update it accordingly.

bq. I don't think the MSBuildMojo logic for checksumming the source to detect 
changes is necessary. The call to msbuild in hadoop-hdfs-native-client is 
already very fast if the source hasn't changed. The checksum comparison would 
completely prevent the call to msbuild, but I don't think that degree of 
optimization would save significant time.
Okay.



> Hadoop maven plugin for msbuild compilations
> 
>
> Key: HADOOP-12769
> URL: https://issues.apache.org/jira/browse/HADOOP-12769
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HADOOP-12769-01.patch, HADOOP-12769-02.patch
>
>
> Currently, all Windows native library generation using msbuild happens for 
> every invocation of 'mvn install'.
> The idea is to make this a plugin and make the build incremental, i.e. 
> rebuild only when any of the sources have changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10865) Add a Crc32 chunked verification benchmark for both directly and non-directly buffer cases

2016-02-09 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140314#comment-15140314
 ] 

Masatake Iwasaki commented on HADOOP-10865:
---

The patch still applies and I agree to add the benchmark. Just a few comments.

{noformat}
Running org.apache.hadoop.util.TestDataChecksum
Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 10.174 sec <<< 
FAILURE! - in org.apache.hadoop.util.TestDataChecksum
testCrc32(org.apache.hadoop.util.TestDataChecksum)  Time elapsed: 0.11 sec  <<< 
FAILURE!
java.lang.AssertionError: NativeCrc32 is not available
at 
org.apache.hadoop.util.Crc32PerformanceTest.(Crc32PerformanceTest.java:122)
at 
org.apache.hadoop.util.TestDataChecksum.testCrc32(TestDataChecksum.java:203)
{noformat}

The {{TestDataChecksum#testCrc32}} case should be skipped in non-native profile 
by calling {{GenericTestUtils#assumeInNativeProfile}}.


{code}
  public void testCrc32() throws Exception {
new Crc32PerformanceTest(8, 3, true).run();
new Crc32PerformanceTest(8, 3, false).run();
  }
{code}

Do we need both? Running only {{Crc32PerformanceTest(8, 3, true)}} seems to be 
OK, and it makes the test time shorter.


> Add a Crc32 chunked verification benchmark for both directly and non-directly 
> buffer cases
> --
>
> Key: HADOOP-10865
> URL: https://issues.apache.org/jira/browse/HADOOP-10865
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10865.002.patch, c10865_20140717.patch
>
>
> Currently, it is not easy to compare Crc32 chunked verification 
> implementations.  Let's add a benchmark.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm

2016-02-09 Thread Rashmi Vinayak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140359#comment-15140359
 ] 

Rashmi Vinayak commented on HADOOP-11828:
-

Hi [~jack_liuquan], [~drankye], [~zhz],

I am super excited to see this being resolved! Thank you all for the efforts 
you put in. I agree with [~zhz] that it would be good to get some performance 
results comparing RS and Hitchhiker based on the new implementation. Such 
results would guide enterprises that are considering erasure coding, leading to 
a greater impact from this effort and from HDFS-EC in general, as they will 
learn about this more efficient EC option.

> Implement the Hitchhiker erasure coding algorithm
> -
>
> Key: HADOOP-11828
> URL: https://issues.apache.org/jira/browse/HADOOP-11828
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Zhe Zhang
>Assignee: jack liuquan
> Fix For: 3.0.0
>
> Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 
> 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, 
> HADOOP-11828-hitchhikerXOR-V4.patch, HADOOP-11828-hitchhikerXOR-V5.patch, 
> HADOOP-11828-hitchhikerXOR-V6.patch, HADOOP-11828-hitchhikerXOR-V7.patch, 
> HADOOP-11828-v8.patch, HDFS-7715-hhxor-decoder.patch, 
> HDFS-7715-hhxor-encoder.patch
>
>
> [Hitchhiker | 
> http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
> a new erasure coding algorithm developed as a research project at UC 
> Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
> during data reconstruction while retaining the same storage capacity and 
> failure tolerance capability as RS codes. This JIRA aims to introduce 
> Hitchhiker to the HDFS-EC framework, as one of the pluggable codec algorithms.
> The existing implementation is based on HDFS-RAID. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12764) Increase default value of KMX maxHttpHeaderSize and make it configurable

2016-02-09 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140449#comment-15140449
 ] 

Aaron T. Myers commented on HADOOP-12764:
-

+1, the patch looks good to me.

Thanks for taking this on, Zhe.

> Increase default value of KMX maxHttpHeaderSize and make it configurable
> 
>
> Key: HADOOP-12764
> URL: https://issues.apache.org/jira/browse/HADOOP-12764
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
>Priority: Minor
> Attachments: HADOOP-12764.00.patch
>
>
> The Tomcat default value of {{maxHttpHeaderSize}} is 4096, which is too low 
> for certain Hadoop workloads. This JIRA proposes to change it to 65536 in 
> {{server.xml}} and make it configurable.
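For illustration, the proposed server.xml change would look roughly like the fragment below. maxHttpHeaderSize is a standard Tomcat HTTP connector attribute; the port and protocol shown here are placeholders, not taken from the patch:

```xml
<!-- Hypothetical sketch: raise the HTTP header size limit from Tomcat's
     4096-byte default to 65536 bytes on the KMS connector. -->
<Connector port="16000" protocol="HTTP/1.1"
           maxHttpHeaderSize="65536"/>
```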



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10865) Add a Crc32 chunked verification benchmark for both directly and non-directly buffer cases

2016-02-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140391#comment-15140391
 ] 

Hadoop QA commented on HADOOP-10865:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 32s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 11s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 3 
new + 132 unchanged - 7 fixed = 135 total (was 139) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 11 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 17s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 27s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12746486/HADOOP-10865.002.patch
 |
| JIRA Issue | HADOOP-10865 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 55c0f8f9ce19 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12788) OpensslAesCtrCryptoCodec should log which random number generator is used.

2016-02-09 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140474#comment-15140474
 ] 

Uma Maheswara Rao G commented on HADOOP-12788:
--

{code}
+  if (LOG.isDebugEnabled()) {
+LOG.debug("Use " + klass.getName() + " as random number generator.");
+  }
{code}
I think the message should be like the one below:
 "Using " + klass.getName() + " as random number generator." 

Otherwise +1, thanks for improving logging.

> OpensslAesCtrCryptoCodec should log which random number generator is used.
> --
>
> Key: HADOOP-12788
> URL: https://issues.apache.org/jira/browse/HADOOP-12788
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-12788.001.patch, HADOOP-12788.002.patch
>
>
> {{OpensslAesCtrCryptoCodec}} uses a random number generator, for example 
> {{OsSecureRandom}}, {{OpensslSecureRandom}}, or {{SecureRandom}}, but it is 
> not clear which one will be loaded at runtime.
> It would help debugging if we printed a debug message stating which one is 
> loaded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12769) Hadoop maven plugin for msbuild compilations

2016-02-09 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139278#comment-15139278
 ] 

Chris Nauroth commented on HADOOP-12769:


bq. However, I believe, making cmake maven plugin support windows might be 
required.

Yes, {{CompileMojo}} was only implemented for *nix initially.  However, I think 
we could achieve what we need by making changes only in {{CompileMojo}} so that 
it can call either {{make}} or {{msbuild}}, and then we wouldn't need 
{{MSBuildMojo}}.  Doing it this way would minimize code divergence in the 
pom.xml files for the {{native}} vs. {{native-win}} profiles.

I don't think the {{MSBuildMojo}} logic for checksumming the source to detect 
changes is necessary.  The call to {{msbuild}} in hadoop-hdfs-native-client is 
already very fast if the source hasn't changed.  The checksum comparison would 
completely prevent the call to {{msbuild}}, but I don't think that degree of 
optimization would save significant time.

> Hadoop maven plugin for msbuild compilations
> 
>
> Key: HADOOP-12769
> URL: https://issues.apache.org/jira/browse/HADOOP-12769
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HADOOP-12769-01.patch, HADOOP-12769-02.patch
>
>
> Currently, all Windows native library generation using msbuild happens for 
> every invocation of 'mvn install'.
> The idea is to make this a plugin and make the build incremental, i.e. 
> rebuild only when any of the sources have changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)